kt.academy

Blog with mission to simplify Kotlin learning

Non-graph strategies, and when to use them in AI agents

KOOG-ing AI Agents With The Right Sauce

11 min read · Oct 16, 2025

In case you missed my previous articles, I explained Why JetBrains’ Koog is The Most Advanced JVM Framework For Building AI Agents and How to design a flexible, graph-based strategy using it.

So… do I always have to build graphs to make my agents work correctly?

Nope! Not at all!

First of all, many use cases can be covered with a simple LLM-loop with tools. If you’ve got:

  • A clear, scoped task definition
  • A limited set of required tools
  • A well-written prompt
  • Access to an LLM that doesn’t hallucinate too much…

…then congratulations! You’ve won the AI agent lottery!

Just grab Koog’s basic agent and go — no strategy needed, it just works:

val agent = AIAgent(
    toolRegistry = ToolRegistry {
        tool(::openMenu)
        tool(::changeSettings)
        tool(::openAccount)
        tool(::showNotification)
    },
    systemPrompt = "You are an AI assistant automating simple menu actions in the current browser. Listen to the user's commands and help the user with their tasks. You can open the menu, show notifications to guide the user through the process (feel free to ask questions using the notification system), and you can also modify some browser settings.",
    llmModel = OpenAIModels.Chat.GPT4o,
    promptExecutor = simpleOpenAIExecutor("API_KEY"),
)

// Run the agent with the user's input:
agent.run("I want to switch the user and set Google as my default search engine")

Dead simple, right?

Looks easy! But why do I even need anything more complex, then?

Well... if your main task is helping users modify browser settings—honestly, you can probably stop here. Really. Ship it. Go grab a coffee ☕

But here's the thing: if you want to build something that reliably handles complex tasks, you'll probably need more control over your agent's behavior.

LLMs are incredible tools. Amazing tools! But they come with a catch: their ability to understand text and images, and their creativity, are limited by their unpredictable behavior. Today, the same model might craft a perfect market strategy that accounts for all your competitors. But tomorrow it forgets half of them and stops writing mid-paragraph because it got distracted by… who knows what… maybe OpenAI was preparing a patch release at the time?

Almost the same as humans, right?

The Team Management Approach to AI Agents

Think about how you'd manage a team of human specialists:
First — You identify their strengths and weaknesses. (Sarah's great at strategy, Bob excels at details, Lisa catches every bug.)

Second — You split tasks between them (usually in Jira tickets, Google docs, or YouTrack if you're fancy). Clear responsibilities, requirements, timelines, and expected outcomes.

Third — You make sure they work as a team, not as isolated hermits. They share knowledge, benefit from each other, and see the common goal. You’d organize team syncs, planning sessions, shared knowledge bases and more things together.

Finally — You need the final product to actually meet customer expectations. That's why you have QAs who verify the outcome and either give their blessing or send it back with "feedback".

That's way more efficient than having one person do everything, right? Everyone stays focused on their lane while the team shares knowledge and pushes toward the common goal. That's your team strategy!

From Human Teams to AI Workflows

This team analogy becomes particularly useful if you think of complex AI workflows. To make your AI agents more resilient and predictable, you need to design a strategy that mitigates LLM weaknesses by:

  • splitting the work into smaller, focused steps
  • delegating to different LLMs with different strengths (just like your team: someone's a great architect, another's amazing at communication, and someone else is your QA rockstar)
  • quality-checking to meet your standards (remember the QA part, right?)

The killer advantage of multi-step AI agent strategies over a bunch of independent agents is shared context!

Each step of the strategy is not fully independent: even while running on a different model, it shares the context (i.e. knowledge and operation history) as well as the final goal with the others. It's like a well-oiled team that actually knows what they're building. Way more efficient than several independent freelancers working on puzzle pieces without seeing the final product.
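To make the shared-context idea concrete, here is a tiny framework-free sketch (none of these names are Koog APIs — `SharedContext` and `step` are purely illustrative): each "specialist" reads and appends to a common history, so later steps can build on earlier decisions.

```kotlin
// Illustrative, framework-free sketch of shared context between steps.
// SharedContext and step are made-up names, not Koog APIs.

data class SharedContext(
    val goal: String,
    val history: MutableList<String> = mutableListOf()
)

fun step(ctx: SharedContext, role: String, work: (SharedContext) -> String): String {
    val output = work(ctx)          // the "specialist" does its part...
    ctx.history += "$role: $output" // ...and shares the result with the team
    return output
}

fun main() {
    val ctx = SharedContext(goal = "launch a product page")
    step(ctx, "architect") { "outline for '${it.goal}'" }
    // The writer sees the architect's output through the shared history:
    val copy = step(ctx, "writer") { "text based on: ${it.history.first()}" }
    println(copy)
}
```

Independent agents, by contrast, would each start from an empty history and never see each other's output.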

Cool team analogy, but how do I code this in Koog? And please, no graphs!

As promised, you can now (since Koog 0.5.0) program all that behavior using… plain code! Almost!

Non-Graph Strategies. Program Anything — Koog Will Help You!

You can define custom strategies as plain functions. For that reason, they're called functionalStrategy in Koog. You can pass one to the strategy parameter of your AIAgent:

val agent = AIAgent(
    toolRegistry = ToolRegistry {
        tool(::openMenu)
        tool(::changeSettings)
        tool(::openAccount)
        tool(::showNotification)
    },
    systemPrompt = "You are an AI assistant automating simple menu actions in the current browser. Listen to the user's commands and help the user with their tasks. You can open the menu, show notifications to guide the user through the process (feel free to ask questions using the notification system), and you can also modify some browser settings.",
    llmModel = OpenAIModels.Chat.GPT4o,
    promptExecutor = simpleOpenAIExecutor("API_KEY"),

    // Pass the strategy:
    strategy = functionalStrategy<String, String> { input -> /* strategy */ },
)

// Run the agent with the user's input:
agent.run("I want to switch the user and set Google as my default search engine")

And that’s how you can build the strategy:

val strategy = functionalStrategy<String, String> { input ->
    // Request multiple LLM responses
    var responses = requestLLMMultiple(input)

    // There's a bunch of pre-defined extensions to work with tools:
    while (responses.containsToolCalls()) {
        val tools = extractToolCalls(responses)

        // There's also an extension to get the token usage:
        if (latestTokenUsage() > 100500) {
            // Koog's advanced history compression is also available here!
            compressHistory()
        }

        // And an extension for tool execution(s):
        val results = executeMultipleTools(tools)
        responses = sendMultipleToolResults(results)
    }

    // Result:
    responses.single().asAssistantMessage().content
}

Essentially, you can use almost everything available in the graph API here — just without having to think about graphs!
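The shape of that loop — request, execute tool calls, feed results back, repeat until the model answers in plain text — can be simulated without any framework. In this sketch, `MockLLM`, `ToolCall`, and `runLoop` are all made-up names standing in for the real model and Koog's extensions:

```kotlin
// Framework-free simulation of the agentic tool loop.
// MockLLM stands in for the model; nothing here is Koog API.

sealed interface Response
data class ToolCall(val name: String, val arg: String) : Response
data class Assistant(val content: String) : Response

class MockLLM {
    private var turn = 0
    // First turn: ask for a tool; afterwards: produce the final answer.
    fun respond(input: String): List<Response> =
        if (turn++ == 0) listOf(ToolCall("changeSettings", input))
        else listOf(Assistant("done: $input"))
}

fun runLoop(llm: MockLLM, userInput: String): String {
    var responses = llm.respond(userInput)
    // Keep looping while the model keeps requesting tools:
    while (responses.any { it is ToolCall }) {
        val results = responses.filterIsInstance<ToolCall>()
            .map { "executed ${it.name}(${it.arg})" } // "execute" the tools
        responses = llm.respond(results.joinToString()) // feed results back
    }
    return (responses.single() as Assistant).content
}
```

The loop terminates as soon as the model's response contains no tool calls, which mirrors the `containsToolCalls()` condition above.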

How’s that possible?

Well, the magic is hidden in the receiver! Every function is defined as a Kotlin extension with AIAgentFunctionalContext as its receiver.

For example, here is the definition of requestLLMMultiple:

public suspend fun AIAgentFunctionalContext.requestLLMMultiple(message: String): List<Message.Response> {
    return llm.writeSession {
        updatePrompt {
            user(message)
        }

        requestLLMMultiple()
    }
}

And that’s how compressHistory is defined:

public suspend fun AIAgentFunctionalContext.compressHistory(
    strategy: HistoryCompressionStrategy = HistoryCompressionStrategy.WholeHistory,
    preserveMemory: Boolean = true
) {
    llm.writeSession {
        replaceHistoryWithTLDR(strategy, preserveMemory)
    }
}

Straightforward, right?

These are almost the same implementations as those of the corresponding graph nodes (nodeLLMRequestMultiple and nodeLLMCompressHistory in our case).
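To get a feel for what "replace history with a TL;DR" means conceptually, here is a toy, framework-free version. The real summary would come from an LLM; here a trivial string stands in, and `compress` is a made-up name, not Koog's API:

```kotlin
// Toy version of history compression: once the history grows past a limit,
// replace it with a single summary message. Illustrative only, not Koog API.

fun compress(history: List<String>, maxMessages: Int): List<String> =
    if (history.size <= maxMessages) history
    else listOf(
        "TL;DR of ${history.size} messages: " +
            history.joinToString(limit = 3) { it.take(20) }
    )

fun main() {
    val long = List(10) { "message number $it" }
    println(compress(long, maxMessages = 5).single())
}
```

The payoff is the same as in Koog: the prompt stays under the token budget while a condensed trace of past work remains visible to the model.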

Now that we know the basics, let's define a really custom strategy.

Let's build a spaceship!!!

I'm not joking! Let's program an agent that builds a spaceship in several steps: first it creates the architecture, then it builds the engine and the body, and finally it runs some final checks before the ship is ready!

Let's start by defining the spaceship Architecture:

@Serializable
enum class MissionProfile { ORBITAL, LUNAR, INTERPLANETARY, LANDER }

@Serializable
data class SchemaDescriptor(
    val id: String,
    val version: String = "1.0.0",
    val format: String = "json",
    val url: String? = null
)

@Serializable
data class Constraints(
    @property:LLMDescription("The maximum G-load the vehicle can withstand.")
    val maxGLoad: Double = 3.0,
    @property:LLMDescription("Maximum radiation exposure (in sieverts) of the vehicle.")
    val maxRadiationSv: Double = 0.1,
    @property:LLMDescription("Maximum operating temperature (in degrees Celsius) of the vehicle.")
    val maxOperatingTempC: Int = 120
)

@Serializable
data class Architecture(
    val name: String,
    val schema: SchemaDescriptor,
    val version: String = "1.0",
    val missionProfile: MissionProfile = MissionProfile.ORBITAL,
    val constraints: Constraints = Constraints()
)

Now, let's think of its parts: Engine and Body:

@Serializable
enum class FuelType { CHEMICAL, ION, NUCLEAR, ELECTRIC }

@Serializable
enum class Material { ALUMINUM_LITHIUM, TITANIUM, CARBON_COMPOSITE, STAINLESS_STEEL }

@Serializable
enum class ComponentStatus { DESIGNED, BUILT, TESTED, QUALIFIED }

@Serializable
data class Engine(
    val name: String,
    val model: String = "X-1",
    val fuel: FuelType = FuelType.CHEMICAL,
    val maxThrustKN: Double = 0.0,
    val specificImpulseS: Int = 0,
    val massKg: Double = 0.0,
    val powerRequirementKW: Double? = null,
    val status: ComponentStatus = ComponentStatus.DESIGNED
)

@Serializable
data class Body(
    val name: String,
    val hullMaterial: Material = Material.ALUMINUM_LITHIUM,
    val dryMassKg: Double = 0.0,
    val maxCargoKg: Double = 0.0,
    val crewCapacity: Int = 0,
    val heatShieldRating: String? = null,
    val status: ComponentStatus = ComponentStatus.DESIGNED
)

Hint: don't forget to mark all your data classes as Serializable, and provide an LLMDescription for non-obvious fields. This guides the LLM through the type structure it needs to generate.

And the final spacecraft class:

@Serializable
data class Spacecraft(
    val engine: Engine,
    val body: Body,
    val architecture: Architecture,
    val serial: String = "<serial>",
    val notes: String? = null
)

What's left? Let's also define some quality-assurance classes that we're going to use later:

@Serializable
data class QAReport(val correct: Boolean, val feedback: String) {
    val feedbackIfIncorrect: String? = if (correct) null else feedback
}

@Serializable
data class FullQAReport(
    @property:LLMDescription("The report for the engine component.")
    val engineReport: QAReport,
    @property:LLMDescription("The report for the body component.")
    val bodyReport: QAReport,
    @property:LLMDescription("The report about the architecture of the spacecraft.")
    val architectureReport: QAReport
) {
    val isCorrect: Boolean = engineReport.correct && bodyReport.correct && architectureReport.correct
}

Now we can define the AI agent that will build the spacecraft! Let's give it multiple LLM connections: OpenAI, Anthropic, Google, and Ollama:

val agent = AIAgent(
    toolRegistry = ToolRegistry { /* all your tools */ },
    systemPrompt = "You are an expert aerospace manufacturer",
    llmModel = OpenAIModels.Chat.GPT4o,
    // Let's connect to multiple LLM providers at once:
    promptExecutor = MultiLLMPromptExecutor(
        OpenAILLMClient("OPENAI_KEY"),
        AnthropicLLMClient("ANTHROPIC_KEY"),
        GoogleLLMClient("GOOGLE_KEY"),
        OllamaClient()
    ),
    // The strategy:
    strategy = functionalStrategy<String, Spacecraft> { input -> /* strategy */ },
)

// Run the agent with the user's input:
agent.run("I need the starship that can travel to Mars!")

And now we'll use a pre-defined method called subtask (PR) to program our strategy. It passes the given subset of tools to a given model in order to solve the given task using an inner agentic loop. Each subtask is also defined by its domain model: the input data type and the output.

Let’s define the required steps (subtasks) that are essential for building a spacecraft:

First — we’re going to design the architecture:

suspend fun AIAgentFunctionalContext.designArchitecture(
    input: String,
    additionalInfo: String? = null
): Architecture = subtask<String, Architecture>(
    input = input,
    tools = architecturePlanningTools, // Some relevant tools subset
    llmModel = OpenAIModels.Chat.GPT5
) {
    "Create the architecture for the following machinery: $input" +
        (additionalInfo?.let { " Additional feedback: $additionalInfo" } ?: "")
}

Second and Third — let’s build an engine and a body:

suspend fun AIAgentFunctionalContext.buildEngine(
    architecture: Architecture,
    additionalInfo: String? = null
): Engine = subtask<Architecture, Engine>(
    input = architecture,
    tools = engineeringTools, // Some relevant tools subset
    llmModel = AnthropicModels.Sonnet_4_5
) {
    "Create the engine for the given architecture: $it" +
        (additionalInfo?.let { " Additional feedback: $additionalInfo" } ?: "")
}

suspend fun AIAgentFunctionalContext.buildBody(
    architecture: Architecture,
    additionalInfo: String? = null
): Body = subtask<Architecture, Body>(
    input = architecture,
    tools = bodyDesignTools, // Some relevant tools subset
    llmModel = GoogleModels.Gemini2_0Flash
) {
    "Create the body for the given architecture: $it" +
        (additionalInfo?.let { " Additional feedback: $additionalInfo" } ?: "")
}

And now let’s finally define our strategy:

strategy = functionalStrategy<String, Spacecraft> { input ->
    var qaReport: FullQAReport? = null
    var product: Spacecraft? = null

    // Let's keep re-building the spacecraft until it passes the QA
    while (qaReport?.isCorrect != true) {
        // First -- design the architecture
        val architecture = designArchitecture(
            input = input,
            additionalInfo = qaReport?.architectureReport?.feedbackIfIncorrect
        )

        // Second/Third -- let's build the engine and the body:
        val engine = buildEngine(
            architecture = architecture,
            additionalInfo = qaReport?.engineReport?.feedbackIfIncorrect
        )
        val body = buildBody(
            architecture = architecture,
            additionalInfo = qaReport?.bodyReport?.feedbackIfIncorrect
        )

        // Then -- let's assemble the final product (spacecraft)!
        // Assembly is a simple serializable holder:
        // @Serializable data class Assembly(val engine: Engine, val body: Body)
        product = subtask<Assembly, Spacecraft>(
            input = Assembly(engine, body),
            tools = assemblyTools, // Some relevant tools subset
            llmModel = OllamaModels.Meta.LLAMA_4
        ) {
            "Assemble the product: $it"
        }

        // And finally -- we have to verify that the product works:
        qaReport = subtask<Spacecraft, FullQAReport>(product, tools = qaTools) {
            "Verify the product is built correctly: $it"
        }
    }

    // The result of the strategy is the working product:
    product!!
}
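The essence of this strategy is the rebuild-until-QA-passes control flow, with per-component feedback routed back into each subtask. That flow can be exercised with stubbed subtasks in plain Kotlin — here the "engine builder" deliberately fails QA once and succeeds after receiving feedback (all names below are illustrative stand-ins, not Koog APIs):

```kotlin
// Framework-free sketch of the rebuild-until-QA-passes loop.
// The subtasks are stubs; nothing here is Koog API.

data class Report(val correct: Boolean, val feedback: String?)

fun buildEngine(feedback: String?): String =
    if (feedback == null) "weak engine" else "engine fixed per: $feedback"

fun qa(engine: String): Report =
    if (engine == "weak engine") Report(false, "increase thrust")
    else Report(true, null)

fun buildLoop(): Pair<String, Int> {
    var report: Report? = null
    var engine = ""
    var iterations = 0
    while (report?.correct != true) {
        iterations++
        engine = buildEngine(feedback = report?.feedback) // feedback routed back in
        report = qa(engine)
    }
    return engine to iterations
}

fun main() {
    val (engine, iterations) = buildLoop()
    println("$engine after $iterations iterations")
}
```

Note that in a real strategy you would also want an iteration cap, so a QA step that can never be satisfied doesn't loop forever.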

Cool! So I can program anything I want with Koog even without any graphs!

You can prototype a strategy of any complexity with plain code, yes! And you can even benefit from most of Koog's advanced features, including LLM switching, history compression, subtasks, and automatic context management, but…

There is always one nasty “BUT”, right?

Functional (non-graph) strategies do not support Persistence: it's not possible to checkpoint arbitrary code without structure. They are great tools, really. But once you want to deploy long-running AI agents to the cloud, you have to think about persistence and rollbacks to make your agents fault-tolerant.

Graph-Based Strategies — the Ultimate Ingredient for Fault-Tolerance.

The graph-based structure unlocks powerful persistence capabilities for your AI agents. When you enable the Persistence feature, Koog doesn't just checkpoint the agent's context after each action—it captures the exact position within the state machine, preserving your algorithm's precise execution point.

This granular state management enables several key advantages:

  • Seamless Recovery: Restore AI agents exactly where they left off, even on a different machine. Simply persist the state to a database, and your agent can resume from any checkpoint across your infrastructure.
  • Time-Travel: Roll back your agent to an earlier checkpoint, and Koog eliminates unwanted side effects. Koog's explicit separation of side effects (tools) from the strategy graph is what makes this rollback possible.
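To make the "exact position within the state machine" idea concrete, here is a toy, framework-free sketch (none of these types are Koog's actual Persistence API): a checkpoint stores the current node plus the accumulated context, and resuming replays from that node instead of from the start.

```kotlin
// Illustrative sketch of graph checkpointing, not Koog's Persistence API:
// a checkpoint records the current node plus accumulated context, so a
// fresh process can resume mid-strategy instead of restarting from scratch.

enum class Node { DESIGN, BUILD, QA, DONE }

data class Checkpoint(val node: Node, val context: List<String>)

// Runs the "graph" from a given checkpoint and snapshots state after
// every transition.
fun runGraph(from: Checkpoint, store: MutableList<Checkpoint>): Checkpoint {
    var current = from
    while (current.node != Node.DONE) {
        val next = when (current.node) {
            Node.DESIGN -> Node.BUILD
            Node.BUILD -> Node.QA
            Node.QA -> Node.DONE
            Node.DONE -> Node.DONE
        }
        current = Checkpoint(next, current.context + "visited ${current.node}")
        store += current // persist after each transition
    }
    return current
}

fun main() {
    val store = mutableListOf<Checkpoint>()
    runGraph(Checkpoint(Node.DESIGN, emptyList()), store)
    // Resume from the first persisted checkpoint: DESIGN is not re-executed.
    val resumed = runGraph(store.first(), mutableListOf())
    println(resumed.context)
}
```

A functional strategy has no such node identity to record, which is exactly why arbitrary code can't be checkpointed this way.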

Another advantage of graphs is better visualization. If you install OpenTelemetry into your agent:

val agent = AIAgent(...) {
    install(OpenTelemetry) { ... }
}

The observability you gain depends on your strategy type:

  • Functional strategies produce flat event logs showing just sequential tool calls and LLM requests.
  • Graph-based strategies send nested event hierarchies that capture your complete execution flow, including all steps and subgraphs. This structured telemetry integrates seamlessly with observability platforms like Langfuse and W&B Weave, giving you deep insights into your agent's decision-making process.

The graph structure ultimately enables you to better reason about and debug your running AI agents, turning complex agent behaviors into transparent, traceable execution paths.
The recipe for success:

  1. Start simple. Use basic AI agents and check their limitations. You might realize that's all you need.
  2. Progress to building non-graph strategies. Experiment fast, code as you like, fail fast. Find the strategy that works for you.
  3. Once you've figured out a working strategy, refactor it into a graph-based one before deploying to production. Get the most fault tolerance and observability that the Koog framework can offer.

In the next series…

In the upcoming articles, we’ll dive deeper into cooking more advanced AI agents with Koog:

  • Island cruise between Kotlin and Java with a chef: Using Koog seamlessly from Java applications.
  • Observing the kitchen: Leveraging Langfuse and W&B Weave to debug and monitor your AI agents with Koog.
  • Full recipe for fault-tolerant AI agents: Building resilient systems with Koog’s Persistence feature in production.
  • Koog yourself a cheap meal: Cost-optimizing AI agents for production usage with the Koog framework.
  • and much more!

Follow me to learn how to take full advantage of Koog’s features and start building your AI applications on the JVM!

Try Koog today — cook your AI tomorrow!

Written by Vadim Briliantov

Technical Lead and creator of Koog at JetBrains. Over the past 8 years at JetBrains, he has worked on IDEs, frameworks, backend, and AI. Now leading the Koog project.
