Building a Coding Agent : Part 6 - Hello MCP

Anthropic introduced a new protocol that standardizes how tools are written for LLMs - the Model Context Protocol (MCP). In this post let's integrate our agent with the MCP client SDK. This will allow us to integrate with publicly available MCP servers and free us from writing tools for every service we want to interact with.

MCP Protocol Library

Let's integrate the official Java MCP client library, as there is no Clojure library for consuming MCP servers. The documentation for the library is at MCP Java SDK.

I have never done Java interop in Clojure before, so this was a nice learning experience. The sequence of steps for using an MCP server is as follows:

Create MCP Client ->
  Initialize the Server ->
    List Tools and hand them to the LLM ->
      Call tool as requested ->
        Close the session once done

To get an MCP server connection we need to create a client first. The library supports two types of clients: Sync and Async. To keep things simple, let's use a sync client.

To create a sync client we first need a transport for the server. Multiple types of transports are possible, but for a local server Stdio works fine. Let's create the transport first:

(defn- make-transport
  [program args]
  (-> (ServerParameters/builder program)
      (.args (into-array String args))
      (.build)
      (StdioClientTransport. (McpJsonMapper/getDefault))))

We pass in a program and its arguments for the transport to be created. In our case we will use the default filesystem server provided by Anthropic. So, the call will be:

(make-transport "npx" ["-y" "@modelcontextprotocol/server-filesystem" "."])

Once we have a transport available, we will use it to create a sync MCP client.

(defn make-client
  "Create a client to the MCP server specified"
  [program args]
  (-> (make-transport program args)
      McpClient/sync
      (.build)))

Now we can query this server for a list of tools using:

(defn get-tools
  "Get the list of tools exposed by the MCP server, in a format compatible with the OpenAI endpoint"
  [server]
  (let [result (.listTools server)
        tools (.tools result)]
    (mapv #(mcp-tool->openai-tool (tool-result->clj %)) tools)))
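
The helper mcp-tool->openai-tool is not shown above. A minimal sketch, assuming the parsed MCP tool map carries :name, :description and :inputSchema keys, could look like this:

(defn- mcp-tool->openai-tool
  "Wrap a parsed MCP tool descriptor in the OpenAI function-tool shape.
  Sketch only; assumes keys :name, :description and :inputSchema."
  [{:keys [name description inputSchema]}]
  {:type "function"
   :function {:name name
              :description description
              :parameters inputSchema}})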

We pass this list of tools to the LLM in addition to our own. Since we are using Anthropic's filesystem MCP server, we will unregister our file tools and keep only the shell execution tool. We now just need to change the invoke-tool function to call the MCP tool when no coded tool is available. The changed code looks like this:

(defn- invoke-tool [tools tc client]
  (let [name (get-in tc [:function :name])
        tool-fn (get tools name)
        args (get-in tc [:function :arguments])
        parsed-args (parse-json-arguments args)]
    (label "Tool" :green)
    (println name)
    (pprint/pprint parsed-args)
    (if (some? tool-fn)
      (tool-fn parsed-args)
      (agent.mcpclient/call-tool client name args))))

Our coded functions expect Clojure maps, but the MCP tools require JSON strings. To avoid round-tripping from JSON to Clojure and back, we pass the JSON string as-is to the MCP tool call. The MCP tool function is written as:

(defn call-tool
  "Invoke an MCP tool with the given params"
  [client tool params]
  (let [request (McpSchema$CallToolRequest. (McpJsonMapper/getDefault) tool params)]
    (tool-result->clj (.callTool client request))))

The tool result is parsed into Clojure data as below:

(defn- tool-result->json
  [tool-result]
  (let [mapper (McpJsonMapper/getDefault)]
    (try
      (.writeValueAsString mapper tool-result)
      (catch Exception e
        (throw (ex-info "Failed to serialize ToolResult to JSON" {:cause e}))))))

(defn- tool-result->clj
  [tool-result]
  (let [json (tool-result->json tool-result)]
    (cheshire/parse-string json true)))
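
Putting it all together, the lifecycle from the diagram at the start of this post looks roughly like the sketch below. I am assuming the initialize and closeGracefully methods on the sync client here; check the SDK docs for the exact names in your version.

(comment
  ;; Sketch: create, initialize, list tools, call a tool, close.
  (let [client (make-client "npx" ["-y" "@modelcontextprotocol/server-filesystem" "."])]
    (.initialize client)                        ;; handshake with the server
    (println (get-tools client))                ;; tools in OpenAI format
    ;; arguments are passed as a JSON string, as discussed above
    (println (call-tool client "list_directory" "{\"path\": \".\"}"))
    (.closeGracefully client)))                 ;; close the session once done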

Demo

Now let us look at a demo run using MCP tools for the filesystem.

Debug : Registering tool run_shell_command
Debug : Registering tool read_file
Debug : Registering tool read_text_file
Debug : Registering tool read_media_file
Debug : Registering tool read_multiple_files
Debug : Registering tool write_file
Debug : Registering tool edit_file
Debug : Registering tool create_directory
Debug : Registering tool list_directory
Debug : Registering tool list_directory_with_sizes
Debug : Registering tool directory_tree
Debug : Registering tool move_file
Debug : Registering tool search_files
Debug : Registering tool get_file_info
Debug : Registering tool list_allowed_directories
You : Which libraries are used in the current project?
Tool : list_directory
{:path "."}
Tool : read_text_file
{:path "deps.edn"}
LLM : I inspected deps.edn. Libraries (with versions) used in the project are:

Main :deps
- org.clojure/clojure 1.12.0
- net.clojars.wkok/openai-clojure 0.23.0
- cheshire/cheshire 6.1.0
- com.bunimo/clansi 1.3.1
- io.modelcontextprotocol.sdk/mcp 0.15.0

Aliases (extra deps)
- lambdaisland/kaocha 1.91.1392 (under :kaocha alias)
- io.github.borkdude/quickblog (specified by git sha: 68f22a3de58af9a48f8b5aa92ee329e495d246a7) (under :quickblog alias)
- org.babashka/cli 0.3.35 (under :quickblog alias)

If you want, I can search the code to show where each library is used.
You : Can you please run the tests
Tool : run_shell_command
{:command "clojure -M:kaocha"}
LLM : I ran the test suite. Output summary:

3 tests, 3 assertions, 0 failures.

Would you like the full test output or a breakdown of which tests ran?
You : Show the full test output
Tool : run_shell_command
{:command
 "clojure -M:kaocha --reporter documentation 2>&1 | sed -n '1,400p'"}
LLM : Here is the full test output (including the test names and the final summary):

--- unit (clojure.test) ---------------------------
agent.mcpclient-test
  mcp-tool->openai-tool-missing-name-test
  mcp-tool->openai-tool-test
  mcp-tool->openai-tool-defaults-test

3 tests, 3 assertions, 0 failures.
You : quit
Exiting...

The LLM is able to generate a summary of the libraries used in the project using the Filesystem MCP server. It also uses our run_shell_command tool to run the tests in the project.

The full listing of the code is here. In addition to Clojure you will also need Node installed on your machine to be able to run the MCP servers.

Published: 2025-11-09

Tagged: OpenAI LLM Clojure

Building a Coding Agent : Part 5 - Add more power to the agent

Now that we have a working agent, let's give it some more power. In the previous post some things were just not possible without user intervention. For example, we could not generate any Clojure/ClojureScript code, as it requires a folder structure for the sources to be recognized correctly. Let's fix that in this post. We could add piecemeal tools like create-folder and so on, but to give the LLM more agency we will arm it with a shell execution tool.

Run shell command

A simple run_shell_command tool can be implemented as below:

(defn
  ^{:tool {:name "run_shell_command"
           :description "Run the provided bash shell command and get the output"
           :parameters {:type "object"
                        :properties {:command {:type "string"
                                               :description "Command to be executed"}}
                        :required ["command"]}}}
  run-shell-command
  [{:keys [command]}]
  (try
    (let [{:keys [out err exit]} (shell/sh "bash" "-c" command)]
      (if (zero? exit)
        out
        (str "Error: " err)))
    (catch Exception e
      (str "Exception occurred: " (.getMessage e)))))

With this single tool our coding agent becomes a lot more agentic :D. Let us see a sample run with it.

Demo

You: Can you write a shadow-cljs based fizzbuzz script?

tool: edit_file {:path package.json, :old_str , :new_str {
  "name": "shadow-cljs-fizzbuzz",
  "version": "0.1.0",
  "private": true,
  "devDependencies": {
    "shadow-cljs": "^2.20.13"
  },
  "scripts": {
    "watch": "shadow-cljs watch fizz",
    "build": "shadow-cljs release fizz",
    "run": "shadow-cljs run -m fizz.core",
    "start": "npm run run"
  }
}
}
tool: edit_file {:path shadow-cljs.edn, :old_str , :new_str {:source-paths ["src"]
 :dependencies []
 :builds {
   :fizz {
     :target :node-script
     :main fizz.core
     :output-to "out/fizz.js"}
 }}
}
tool: edit_file {:path src/fizz/core.cljs, :old_str , :new_str (ns fizz.core)

(defn fizzbuzz [n]
  (cond
    (zero? (mod n 15)) "fizzbuzz"
    (zero? (mod n 3)) "fizz"
    (zero? (mod n 5)) "buzz"
    :else (str n)))

(defn -main [& args]
  (let [n (if (and args (seq args))
            (let [p (first args)]
              (js/parseInt p 10))
            100)]
    (doseq [i (range 1 (inc n))]
      (js/console.log (fizzbuzz i)))))
}
tool: edit_file {:path README.md, :old_str , :new_str Shadow-cljs FizzBuzz

This is a minimal shadow-cljs project that runs FizzBuzz on Node.js.

Usage:

1. Install dependencies:
   npm install

2. Run directly with shadow-cljs (uses local shadow-cljs):
   npm run run -- 100   # prints FizzBuzz up to 100

3. Build a release and run the generated script:
   npm run build
   node out/fizz.js 100

4. During development, you can watch:
   npm run watch

The main namespace is fizz.core and the -main function accepts an optional numeric argument (default 100).
}
tool: run_shell_command {:command mkdir -p src/fizz && ls -R}
tool: edit_file {:path src/fizz/core.cljs, :old_str , :new_str (ns fizz.core)

(defn fizzbuzz [n]
  (cond
    (zero? (mod n 15)) "fizzbuzz"
    (zero? (mod n 3)) "fizz"
    (zero? (mod n 5)) "buzz"
    :else (str n)))

(defn -main [& args]
  (let [n (if (and args (seq args))
            (let [p (first args)]
              (js/parseInt p 10))
            100)]
    (doseq [i (range 1 (inc n))]
      (js/console.log (fizzbuzz i)))))
}
tool: read_file {:path package.json}
tool: read_file {:path shadow-cljs.edn}
tool: read_file {:path src/fizz/core.cljs}
tool: run_shell_command {:command cat > package.json << 'EOF'
{
  "name": "shadow-cljs-fizzbuzz",
  "version": "0.1.0",
  "private": true,
  "devDependencies": {
    "shadow-cljs": "^2.20.13"
  },
  "scripts": {
    "watch": "shadow-cljs watch fizz",
    "build": "shadow-cljs release fizz",
    "run": "shadow-cljs run -m fizz.core",
    "start": "npm run run"
  }
}
EOF

cat > shadow-cljs.edn << 'EOF'
{:source-paths ["src"]
 :dependencies []
 :builds {
   :fizz {
     :target :node-script
     :output-to "out/fizz.js"
     :main fizz.core
   }
 }
}
EOF

cat > src/fizz/core.cljs << 'EOF'
(ns fizz.core)

(defn fizzbuzz [n]
  (cond
    (zero? (mod n 15)) "fizzbuzz"
    (zero? (mod n 3)) "fizz"
    (zero? (mod n 5)) "buzz"
    :else (str n)))

(defn -main [& args]
  (let [n (if (and args (seq args))
            (let [p (first args)]
              (js/parseInt p 10))
            100)]
    (doseq [i (range 1 (inc n))]
      (js/console.log (fizzbuzz i)))))

;; When compiled as a script, shadow-cljs will call -main if run with -m
EOF

cat > README.md << 'EOF'
Shadow-cljs FizzBuzz

This is a minimal shadow-cljs project that runs FizzBuzz on Node.js.

Usage:

1. Install dependencies:
   npm install

2. Run directly with shadow-cljs (uses local shadow-cljs):
   npx shadow-cljs run -m fizz.core -- 100   # prints FizzBuzz up to 100
   # or using the npm script:
   npm run run -- 100

3. Build a release and run the generated script:
   npm run build
   node out/fizz.js 100

4. During development, you can watch:
   npm run watch

The main namespace is fizz.core and the -main function accepts an optional numeric argument (default 100).
EOF

ls -la

cat src/fizz/core.cljs

cat shadow-cljs.edn

cat package.json
}
LLM: Yes — I created a minimal shadow-cljs FizzBuzz project for Node.js. Files created and what they do:

- package.json — npm scripts to run, watch, and build using shadow-cljs.
- shadow-cljs.edn — shadow-cljs build config (target :node-script, main fizz.core, output out/fizz.js).
- src/fizz/core.cljs — the FizzBuzz implementation and -main entry point.
- README.md — usage instructions.

Core code (src/fizz/core.cljs):
(ns fizz.core)

(defn fizzbuzz [n]
  (cond
    (zero? (mod n 15)) "fizzbuzz"
    (zero? (mod n 3)) "fizz"
    (zero? (mod n 5)) "buzz"
    :else (str n)))

(defn -main [& args]
  (let [n (if (and args (seq args))
            (let [p (first args)]
              (js/parseInt p 10))
            100)]
    (doseq [i (range 1 (inc n))]
      (js/console.log (fizzbuzz i)))))

Quick usage:

1. Install dependencies:
   npm install

2. Run directly (using local shadow-cljs):
   npx shadow-cljs run -m fizz.core -- 100
   or via npm script:
   npm run run -- 100

3. Build and run the generated script:
   npm run build
   node out/fizz.js 100

4. Watch during development:
   npm run watch

Example output for n = 15:
1
2
fizz
4
buzz
fizz
7
8
fizz
buzz
11
fizz
13
14
fizzbuzz

If you want, I can:
- change argument parsing to be more robust,
- add tests,
- or produce a version that prints to stdout using cljs.node/console equivalents. Which would you prefer?

With the run_shell_command tool available, the LLM tries all sorts of things using various Unix commands. Once done with the code, we can also ask it to clean up the repo and return it to a pristine state.

You: Go back to a clean state. Please preserve llm.edn

tool: run_shell_command {:command rm -f package.json shadow-cljs.edn README.md src/fizz/core.cljs || true

echo "Removed created files (if present)."

ls -la

echo "\nContents of llm.edn:"
cat llm.edn || true
}
LLM: Done — I removed the files I created (package.json, shadow-cljs.edn, README.md, and src/fizz/core.cljs) and left the repository otherwise intact. I preserved llm.edn as requested.
You: Empty folder exist.
tool: run_shell_command {:command ls -la src || true
rmdir src/fizz || true
ls -la src || true
}
tool: run_shell_command {:command rmdir src/fizz || true
ls -la src || true
}
LLM: Thanks — I removed the empty src/fizz directory so there are no leftover empty folders. The project has been returned to the earlier clean state and llm.edn preserved.
You: Is it a clean git state?
tool: run_shell_command {:command git status --porcelain --untracked-files=all || true

echo "--- git branch ---"

git branch --show-current || true

echo "--- last commits ---"

git --no-pager log --oneline -n 5 || true
}
LLM: Yes — your repo is in a clean git state.

What I checked:
- git status --porcelain returned nothing (no modified or untracked files).
- Current branch: main
- Latest commit: d5a5962 Add a simple FizzBuzz run

If you want to verify locally yourself, run:
- git status
- git status --porcelain   (empty output = clean)
- git branch --show-current
- git log --oneline -n 5

If you ever need to force-reset to the last commit (discard local changes/untracked files), you can use (be careful — these destroy local changes):
- git reset --hard HEAD
- git clean -fd    (add -x to remove ignored files too)

Would you like me to make any git changes (commit, create a branch, or revert something)?

And that is it. Anything you can do with a coding agent is possible with this code. Let's ask the LLM how large the agent itself is. Skipping the full output, this is what the LLM reported:

LLM: Summary of the Clojure/CLJS sources in this repo (analysis run on src/):

- Total Clojure source files found: 2
  - src/agent/core.clj — 135 lines
  - src/agent/tools.clj — 101 lines
- Total lines (all files): 236
- Top-level def-forms found: 20 (all are defn/defn-; no defmacro, no plain top-level def)
  - defn / defn- total: 20
  - defn- (private fns): 12 (all in core.clj)
  - public defn: 8 (3 in core.clj, 5 in tools.clj)
- No macros detected (defmacro = 0).

Only 236 lines for a fully operational coding agent. Amazing, what a world we live in.

Disclaimer: This code is open to all sorts of security attacks; I would not recommend running it without safeguards in place.

Published: 2025-11-02

Tagged: OpenAI LLM Clojure

Building a Coding Agent : Part 4 - A basic usable coding agent

We will utilize this post to get an understanding of how easy it is to build a coding agent. You will see that there is not much required to build one; all the magic is in the LLM. It will make you question why all these coding agent startups have such big valuations when it is so easy for a single developer to build one.

Write the file editing tools

We will build a few file editing tools that allow the LLM to manipulate files. This is what unlocks the coding agent behaviour.

Read file

The first tool we will provide the LLM is a file reading tool. It can be coded and annotated for the LLM to discover as below:

(defn
  ^{:tool {:name "read_file"
           :description "Read the contents of a given relative file path. Use this when you want to see what's inside a file. Do not use this with directory names."
           :parameters {:type "object"
                        :properties {:path {:type "string"
                                            :description "The relative path of a file in the working directory."}}
                        :required ["path"]}}}
  read-file
  [{:keys [path]}]
  (try
    (slurp path)
    (catch Exception e (str "Caught exception:" (.getMessage e)))))
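
The agent discovers these annotated functions by scanning the tools namespace for :tool metadata. A minimal sketch of that discovery (the get-tool-list name and the agent.tools namespace are assumptions based on this series) looks like:

(defn get-tool-list
  "Collect the :tool metadata from every public var in agent.tools and
  wrap it in the OpenAI function-tool shape. Sketch only."
  []
  (->> (ns-publics 'agent.tools)
       vals
       (keep #(:tool (meta %)))
       (mapv (fn [tool] {:type "function" :function tool}))))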

We can immediately see the utility of this tool by asking the LLM to summarize files. An example run is below:

You: What is the secret-file.txt

tool: read_file {:path secret-file.txt}

LLM: secret-file.txt contains a riddle:

"what animal is the most disagreeable because it always says neigh?"

Answer: A horse — because it always says "neigh" (pun on "nay," meaning "no").

The LLM can now read file contents and reason over them, like solving the puzzle in secret-file.txt.

List directory

Next, let us give it a tool to list the files in a folder. That frees us from having to supply precise file names for it to reason over.

(defn
  ^{:tool {:name "list_files"
           :description "List files and directories at a given path. If no path is provided, lists files in the current directory."
           :parameters {:type "object"
                        :properties {:path {:type "string"
                                            :description "Optional relative path to list files from. Defaults to current directory if not provided."}}}}}
  list-files
  [{:keys [path]}]
  (let [dir (io/file (or (some-> path str/trim not-empty) "."))] ; path may be nil or blank
    (if (.isDirectory dir)
      (str (mapv (fn [f]
                   {:name (.getName f)
                    :type (if (.isDirectory f) "directory" "file")}) (.listFiles dir)))
      "Not a directory")))

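Called directly, the tool returns a printable string of name/type maps. Against a hypothetical src folder containing a single agent directory it would return something like:

(list-files {:path "src"})
;; => "[{:name \"agent\", :type \"directory\"}]"
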
Utilizing the above tool we can now summarize entire folders. Let's see a sample run:

You: What does the clojure code inside this folder do?
tool: list_files {:path }
tool: list_files {:path src}
tool: list_files {:path src/agent}
tool: read_file {:path src/agent/core.clj}
tool: read_file {:path src/agent/tools.clj}
tool: read_file {:path llm.edn}
LLM: Summary
- This is a small Clojure CLI "assistant" that talks to an LLM (via the wkok.openai-clojure client) and supports tool/function calls the LLM can request.
- The main pieces are:
  - src/agent/core.clj — orchestration, LLM calls, message loop, tool discovery/invocation.
  - src/agent/tools.clj — example tool implementations (get_current_weather, read_file, list_files).
  - llm.edn — configuration (API key, endpoint, model, impl).

What the program does (high level)
1. Reads llm.edn to get API endpoint, key, model, etc.
2. Discovers "tools" in the agent.tools namespace by looking for functions with :tool metadata (this metadata includes name, description, and parameter schema).
3. Enters a REPL-style loop:
   - Prompts the user for input.
   - Adds the user message into the conversation history.
   - Calls the LLM via openai/create-chat-completion, passing the messages and the tool list (and tool_choice "auto").
   - Inspects the LLM response. If the LLM returns tool calls (under :tool_calls), the code:
     - Maps each tool call to the actual Clojure function (from the tool registry),
     - Parses the JSON arguments,
     - Invokes the function and collects the tool output,
     - Adds each tool output back into the conversation as a tool-message (role "tool"), and
     - Repeats the LLM call with the extended message history until there are no more tool calls.
   - When a final assistant message is produced, prints it and returns to prompting the user.

Details of important functions / behavior
- get-tool-list: produces a vector of tool descriptors (for sending to the LLM) by reading :tool metadata on public functions in agent.tools.
- build-tool-registry: builds a map from tool name to the actual Clojure fn for runtime invocation.
- call-llm-api: calls openai/create-chat-completion with model, messages, tools and config keys (:api-key, :api-endpoint, :impl).
- extract-first-response: pulls the first choice's message from the API result (warns if multiple choices).
- handle-tool-call / invoke-tool: parse the JSON arguments with Cheshire and call the matching function from the registry; returns a vector of tool responses inserted into message history.
- read-user-input! / display-assistant-response! — simple CLI I/O with colored output sequences.
- parse-json-arguments uses cheshire.core/parse-string to convert the JSON argument string into Clojure data.

Example tools (agent.tools)
- get-current-weather: prints a message and returns "-25.0 C" (stubbed).
- read-file: slurps and returns the contents of a given relative file path.
- list-files: lists files/directories at a path (or current directory) and returns a vector-like string of {:name … :type …} maps.

Security / safety notes
- llm.edn contains an API key and endpoint — treat it as sensitive. The repository also contains secret-file.txt; the read_file tool can read arbitrary files, so an LLM or a user could trigger reading sensitive files.
- Tools run arbitrary code in the running process (they are Clojure functions). Be careful exposing them to untrusted LLM output/inputs.

How to run (brief)
- Use the Clojure CLI (deps.edn is present). Run the main -main in agent.core. The program will prompt you in the terminal and interact with the configured LLM.

In short: this project implements a chat loop with an LLM that can request and receive the results of local "tools" (functions), and includes a few example tools (file listing/reading and a stub weather function).

Search and Replace text

Finally, to allow the LLM to modify text, let's implement a search-and-replace tool. This tool is very primitive - it only does a simple search and replace; but even with this tool the LLM can suddenly start writing code that solves coding problems.

(defn
  ^{:tool {:name "edit_file"
           :description "Make edits to a text file.
Replaces 'old_str' with 'new_str' in the given file. 'old_str' and 'new_str' MUST be different from each other.
If the file specified with path doesn't exist, it will be created."
           :parameters {:type "object"
                        :properties {:path {:type "string"
                                            :description "The path to the file"}
                                     :old_str {:type "string"
                                               :description "Text to search for - must match exactly and must only have one match exactly"}
                                     :new_str {:type "string"
                                               :description "Text to replace old_str with"}}
                        :required ["path" "old_str" "new_str"]}}}
  edit-file
  [{:keys [path old_str new_str]}]
  (if (= old_str new_str)
    "Error: 'old_str' and 'new_str' must be different."
    (let [file (io/file path)]
      (if (.exists file)
        (let [content (slurp file)]
          (if (not (str/includes? content old_str))
            (str "Error: '" old_str "' not found in the file.")
            (let [updated-content (str/replace content old_str new_str)]
              (spit file updated-content)
              (str "Successfully replaced '" old_str "' with '" new_str "' in " path))))
        (try
          (spit file "") ; Create an empty file if it doesn't exist
          (str "File not found. Created an empty file at " path)
          (catch Exception e (str "caught exception: " (.getMessage e))) )))))
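
Note that when the target file does not exist the tool only creates an empty file, so the LLM typically follows up with a second edit_file call using an empty old_str to fill in the content. For example (the file name here is illustrative):

(edit-file {:path "hello.txt" :old_str "" :new_str "Hello, world!"})
;; => "File not found. Created an empty file at hello.txt"

(edit-file {:path "hello.txt" :old_str "" :new_str "Hello, world!"})
;; => "Successfully replaced '' with 'Hello, world!' in hello.txt"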

Let's see a sample run with this tool available:

You: create fizzbuzz.js that I can run with Nodejs and that has fizzbuzz in it and executes it
tool: list_files {:path }
tool: edit_file {:path fizzbuzz.js, :old_str , :new_str ...}
tool: read_file {:path fizzbuzz.js}
tool: edit_file {:path fizzbuzz.js, :old_str , :new_str ...}
tool: read_file {:path fizzbuzz.js}
LLM: I created fizzbuzz.js in the project root.

How to run:
- Default (1–100): node fizzbuzz.js
- Specify n: node fizzbuzz.js 30

The file exports the fizzbuzz function if you want to require/import it in another module.

user@user-home:~/Source/code-editing-agent$ node fizzbuzz.js
1
2
Fizz
4
Buzz
Fizz
7
8
Fizz
Buzz
11
Fizz
13
14
FizzBuzz
16
17
Fizz
19
Buzz
Fizz
22
23
Fizz
Buzz
26
Fizz
28
29
FizzBuzz
31
32
Fizz
34
Buzz
Fizz
37
38
Fizz
Buzz
41
Fizz
43
44
FizzBuzz
46
47
Fizz
49
Buzz
Fizz
52
53
Fizz
Buzz
56
Fizz
58
59
FizzBuzz
61
62
Fizz
64
Buzz
Fizz
67
68
Fizz
Buzz
71
Fizz
73
74
FizzBuzz
76
77
Fizz
79
Buzz
Fizz
82
83
Fizz
Buzz
86
Fizz
88
89
FizzBuzz
91
92
Fizz
94
Buzz
Fizz
97
98
Fizz
Buzz

You: Please edit fizzbuzz.js so that it only prints until 15
tool: edit_file {:path fizzbuzz.js, :old_str ... :new_str ... }
tool: read_file {:path fizzbuzz.js}
LLM: Done — fizzbuzz.js was updated to enforce a maximum of 15. It will:

- Accept an optional n argument, but only print up to 15.
- If you request more than 15, it prints a short message to stderr like "Requested 100; printing only up to 15." and then prints results 1–15.

Run with:
- node fizzbuzz.js
- node fizzbuzz.js 20

Let me know if you want it to silently cap at 15 (no message) or to ignore command-line arguments entirely.

user@user-home:~/Source/code-editing-agent$ node fizzbuzz.js
Requested 100; printing only up to 15.
1
2
Fizz
4
Buzz
Fizz
7
8
Fizz
Buzz
11
Fizz
13
14
FizzBuzz

The LLM is now able to generate a full program and make changes as requested by the user. This is possible even with such a basic search-and-replace tool, which forces the LLM to rewrite entire files. Imagine the possibilities with better tools.

You: Create a congrats.js script that rot13-decodes the following string 'Pbatenghyngvbaf ba ohvyqvat n pbqr-rqvgvat ntrag!' and prints it
tool: edit_file {:path congrats.js, :old_str , :new_str ...}
tool: edit_file {:path congrats.js, :old_str , :new_str ...}
tool: read_file {:path congrats.js}
LLM: I created congrats.js. It decodes the ROT13 string and prints the result.

Run with:
- node congrats.js

If you'd like the script to accept input from the command line or a file instead, I can update it.

user@user-home:~/Source/code-editing-agent$ node congrats.js
Congratulations on building a code-editing agent!

As you can see, writing a coding agent is fairly trivial with a powerful LLM backing it. Someone should write me a cheque for this post :D. In the next post let's look at making the agent even more autonomous.

Published: 2025-11-01

Tagged: OpenAI LLM Clojure
