Development Patterns with AI

In Hrishi Olickel’s LLM Hacker Guide he describes some of the ways his thoughts about the development process have changed since working closely with Large Language Models (LLMs).

Development Process

Hrishi explores the ways that the development process has changed. He offers two intermediate processes as well, but below are the traditional methodology and the new AI one.

```mermaid
flowchart LR
    Write --> Compile --> Run
```

```mermaid
flowchart LR
    Chat --> Play --> Loop --> Nest
```

Traditional development is about writing code, compiling it, and running it. Other steps may be included depending on your flavor, such as Test-Driven Development (TDD), but this has been the essential way to write software for decades. This is important to remember because the new approach is starkly divergent.

In Hrishi’s AI paradigm, a developer spends most of their time interacting with the LLM and only a fraction writing code themselves. He doesn’t suggest this necessarily because it’s the best development approach but because, with how new and unexplored this technology is, developers need a lot of exposure to the tooling to understand what they can and cannot achieve with it.

Hrishi’s sample project offers a view into this four-part process.

First, Hrishi gives the LLM some data, in this case podcast notes, and begins asking questions about the data. He is exploring how the LLM perceives the data, what ambiguities it may have about what it’s received (which, if answered, can dramatically improve later results), and what he might be able to do with it.

Second, he starts to ask the LLM for some results: perhaps a Python script that will parse the data in a certain way, or a transformation into HTML or JSON. In playing around with his options, since the LLM makes experimentation so cheap, he could have it spit out a basic layout, then change it to NextJS, then add a UI library in the space of five minutes.
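The kind of throwaway script the LLM might produce in this phase could look like the sketch below. The note format (a timestamp, a dash, then text) and the field names are assumptions for illustration, not Hrishi's actual data:

```python
import json
import re

# Assumed note format: each line is "HH:MM:SS - text".
NOTE_RE = re.compile(r"^(\d{2}:\d{2}:\d{2})\s*-\s*(.+)$")

def parse_notes(raw: str) -> list[dict]:
    """Parse timestamped podcast notes into a list of dicts."""
    entries = []
    for line in raw.splitlines():
        match = NOTE_RE.match(line.strip())
        if match:
            entries.append({"timestamp": match.group(1), "text": match.group(2)})
    return entries

raw = """00:01:30 - Guest introduces the topic
00:12:05 - Discussion of tooling"""
print(json.dumps(parse_notes(raw), indent=2))
```

Because scripts like this cost seconds to regenerate, it is cheap to ask for a variant that emits HTML instead of JSON and compare the two.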

Third, when he’s reasonably satisfied that the LLM is outputting the results he wants from his chain of prompts, he can loop the prompts with more data to process batches of content. In his example, this means extracting and structuring hundreds more podcast notes.
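The looping step amounts to freezing the validated prompt as a template and running it over every item. In this sketch, `ask_llm` is a stand-in for whatever real model call you use (stubbed here so the structure is runnable on its own), and the prompt text is invented:

```python
# `ask_llm` stands in for a real LLM client call (e.g. an API request);
# stubbed here so the batch-looping structure runs without a network.
def ask_llm(prompt: str) -> str:
    return f"STRUCTURED({prompt})"

# The prompt chain validated in the "play" phase, frozen as a template.
TEMPLATE = "Extract the guests and topics from these notes:\n{notes}"

def process_batch(batch: list[str]) -> list[str]:
    """Run the same validated prompt over every item in a batch."""
    return [ask_llm(TEMPLATE.format(notes=notes)) for notes in batch]

results = process_batch(["episode 1 notes", "episode 2 notes"])
```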

Fourth, he nests all these steps into a single program. This is also part of his recommended methodology as he works; break down the work into subtasks that can be unambiguously achieved by the LLM, then line them up to get the final result. When a step isn’t quite working, for example, when a text extraction doesn’t handle escape characters, it is much simpler to debug and update that subtask than a larger multi-step prompt.
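The nesting step can be sketched as small, single-purpose functions composed into one pipeline. The subtask names and the `llm` stub below are illustrative assumptions, not Hrishi's code; the point is that each stage can be inspected and repaired independently:

```python
# Each subtask is scoped so the LLM can achieve it unambiguously.
# llm() is a placeholder for a real model call, so the pipeline runs as-is.
def llm(prompt: str) -> str:
    return prompt.upper()

def extract_text(raw_notes: str) -> str:
    return llm(f"Extract the plain text from: {raw_notes}")

def structure(text: str) -> str:
    return llm(f"Structure as JSON: {text}")

def render(structured: str) -> str:
    return llm(f"Render as HTML: {structured}")

def pipeline(raw_notes: str) -> str:
    # A bug (say, unhandled escape characters in extraction) is fixed
    # in one small stage, not in a giant multi-step prompt.
    return render(structure(extract_text(raw_notes)))
```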

Hrishi recommends the following distribution of time spent on different activities, at least when building a greenfield project or proof-of-concept.

| Activity        | Time Spent |
|-----------------|------------|
| Playing         | 60%        |
| Prompt Tuning   | 20%        |
| Input Massaging | 10%        |
| Coding          | 10%        |
| Tooling         | 1%         |

Hrishi notices that developers familiar with the original paradigm are likely to fall into a pattern of making small, incremental changes with the AI. He urges us to ignore that instinct and instead launch whole new chats. A developer would never consider writing the same program seven times before getting it right, but the AI can spit out a program in a few seconds.

This AI process is less effective when working on a large, legacy codebase. LLMs aren’t able to effectively hold the context necessary to work on a software project of the size maintained by an average 50-person company.

Tips

Hrishi suggests a few tips that he’s gathered from months of working with LLMs.