Letting the Machines Think for Us

There are times when I find myself using LLMs even when I’d rather not, and it’s not because of any organizational policy. I’d rather be honest - when I’m lacking the focus or patience to do deep work, the models are there to take instructions for me. In these moments of weakness, I can give prompts like:

  • Read these logs, why could there be an issue here?
  • Please add print statements to this block of code for debugging. I promise I will do unit tests later.
  • Does this architecture diagram make sense?
  • Is this config file using valid syntax?

When you don’t have the time or attention to think things through, that’s when you end up leaning on generative AI tools too heavily. They should save you time, not thought, and thinking shouldn’t be a luxury.

There’s a good reason why no one likes to ‘center a div’: it’s tedious, frustrating, and doesn’t get you closer to what you were trying to do in the first place. The issues that beleaguer AI-written code are, in my experience, the same ones that affected code ripped from Stack Exchange or Reddit (fancy that) - redundant, not fully understood, and sometimes even sarcastically wrong.

A Convenient Tool

One thing I have found that takes a consistent amount of effort, regardless of what’s on my plate that day, is writing good tickets. I am fastidious about taking notes when investigating an issue - including my line of inquiry, any output or insights I’ve gathered, and any documentation or issues related to the problem - and I do this in the hopes that it might help the next person (often me) find a better resolution should it be relevant in the future.

While I do think that this is an appropriate use of time, and colleagues over the years have let me know how much they appreciate it, organizing your thoughts in this way can take a lot of effort. There should ideally be a structure for documenting an engagement, be it on behalf of yourself or a user:

  1. “What happened?”
  2. “What did you do?”
  3. “Why did you do it?”
  4. “Did it work? If not, what was the result?”
  5. “What did you learn?”

Not only should these questions be answered in detail, but they should also be answered succinctly. There’s no sense in having extraneous information front and center when an executive summary can facilitate a deeper conversation just as well, if not better.
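For what it’s worth, here is roughly how I picture those questions as a reusable form - a minimal sketch rather than anything lifted from a real tool, with field names that are purely illustrative:

    # A hypothetical form definition - the five questions as ordered fields.
    ENGAGEMENT_FORM = [
        ("what_happened", "What happened?"),
        ("what_you_did", "What did you do?"),
        ("why", "Why did you do it?"),
        ("result", "Did it work? If not, what was the result?"),
        ("lessons", "What did you learn?"),
    ]

    def render_ticket(answers: dict) -> str:
        """Turn answered fields into a plain-text ticket body, skipping blanks."""
        sections = []
        for key, question in ENGAGEMENT_FORM:
            answer = answers.get(key, "").strip()
            if answer:
                sections.append(f"{question}\n{answer}")
        return "\n\n".join(sections)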

To help make this process a little easier on myself, I wrote a (simple tool) for keeping myself accountable on the record. I have no doubt that TicketDuck will look pretty anemic in a few months’ time [i] given the increasing capabilities of agentic AI and MCP adoption, but it succeeds at helping me get thoughts down quickly and produce a summary that at least has the bones of what my process has entailed. It works like this:

  • After launching the application, configure the model that you’d like to use.
  • Once that’s done, select your form type from the main menu.
  • Answer each question in the form, or skip the ones that you don’t like.
  • Submit the form, copy the output, and edit it down to what makes sense.
  • Did you save time? Maybe not, but the words were put to the page, and the task of documenting your work has been split into smaller chunks!
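For the curious, the core of that loop doesn’t need to be much more than the sketch below. This is an approximation under my own assumptions rather than the actual TicketDuck source, and call_model is just a placeholder for whichever backend you configured in the first step:

    # Rough sketch of the answer-skip-summarize loop; call_model is a stand-in
    # for whichever chat backend you configured at startup.
    def call_model(prompt: str) -> str:
        raise NotImplementedError("wire this up to your model of choice")

    def run_form(questions: list) -> str:
        answers = []
        for question in questions:
            reply = input(f"{question} (press enter to skip)\n> ").strip()
            if reply:  # skipped questions simply don't make it into the notes
                answers.append(f"Q: {question}\nA: {reply}")
        prompt = (
            "Turn these troubleshooting notes into a concise ticket with an "
            "executive-summary tone, preserving commands and error text:\n\n"
            + "\n\n".join(answers)
        )
        return call_model(prompt)

    # e.g. print(run_form(["What happened?", "What did you do?"])), then edit by hand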

I’ve written a few different sets of questions and prompts for different types of tasks, and it’s on my list to add the ability to create more ad hoc. I’ve always loved the idea of creating a generator which does the right thing for you, as opposed to the wrong thing, and once you’ve written things down and committed them to the record, you are making yourself more trustworthy. You can always go back later to make yourself clearer, but any void is subject to interpretation. Who hasn’t been through seemingly unending chains of meetings and had the goalposts move bit by bit, month by month, until they were unrecognizable? The more I’ve written and interacted with what I was writing, the better I’ve been able to reflect on what I put forward as an engineer.

One of my biggest takeaways from studying music has been recognizing the cost-benefit characteristics of improvisation vs. composition. The act of playing out your inspiration is not the same as refining it, and each requires its own mindset.

When you get something done just for the sake of it needing to be done, it’s a lot better if there’s the potential upside of some larger accomplishment around the corner that can be built on top of it. Even the act of turning a manual process into something like a (do-nothing script) can be revelatory. It’s true that some people want to be led, rather than wander down the garden path on their own, but if punching text into an LLM lowers your personal activation energy to embark on a project and get an MVP going, that’s phenomenal[ii].
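If you haven’t come across one, a do-nothing script is just the manual procedure written out as steps that wait for you. Here is a minimal sketch, with placeholder steps standing in for whatever your process actually is:

    # A bare-bones do-nothing script: it automates nothing, it just walks you
    # through the manual steps and waits for confirmation at each one.
    STEPS = [
        "Pull the latest logs from the affected host.",
        "Open a ticket and paste in the relevant log lines.",
        "Ping the on-call channel with the ticket link.",
    ]

    def main() -> None:
        for i, step in enumerate(STEPS, start=1):
            print(f"Step {i}: {step}")
            input("Press enter when done... ")
        print("All steps complete.")

    if __name__ == "__main__":
        main()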

What Else Could Be Improved?

There’s a lot more that goes into running an effective support operation than waiting for things to break. Documentation can always be iterated on, runbooks can be refined, and the opportunities for fact-finding your product are infinite.

That being said, I think that some form of ‘event-based’ ticketing could be a big improvement over the ‘wait and see’ approach. Instead of depending solely on user-initiated tickets, what if support teams could build up an internal understanding of their customer’s environment and history, without cross-referencing pages of tickets?

I know what you’re thinking - spying on your users is so zeitgeisty, tell me more! The military and intelligence sectors have long embraced similar models, creating complex event-processing systems that integrate disparate streams of data into situational awareness platforms. Meanwhile, most enterprise IT support is trapped in loops of redundant questioning, as if each problem were unprecedented. Why do we do this to ourselves, you say?

I am not saying that we should spy on users, and I want to be deliberate in saying that any hypothetical program should not collect anything other than the text that we and the user generate within the shared context of tickets.

The thing is, I don’t think that the questioning is due to a lack of data. Rather, every system that I’ve used merely structures the data, without presenting it in ways that directly accelerate problem-solving. Instead of making connections between events in situ, these systems farm the work out to some internal analytics appendage that traps any insights behind a set of fields - date, time, user, description, etc.

The familiar experience of funneling logs upwards and slicing and dicing them is a vast improvement over what came before, but it doesn’t do the work of placing the user in the problem space. Putting the user and the system at a specific when, where, and why is the first step in triangulating a given issue, and if we could have a footprint of that, rather than searching through ticket titles and comments, that would be awesome.
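To make that a little less hand-wavy, here is one shape such a record might take - purely illustrative, not a schema from any existing product:

    # Purely illustrative: an event record that places a user and a system at a
    # specific when/where/why, instead of burying that context in a free-text field.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class SupportEvent:
        occurred_at: datetime      # when
        user: str                  # who was affected
        system: str                # where - service, host, or environment
        action: str                # what the user or the system did
        intent: str                # why - what they were trying to accomplish
        related_tickets: list = field(default_factory=list)

    # A timeline of these per customer could be queried directly, rather than
    # reconstructed by searching ticket titles and comments after the fact.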

I’m hoping to write more about what an event-driven ticketing system might look like in a subsequent article. In the meantime, I know that the startup Pylon is working on such a product, though I’m still waiting for a technical blog post from them on the subject.

Footnotes

[i] Lest I be mistaken for a charlatan seeking to growth hack your startup with an API wrapper and a prayer: this has already happened several times since I started writing it - a friend asks me about a certain feature, and it’s already available elsewhere, for free, looking like a shiny new penny.

[ii] If you know the chords you want, let the machine keep you in key.