Painting in the Dark
While I was working recently, GitHub Copilot interrupted my workflow with a kind of code suggestion many of us have seen before: an uncanny likeness of what I was reaching for in my mind. It was close, convincing, fast — and wrong. Still, it wasn't bad. Given how little context it had about my actual work, it was impressive — like painting in the dark and still hitting near the mark.

The problem is that it derailed my train of thought. I began working toward Copilot's suggestion instead of leaning into a skill essential to programming: computational thinking. This is my favourite part of engineering: thinking through a problem programmatically, augmenting an existing structure, and creating things that are intuitive or just plain fun. This engineering mindset goes well beyond code. It forces you to articulate both your goal (declarative knowledge) and how you'll reach it (procedural knowledge).

Recently, I've felt a bit alienated from parts of my work and was unsure why. I suspect it's because I have a little robot nudging me away from my own computational instincts. I'm also seeing more and more code generated by others that began with robots (I'm calling LLMs "robots" now). On one hand, it's fantastic that people have access to helpful productivity tools. On the other, productivity is often lost when we get stuck in the hallucinatory weeds produced by a robot working in the dark with minimal context.
LLMs tend to produce better results when you give them a first pass — your own initial sketch. This isn't far from how humans collaborate best: if you handed me a broken function with well-explained intent, I'd be far more likely to fix it. This echoes concepts like one-shot learning and human-in-the-loop feedback. But it's worth remembering what transformers really are: big, inscrutable statistical calculators. They generate the next most likely tokens based on patterns in their training data, which makes them excellent at completing your ideas but much less good at inventing solutions to problems whose context they've never been shown. Trying something yourself first also gives you the vital context you need to assess the robot's output. In one study on the reusability of generated code, researchers wrote:
“Twelve participants attempted to repair the code when there was an error. However, they always found it difficult since the code was not written by themselves… P7 said, ‘It made debugging more difficult as I hadn’t written the code directly and didn’t have an initial intuition about where the bugs might be…’”
My own sense is that generative AI excels at completing boilerplate. But the difficult parts — the bits that really matter — are often the ones it has the least context to handle. Plenty of ongoing research focuses on decomposing prompts into structured subgoals, but that's still a work in progress.
My suggestions for using generative AI without losing yourself:
- Start without Copilot: begin by tackling the task on your own. Write code and leave clear comments explaining your thinking, just as you would for a teammate. If you do get stuck, this gives Copilot far better guidance (see the first sketch after this list). Code + intent = far less ambiguity than natural language alone.
- Learn some appropriate terminology: understanding terms like method signature, class, static method, or vectorisation isn't just for show. It's how humans and machines parse logic cleanly. The better your technical vocabulary, the clearer your thinking becomes for both you and the robot (the second sketch after this list shows the precision a single term buys you).
- Consider the debugging trade-off: if you're stuck debugging hallucinated logic, it might be time to pause and ask: am I getting anywhere? If you hand the problem off at that point, you might end up with two confused humans and one confused robot, when a single, focused person could have solved it faster.
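Here's a minimal sketch of the first suggestion. Everything in it (the Order class, the top_customers function) is invented for illustration, but the point stands: a rough first pass, with the intent spelled out in a docstring and comments, gives a teammate or a robot something concrete to complete or correct.

```python
# A hypothetical first pass, written by hand before asking a robot for help.
from dataclasses import dataclass


@dataclass
class Order:
    customer_id: str
    total: float


def top_customers(orders: list[Order], n: int = 5) -> list[str]:
    """Return the IDs of the n customers with the highest combined order total."""
    # Intent: group order totals by customer, then rank customers by that sum.
    # A plain dict keeps the aggregation step explicit; if I get stuck, the
    # signature, docstring, and these comments give an assistant (or a teammate)
    # far more to work with than a bare natural-language prompt would.
    totals: dict[str, float] = {}
    for order in orders:
        totals[order.customer_id] = totals.get(order.customer_id, 0.0) + order.total
    ranked = sorted(totals, key=totals.get, reverse=True)
    return ranked[:n]
```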
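And a small sketch of what the second suggestion buys you: asking for "the vectorised version of this loop" names an exact transformation, where a fuzzier request leaves the robot guessing. The arrays below are arbitrary example data.

```python
# "Vectorisation" is precise vocabulary for replacing an explicit Python loop
# with a single array operation.
import numpy as np

prices = np.array([9.99, 4.50, 12.00])
quantities = np.array([3, 10, 2])

# Loop version: easy to describe vaguely ("multiply them and add it all up").
total_loop = 0.0
for p, q in zip(prices, quantities):
    total_loop += p * q

# Vectorised version: one precise term tells a reader (or a robot) exactly
# what is wanted — here, a dot product over the two arrays.
total_vectorised = float(np.dot(prices, quantities))

assert abs(total_loop - total_vectorised) < 1e-9
```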
None of this is to say robots should go away. As is commonly suggested:
A robot doesn’t beat a human. A human with a robot beats the robot.