Where will the technical debt go?
Here's a thought: with AI agents writing more and more code, I think there's going to be a shift from technical debt to cognitive debt. Both will still exist, but cognitive debt will increasingly be the reason software stops working, gets poorly optimized, becomes less maintainable, and so on.
This article sparked this thought:
Technical debt as defined by Wikipedia:
Technical debt (also known as design debt or code debt) is a qualitative description of the cost to maintain a system that is attributable to choosing an expedient solution for its development. While an expedited solution can accelerate development in the short term, the resulting low quality may increase future costs if left unresolved. The term is often used in the context of information technology and especially software development.
Tech debt is usually intentional. We live in a world with limited resources, and as much as I'd like to have 17 layers of abstraction that would let my code handle every edge case, failure mode, and input configuration, it's usually good enough to just ship something that runs without issues most of the time. Learning where to cut corners appropriately is an important skill of the trade.
Cognitive debt, on the other hand, doesn't have a page on Wikipedia. Searching for it on Google returns the page for Cognitive Impairment, which actually works pretty well:
Cognitive impairment is an inclusive term to describe any characteristic that acts as a barrier to the cognition process or different areas of cognition. Cognition, also known as cognitive function, refers to the mental processes of how a person gains knowledge, uses existing knowledge, and understands things that are happening around them using their thoughts and senses. Cognitive impairment can be in different domains or aspects of a person's cognitive function including memory, attention span, planning, reasoning, decision-making, language (comprehension, writing, speech), executive functioning, and visuospatial functioning.
I'm seeing more and more influencers in the field talking about running multiple agents concurrently, not reviewing AI-generated code in any measure, and letting agents run for hours without supervision. This leads to a mismatch between what you understand of the codebase and how the codebase actually works. It doesn't mean the approach is inherently wrong, but it's good to be aware of the trade-off.
Cognitive debt would mean having a limited or even incorrect understanding of how your system works. As long as it works, that's not necessarily a problem. But as you ask for more and more features, the code grows and can become complex to maintain even for AI agents. It might not do the things you think it's doing. It might not properly understand how a feature is used and optimize the code for the wrong things.
In practice, some "failure modes" of the model:
- It works defensively for backwards compatibility: If I switch which API I use for a certain feature, AI will keep support for the previous provider unless I specifically tell it to remove references to it. This may or may not be fine, but you need to choose that consciously, not let the model default to supporting every API version it has ever seen.
- No abstraction layers: Similar to the previous point, maybe I DO need to support a couple of providers. But typically, the code will avoid creating an abstraction layer. The function get_api_response() will be riddled with if-else statements. A well-designed abstraction layer could reduce complexity tremendously, make it easy to add new providers, and improve code readability.
- It might use the wrong design patterns (if it uses any) to solve a problem. It doesn't know, unless you vibe-design with it, how the system will grow and be used. If you don't make design decisions, it will make them for you.
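To make the abstraction-layer point concrete, here's a minimal sketch in Python. The get_api_response() name comes from the text above; the provider names and classes are hypothetical, just to contrast the if-else pile with a small interface:

```python
# Hypothetical sketch: the tangled if-else version vs. a small
# abstraction layer. Provider names ("alpha", "beta") are illustrative.

from abc import ABC, abstractmethod


# The style an agent tends to produce: one function, one branch per provider.
def get_api_response_tangled(provider: str, prompt: str) -> str:
    if provider == "alpha":
        return f"alpha says: {prompt}"
    elif provider == "beta":
        return f"beta says: {prompt}"
    else:
        raise ValueError(f"unknown provider: {provider}")


# The abstraction layer: each provider implements one interface, and
# adding a provider means adding a class, not another branch.
class Provider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class AlphaProvider(Provider):
    def complete(self, prompt: str) -> str:
        return f"alpha says: {prompt}"


class BetaProvider(Provider):
    def complete(self, prompt: str) -> str:
        return f"beta says: {prompt}"


PROVIDERS: dict[str, Provider] = {
    "alpha": AlphaProvider(),
    "beta": BetaProvider(),
}


def get_api_response(provider: str, prompt: str) -> str:
    # Dispatch through the registry instead of branching inline.
    try:
        return PROVIDERS[provider].complete(prompt)
    except KeyError:
        raise ValueError(f"unknown provider: {provider}") from None
```

Both versions behave the same today; the difference is what happens on the tenth provider, which is exactly the kind of design decision the model won't make unless you steer it.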
The thing is, the model could easily fix these problems. But because the human steering the machine doesn't have a good mental model of how the code works, the code grows in unexpected places and unexpected ways, eventually making the software painful or impossible to maintain, improve, or clean up.
All of that to say: we love to claim a 20x productivity increase because of AI. But it would often be better to accept just a 5x and give ourselves time to build a good mental model of how the code works, so that we can actually ship something good. Cognitive debt, like tech debt, won't be a problem early on. But it will become one at some point, and by then it'll be too late to go back.
