Will GenAI Replace Developers?
"No machine can replace the human spark: the spirit, the will, and above all, the passion." — Louis V. Gerstner, Jr.
Late in my university studies I became quite interested in artificial intelligence and began taking graduate-level courses. One area in particular I remember thinking would disrupt developers was natural language processing. This was before the cloud and LLMs existed, but even then systems had existed for some time that could interpret grammatical language. The catch is that people don't speak grammatically - there are nuances due to phonetics and the changing nature of spoken languages. Even within text-based communication we are frequently changing patterns; emoticons have become the new hieroglyphics and are frequently used.
I remember thinking at the time that development languages would become much easier to understand. The cloud enabled the scale needed to train such systems, which led to LLMs. Today, let's explore the question that has been simmering all those years - "Will GenAI replace developers?"
Historic Shifts In Other Industries
We've seen this kind of shift in the auto industry. Industrialization created new types of jobs, but globalization in the 1950s and 1960s moved many roles overseas, and cheaper automobiles cut into margins. Automation in the '80s and '90s replaced many jobs while increasing roles for engineers - though that didn't equate 1:1 with manufacturing workers. Electrification since the 2000s has meant fewer components, which has reduced jobs among tier 2 and tier 3 supply chain providers. Unions had to make concessions after the 2008 recession to keep businesses viable.
What's key here is that the changes happened over decades, meaning there was time to retrain and to adopt the mindset shifts needed to absorb the impact. The AI revolution has been much faster, and will bring similar if not larger levels of disruption in a condensed timeline.
Understanding GenAI
It's important to understand what GenAI is and its capabilities before determining use cases. The system needs to determine what's being asked, search its data stores for relevant material, and formulate a response. A Retrieval-Augmented Generation (RAG) architecture, which is popular these days, augments the prompt with retrieved data and provides a contextual response. The quality of the responses is only as good as the underlying vector databases, which is why LLMs need to be refreshed. An interesting aspect is that you won't get the same response once the model has been retrained - the responses aren't deterministic.
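The RAG flow described above can be sketched in a few lines. This is a toy illustration, not a real library API: the documents, the bag-of-words "embedding", and the function names are all stand-ins for a production vector database and embedding model.

```python
# Toy sketch of the RAG flow: retrieve the most relevant document for a
# question, then build an augmented prompt to send to the model.
from collections import Counter
import math

# Stand-in for a vector database of indexed documents.
DOCS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Shipping takes 5 to 7 business days for domestic orders.",
    "Support is available by email around the clock.",
]

def embed(text: str) -> Counter:
    """Stand-in for a real embedding model: a bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str) -> str:
    """Retrieval step: pick the document most similar to the question."""
    q = embed(question)
    return max(DOCS, key=lambda d: cosine(q, embed(d)))

def build_prompt(question: str) -> str:
    """Augmentation step: prepend the retrieved context to the question."""
    context = retrieve(question)
    return f"Context: {context}\nQuestion: {question}\nAnswer:"

print(build_prompt("How long does shipping take?"))
```

The point of the sketch is the shape of the pipeline: retrieval quality caps response quality, which is exactly why stale or poor underlying data drags the whole system down.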
In Sapiens, Yuval Noah Harari highlights the role communication played in our development and how it distinguishes us from other species. In my view, because GenAI tools and systems communicate with us, we treat them as sentient. It's partly why there is so much excitement around this tech even though it's not the most interesting AI use case.
These systems will quickly replace repetitive and knowledge-based tasks - want to have a quick brainstorming session and interview several personas? It's an excellent tool. Creating websites and landing pages, and low-level marketing tasks, it will do quicker than humans. Interpreting and generating code, it will be very good at.
It's easy to overestimate the abilities of current systems because they seem so capable and, as mentioned above, we deem them intelligent. But they're not great at everything. For one, a model can only provide responses based on its training data; if a specific domain wasn't included it might "hallucinate" - also known as just making stuff up. It's not responding accurately but providing the response you most likely want to hear - it's based on probabilities, not depth of understanding. Inference as it stands today is weak, and often just inaccurate.
A recent study out of the University of Bath highlighted that GenAI is good at recognizing patterns rather than learning deeply, and that it can only learn when we train it. The data it trains on becomes critical. If we look at Google over the past few decades, the quality of search results has diminished as SEO gaming took over. Now imagine the quality of responses from a model trained on that data - not great. So you will find its ability to forecast is quite limited.
Market Disruptions
Looking forward, this technology will become disruptive once we move towards AI agents that can complete multiple tasks. AI agents will enable AI chatbots to conduct more complex tasks - imagine you want to book a flight for the best deal possible. An agent might search Kayak for deals, find the best one, book the flight and seats, and finally complete payment. These types of well-defined tasks will be handled well by such tools.
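The flight-booking example above decomposes into a chain of defined tasks. Here is a minimal sketch of that decomposition; the flight data and the booking/payment functions are mocked for illustration (a real agent would call live airline and payment APIs at each step).

```python
# Illustrative agent pipeline: search -> select -> book -> pay.
# Everything here is mocked; names and data are hypothetical.

def search_flights(origin: str, dest: str) -> list[dict]:
    """Mock search step (a real agent would query an aggregator)."""
    return [
        {"airline": "AA", "price": 320},
        {"airline": "UA", "price": 280},
        {"airline": "DL", "price": 305},
    ]

def select_best(flights: list[dict]) -> dict:
    """Selection step: cheapest fare wins."""
    return min(flights, key=lambda f: f["price"])

def book(flight: dict, seat: str) -> dict:
    """Mock booking step returning a confirmation record."""
    return {"airline": flight["airline"], "seat": seat, "status": "booked"}

def pay(confirmation: dict, amount: int) -> dict:
    """Mock payment step attaching the charged amount."""
    return {**confirmation, "paid": amount}

# The agent chains the defined tasks end to end.
flights = search_flights("YYZ", "SFO")
best = select_best(flights)
receipt = pay(book(best, "14C"), best["price"])
print(receipt)
```

Each step is narrow and verifiable, which is why this class of task is a good early fit for agents, while open-ended judgment calls remain hard.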
Enshittification Tax
Enshittification: a pattern in which online products and services decline in quality. (Source: Wikipedia)
Silicon Valley unicorns follow a playbook where services start cheap to drive mass adoption. Once they become the market leader, the quality of service goes down and prices rise. We've seen this with Netflix vs. cable, Uber vs. taxis - the list goes on. Even in cloud services, prices for PaaS and certain infrastructure have increased while basic resources such as storage have decreased.
The underlying systems for GenAI require significant computing power to perform: GPUs, energy, cooling systems, high-throughput storage, and high-bandwidth networking. This can effectively mean basic calculations that would be cheap in a deterministic system could be considerably more expensive. It's not a stretch to imagine that once mass adoption is realized, the price of these services will actually increase.
Development Teams
Software development is a complex lifecycle; if you're judging teams by the amount of code produced, you might have the wrong perspective. While it's tempting to measure teams on code output, as a business we need to look at it holistically. Arguably, the more lines of code you have, the higher the risk factors, from bugs to security vulnerabilities. A great developer will balance new features while minimizing risk to the business - sometimes it's as simple as reworking existing features and reducing the code footprint.
Current GenAI systems' ability to handle multi-dimensional complexity, abstract reasoning, and contextual understanding is limited - meaning quite often you will just get pointless responses. You need some expertise, at the very least a foundational understanding, to evaluate the quality of the output.
Not all software is created equal. When we're building systems that need to accommodate scale, for instance, we will make different types of decisions. We would want the architecture to be loosely coupled so that individual components can scale, allowing us to deploy often and making it easier to manage hotfixes. As the system grows we would also want to minimize cyber risks, introduce rigorous testing, and allow individual teams to choose their own technologies. Often this means a more abstract software architecture, such as microservices.
In other systems, such as Boeing's Starliner, the software is mission critical - or at least should be - and you might be developing on a Real-Time Operating System (RTOS). That means prioritizing tasks based on criticality, building modular components, and guaranteeing task completion.
Tying business needs to software production is why we need developers, and it is something that is out of scope for GenAI systems such as GitHub Copilot.
Can Engineering Benefit From GenAI?
With limited budgets GenAI tooling can help engineering teams extend resources further. Let's look at a few areas:
Pair Programming 2.0 - Pair programming lets developers collaborate, reduce defects, and build higher-quality code. While there have been plenty of tools over the years to support refactoring, Copilot is a step better. If the developer takes the navigator role and lets Copilot be the driver, results tend to be excellent. We can then produce higher-quality code, aligned with business needs.
Support with non-development engineering tasks - from creating QA test cases to developing user stories for product backlogs to creating documentation and even brainstorming, engineering teams will benefit from utilizing GenAI. It can offload tasks that are often barriers to completion; additionally, it can be trained to produce outputs from your team's templates.
Security systems - this isn't GenAI-specific, but broader. One of the issues with detection and incident management I often see is that teams don't have the resources to dedicate to reviewing logs - it's simply too much data. Even with tools such as Splunk, the organization of that data can be very poor. Pattern recognition is definitely a strength of AI; pair that with GenAI and level 1 support could provide significantly more insight than before. I'm just scratching the surface here.
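To make the log-review point concrete, here is a minimal sketch of the pattern-recognition step: collapse variable fields so similar log lines share a template, then surface the rare templates for a human (or an LLM summarizer) to triage. The log lines and thresholds are invented for illustration; real pipelines in tools like Splunk are far more sophisticated.

```python
# Minimal log-triage sketch: flag lines whose pattern is rare.
import re
from collections import Counter

# Hypothetical sample logs; real input would be streamed from a collector.
LOGS = [
    "INFO user=alice action=login ok",
    "INFO user=bob action=login ok",
    "INFO user=carol action=login ok",
    "ERROR user=mallory action=login failed attempts=14",
    "INFO user=alice action=logout ok",
    "INFO user=bob action=logout ok",
]

def template(line: str) -> str:
    """Collapse variable key=value fields so similar lines share one pattern."""
    return re.sub(r"=\S+", "=<v>", line)

def rare_lines(logs: list[str], threshold: int = 1) -> list[str]:
    """Return lines whose pattern occurs no more than `threshold` times."""
    counts = Counter(template(line) for line in logs)
    return [line for line in logs if counts[template(line)] <= threshold]

print(rare_lines(LOGS))
```

The routine INFO lines collapse to one common template and disappear from view, while the lone failed-login line stands out - exactly the kind of pre-filtering that lets level 1 support focus on what matters.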
Team sizes - it would be a lie to say team sizes won't be impacted; you might not need to hire as quickly to grow a team. As the average output of a developer increases, we should be able to get more done with less. This is entirely dependent on the goals of the organization and the timelines for releases.