Making Software in 2026
As AI and LLM tools transform software development, teams that integrate these technologies will move faster than their competition. This post explores how the decreasing complexity of writing code shifts focus to critical areas like thoughtful design, user research, CI/CD infrastructure, code reviews, and strategic team composition. Learn how to prepare your engineering organization for the accelerated pace of 2026 and beyond.

The AI/LLM Impact
I took a two-week break over the holidays, which gave me a chance to think about what’s ahead. I’d like to share some thoughts on preparing for 2026, specifically in how software is designed, developed, and operated.
With all our fancy AI and LLM tools, producing high-quality, production-ready code now requires substantially less investment. Teams that integrate LLMs into their day-to-day work will undeniably move faster and achieve more than their competition.
Having said this, much of what software engineering teams do goes beyond writing code. In a world where the complexity of developing software is decreasing and the pace of development is accelerating, pressure intensifies on the other critical parts of the software delivery pipeline. Building faster is only valuable if you are building the right thing. This places greater importance on practices like thoughtful design, critical thinking, and thorough user research. Effective planning, coordination within teams, and a clear process for deciding what to build become essential.
The operation and management of software will also become simpler; we’ll see a rise in tools similar to Claude Code that assist teams with observability, cloud infrastructure operations, and incident resolution.
This transition will demand both re-skilling and a fundamental shift in perspective. Teams that have not adopted these new capabilities will continue to fall behind. Leaders who champion this transformation and encourage their teams to do likewise will realize the greatest benefits.
Building on these shifts, here are several critical areas where organizations must focus their efforts for 2026:
Continuous Integration
With the accelerating pace of development, teams require faster and more reliable continuous integration infrastructure. Whatever you think of code coverage as a metric, there is no longer an excuse for failing to maintain high coverage and near-exhaustive test scenarios.
Establishing a good QA process becomes even more critical, as the sheer amount of code in the pipeline makes it challenging to manage effectively. Human input is vital here. Automated browser testing is not yet a good substitute for manual review, and the sharp eye for detail that a good tester provides can have a significant effect.
So make sure your CI is as reliable and as fast as possible, and include good human loops in the release process: ideally, release every branch to some sort of staging or testing environment, with a mix of automated and manual checks in place.
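The coverage point above can be made concrete with a small gate in the pipeline. Here is a minimal sketch in Python, assuming a report shaped like coverage.py's `coverage json` output; the 85% threshold and the `coverage_gate` helper are illustrative choices, not a prescription.

```python
# Minimal sketch of a CI coverage gate: fail the build when total
# coverage drops below a fixed threshold. The report shape mirrors
# coverage.py's `coverage json` output ("totals.percent_covered"),
# but the threshold and policy here are hypothetical.

def coverage_gate(report: dict, threshold: float = 85.0) -> bool:
    """Return True if the build passes the coverage check."""
    percent = report["totals"]["percent_covered"]
    return percent >= threshold

# Example reports with made-up numbers:
passing = {"totals": {"percent_covered": 91.2}}
failing = {"totals": {"percent_covered": 72.5}}

print(coverage_gate(passing))  # above the bar: build proceeds
print(coverage_gate(failing))  # below the bar: build fails
```

A real pipeline would run this after the test job and exit non-zero on failure, so the check blocks merges rather than merely reporting.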
Infrastructure
How fast and how frequently can you deploy? How quickly can you roll back when an issue occurs? How straightforward is it for engineers to provision testing environments or new computational resources? How much assistance do they need from infrastructure teams, and to what degree are these processes self-service?
How easy is it to take product ideas, most likely coming from prototyping tools like V0, Lovable, etc., and deploy them as live experiments?
All fundamental infrastructure components continue to be critical: metrics, logging, incident response, feature toggles, automated scaling, workflow orchestration, caching, and networking. Teams that ensure these are accessible for both human developers and large language models will realize the greatest advantages.
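Feature toggles are one of those components worth keeping simple enough that both engineers and LLM-driven tools can use them safely. As a rough sketch, here is a minimal percentage-rollout flag in Python; the flag names, rollout numbers, and the `is_enabled` helper are all hypothetical, and a production system would back this with a real flag service.

```python
import hashlib

# Sketch of a percentage-based feature toggle: each (flag, user)
# pair is hashed into a stable bucket from 0 to 99, and the flag
# is on when the bucket falls under the rollout percentage.
# Flag names and percentages below are illustrative.

FLAGS = {"new-checkout": 25}  # flag -> rollout percentage (0-100)

def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket a user into a rollout percentage."""
    rollout = FLAGS.get(flag, 0)  # unknown flags default to off
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable 0-99 bucket per (flag, user)
    return bucket < rollout
```

The deterministic hash matters: the same user always lands in the same bucket, so a live experiment stays consistent across requests without any stored state.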
Code Reviews
How much of your code review process are you currently automating? What are your team’s main expectations of code reviews, and how much of that requires a human versus an agent to reach the same output? Most models and tools are excellent at things like style and linting, so we don’t really need humans for that. The value of humans in code reviews comes from interface changes that define different or new business rules, data persistence or state, and big deviations from standard coding patterns. The volume of AI-generated code can lead to review fatigue, making it harder to catch these issues. To counter this, the review process must be strategic. Senior engineers are needed not just to perform reviews, but to establish the automated checks and firm guidelines that focus human oversight on the most critical changes.
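One way to make the review process strategic is to route changes to human reviewers based on what they touch. Here is a minimal sketch along those lines; the `HIGH_RISK_MARKERS` patterns and the `needs_human_review` helper are invented for illustration, and any real team would tune the markers to its own repository layout.

```python
# Sketch of review routing: changes that touch interfaces,
# persistence, or schemas get flagged for human review, while
# the rest can lean on automated checks. The path markers below
# are hypothetical examples, not a recommended set.

HIGH_RISK_MARKERS = ("migrations/", "schema", "api/", "models/")

def needs_human_review(changed_files: list[str]) -> bool:
    """True if any changed file touches a high-risk area."""
    return any(
        marker in path
        for path in changed_files
        for marker in HIGH_RISK_MARKERS
    )
```

In practice this would run as a bot on each pull request, assigning a senior reviewer when it returns True and letting low-risk changes flow through the automated gates.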
As a group, we must answer several questions: What code, while less than ideal, is stylistically acceptable to commit? What code should be absolutely forbidden from the repository? What are the new warning signs that indicate degrading quality?
Junior Engineers vs Senior Engineers
The debate over which type of engineers to hire is a lively one. On one hand, you could hire a group of college graduates, give them access to Claude Code, and provide plenty of guidance so they complete work cost-effectively. On the other hand, senior engineers will ensure that LLMs are managed properly, preventing spaghetti code from accumulating over time. The decision depends entirely on the maturity of your engineering team. In my opinion, the more senior the team, the more junior engineers it can absorb; conversely, I would never start a new team or a new venture without the well-developed “system taste” that senior engineers bring.
Elements such as module boundaries, library interfaces, and the contracts between infrastructure and product layers are increasingly important tools for sustaining long-term code quality. Systems without well-defined boundaries will accrue technical debt at a more rapid pace, dragging their engineers down with them.
Build vs Buy Decisions
So SaaS is dead, or that’s the trendy thing to say these days… Is that really true? Code is cheaper to write than ever before, so why pay for that expensive license for Miro, Asana, Confluence, Slack, etc., if you could build it in a week? That is the narrative, but it is far from the truth. I do expect most of these vendors to start lowering prices to stay competitive. On the other hand, most UI-over-CRUD SaaS is probably a commodity, and those buying decisions should be scrutinized; a competent team of two or three engineers should be able to build such tools internally quickly. Infrastructure as a service, by contrast, should see very little impact, since operating costs will not fall as much as development costs have.
Estimations
I expect estimates will lack consistency, as team members will find it challenging to make comparable judgments, especially between AI adopters and those who are not there yet. Before estimating anything, though, the more important problem is how teams can funnel their LLM resources toward their highest-impact work. In that light, are estimates really that meaningful? Some would say that applying LLMs to high-stakes projects is tough: these initiatives are typically complicated, demand risk mitigation and deep subject-matter knowledge, and have a wide blast radius if they go wrong. But that is starting to change. I have seen teams complete a migration in a matter of weeks that would previously have taken more than eight months. This shift doesn’t make estimation obsolete, but it does change its purpose. Instead of being a tool for long-range prediction, estimation becomes part of the discovery process. Through each iteration, teams learn their true capacity, finding where they can take on more ambitious goals and where it is necessary to slow down for careful implementation.
So, stop trying to nail down a perfect six-month plan. The number you come up with is probably the least important part of the exercise. The real win is the conversation you have while trying to estimate. It forces everyone to get on the same page about what you’re building, what the unknowns are, and what “done” actually looks like. The goal isn’t to predict the future anymore; it’s to build a shared understanding so you can move faster, together.