Software Engineering After the Headcount Era
The industry is moving from labor expansion to leverage expansion
For roughly two decades, software engineering expanded because almost every sector needed new digital systems, new interfaces, new integrations, new testing, and new maintenance layers. That demand has not vanished. According to the U.S. Bureau of Labor Statistics, overall employment for software developers, QA analysts, and testers is still projected to grow 15% from 2024 to 2034, with about 129,200 openings per year on average; the agency explicitly links future demand to AI, the internet of things, robotics, automation, and cybersecurity related software. The World Economic Forum likewise places software and application developers among the fastest growing roles globally, even as broader labor markets are being reshaped by AI and information processing technologies.
The better way to understand the shift, then, is not “software work disappears,” but “the production logic of software changes.” The old model scaled output mainly by adding engineers, testers, and coordinators. The new model increasingly tries to scale output by combining smaller teams with AI assisted coding, AI assisted testing, platform engineering, and reusable internal tooling. That is consistent with labor market evidence from PwC, which notes that the ICT sector’s share of total job postings has nearly halved over the past 12 years even though the absolute number of ICT jobs is still growing. In other words, software work can remain large while becoming less dominant as a simple volume hiring machine.
Why firms now feel forced to become AI driven
The competitive pressure is real. Microsoft and LinkedIn reported in their 2024 Work Trend Index that 75% of knowledge workers globally already use generative AI at work, 78% of AI users are bringing their own tools to work, 79% of leaders believe their company needs to adopt AI to stay competitive, and 66% of leaders say they would not hire someone without AI skills. LinkedIn’s 2025 Work Change Report adds that by 2030, 70% of the skills used in most jobs will change, and professionals entering the workforce are on pace to hold twice as many jobs over their careers compared with workers who entered 15 years earlier.
This is also visible at the organizational level. McKinsey & Company found in its 2025 State of AI survey that 78% of respondents said their organizations use AI in at least one business function, up from 72% earlier in 2024 and 55% a year before. The IT function saw the largest recent increase in adoption, and organizations are now most often using generative AI in marketing and sales, product and service development, service operations, software engineering, and IT. At the same time, more than 80% of respondents said their organizations were not yet seeing a tangible enterprise level EBIT impact from generative AI. That is a crucial point: AI adoption is spreading faster than measured financial returns.
The spending environment explains why. Gartner forecast worldwide generative AI spending of $644 billion in 2025, up 76.4% from 2024, and said expectations are simultaneously being pressured by high proof of concept failure rates and dissatisfaction with current outputs. At the same time, big platform firms are pouring capital into infrastructure: Microsoft said it was on track to invest about $80 billion in AI enabled datacenters in fiscal 2025; Alphabet reaffirmed roughly $75 billion in capital spending for 2025; Meta reported $72.22 billion in 2025 capital expenditures; and Amazon projected about $200 billion in 2026 capex after approximately $131 billion in 2025. This is why so many firms now present themselves as AI first or AI driven: capital markets, clients, and internal strategy all increasingly treat AI capability as a baseline requirement, even before governance and returns are fully mature.
That “forced adoption” story becomes even sharper when looking at foundations rather than slogans. Gartner said in April 2026 that organizations with successful AI initiatives invest up to four times more, as a share of revenue, in data quality, governance, AI ready people, and change management than organizations with poor AI outcomes. Yet only 39% of technology leaders in Gartner’s survey were confident current AI investments would positively affect financial performance. The signal is clear: companies do not simply need model access; they need architecture, governance, and capability design.
The layoff story is real but more complex than the hype
The layoffs are not imaginary, but the simplest story about them is still wrong. According to a 2026 note from the Federal Reserve Board, there is so far no evidence that industries or firms with higher AI adoption are posting fewer jobs overall, and the post pandemic slowdown in national job postings does not appear to be driven even modestly by AI. The same note also points to more granular evidence that entry level employment is falling in occupations where AI automates work, while experienced workers in those occupations are seeing more stable outcomes. That distinction matters: the first effect is often slower junior hiring and more selective staffing, not an immediate collapse in total demand.
At the same time, company level cost actions are increasingly being paired with AI investment. Reuters reported that Microsoft was laying off around 6,000 workers in May 2025 while funneling billions into AI and aiming roughly $80 billion in fiscal year capital spending at datacenter expansion. Reuters also reported that Alphabet’s Google cut about 200 staff in its global business unit while big tech redirected spending toward data centers and AI development. In April 2026, Reuters further noted that economists and investors were becoming more concerned that AI was beginning to generate measurable job losses in highly exposed sectors, and Challenger data linked AI to 7% of total planned U.S. layoffs announced in January 2026.
The stronger research signal is that the first major displacement effects appear concentrated at the entry level. A 2025 study from the Stanford Digital Economy Lab found substantial declines for early career workers ages 22–25 in occupations most exposed to AI, including software development and customer support. The paper reports that employment for 22–25 year olds in the most AI exposed occupations fell 6% from late 2022 to September 2025, while older workers in the same occupations saw growth; for software developers ages 22–25 specifically, employment was down nearly 20% from its late 2022 peak. After controlling for firm time shocks, the most exposed young workers still showed about a 16% relative employment decline.
So the right interpretation is not “AI is already wiping out software jobs everywhere.” It is closer to this: AI is making companies less willing to hire large junior cohorts for routine coding, routine QA, and repetitive support tasks, while still preserving or even increasing demand for higher context, higher trust, and more domain rich roles. McKinsey’s survey captures that nuance well: respondents expect lower headcounts in some functions, but in software engineering and product development they actually anticipate higher employee counts over the next three years.
AI coding tools are improving fast, but not uniformly
There is real productivity upside in certain coding settings. In a controlled experiment on GitHub Copilot, developers given access to the tool completed a JavaScript HTTP server task 55.8% faster than the control group. Meanwhile, the 2024 DORA report from Google Cloud found that more than 75% of respondents relied on AI for at least one daily professional responsibility, and that a 25% increase in AI adoption was associated with a 7.5% increase in documentation quality, a 3.4% increase in code quality, and a 3.1% increase in code review speed. These results help explain why so many engineers feel AI tools are already indispensable for drafting, summarizing, refactoring, or accelerating smaller tasks.
But those gains are not universal, and they shrink as realism increases. The same DORA research found that increased AI adoption was associated with a 1.5% decrease in delivery throughput and a 7.2% decrease in delivery stability, while 39% of respondents reported little or no trust in AI generated code. Even more strikingly, a 2025 randomized controlled trial from METR found that experienced open source developers working on their own repositories took 19% longer when using early 2025 AI tools than when working without them. In other words, coding agents and copilots can be very effective in bounded tasks, but can still be net negative in high context environments with implicit standards, unfamiliar edge cases, and strong review requirements.
The benchmark story shows why the hype persists. In the original 2023 SWE-bench paper, the best performing model solved only 1.96% of real world GitHub issues. When OpenAI introduced SWE-bench Verified in August 2024, top agents were still only solving around 20% of SWE-bench and 43% of SWE-bench Lite tasks. By February 2026, however, OpenAI wrote that state of the art progress on SWE-bench Verified had climbed from 74.9% to 80.9% in just six months. That sounds like a near breakthrough in autonomous software engineering. But OpenAI also argued that SWE-bench Verified had become increasingly contaminated and flawed: many tasks rejected functionally correct answers, and frontier models showed evidence of having seen benchmark specific problem and solution information during training. So the progress is real, but the cleanest simple reading of those benchmark numbers is no longer reliable.
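To make those percentages concrete, it helps to see how a SWE-bench style “resolved” rate is scored: a task counts only if the model’s patch applies and the task’s fail-to-pass tests (plus the existing pass-to-pass tests) all succeed. The sketch below is illustrative only; the task records are made up, and the field names paraphrase the benchmark’s FAIL_TO_PASS / PASS_TO_PASS convention rather than reproduce its harness.

```python
# Conceptual sketch of SWE-bench style scoring: a task is "resolved"
# only if the patch applies, the previously failing tests now pass,
# and no previously passing tests break. Task data here is invented.
def resolved(task: dict) -> bool:
    return (task["patch_applies"]
            and all(task["fail_to_pass"])
            and all(task["pass_to_pass"]))

tasks = [
    {"patch_applies": True,  "fail_to_pass": [True, True],  "pass_to_pass": [True]},
    {"patch_applies": True,  "fail_to_pass": [True, False], "pass_to_pass": [True]},  # regression
    {"patch_applies": False, "fail_to_pass": [],            "pass_to_pass": []},      # patch failed
]

rate = sum(resolved(t) for t in tasks) / len(tasks)
print(f"{rate:.1%}")  # 33.3%
```

The strictness of that all-or-nothing rule is also why contamination matters so much: a model that has seen the reference fix during training can pass the exact test set without demonstrating general repair ability.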
For a blog argument, that is one of the strongest points available: AI coding has clearly advanced from toy auto complete into something much closer to partial software execution, but there is still a large gap between benchmark competence and dependable autonomy inside live enterprise systems. The frontier can already do meaningful bug fixing and code repair in constrained settings; what it still struggles with is context, trust, unstated requirements, and delivery accountability.
Outsourcing and facilitator firms face a business model reset
This is where the labor heavy services model becomes especially vulnerable. Tata Consultancy Services told staff in February 2026 to use AI to deliver work faster and cheaper even if it cannibalized the company’s own revenue, while investors worried that AI was disrupting the Indian IT sector’s traditional labor heavy operating model. Reuters reported the same month that market concerns around AI disruption had erased about $68.6 billion in value from Indian IT stocks during February. NASSCOM simultaneously said that AI was now a fundamental part of every proposal in tech services, that traditional work was being compressed even as new work was opening, and that AI related services revenue would continue rising sharply.
That combination is the key structural shift for facilitator firms that historically scaled by adding seats, layers, and billable hours. If a customer can get the same output from fewer developers because coding, testing, summarization, debugging, and first pass documentation are increasingly automated, then the economics of labor arbitrage weaken. Reuters’ later reporting on TCS made that concern more explicit, noting expert expectations that AI could eliminate large numbers of jobs in the Indian outsourcing sector over the next few years, with testing, bug identification, and middle management coordination among the most exposed functions. Whether those exact forecasts prove right or not, the business model pressure is already obvious.
That does not mean the services sector disappears. Reuters also reported that NASSCOM still expected India’s IT industry to reach $315 billion in revenue in fiscal 2026 and add a net 135,000 jobs, even amid AI driven disruption. The more plausible outcome is a re-rating of which kinds of services matter. Firms that survive and grow are likely to be the ones that move from selling labor volume to selling reusable IP, domain specific AI workflows, governed data layers, AI security, platform engineering, orchestration, and business outcome delivery. That is strongly consistent with 2025 survey work from Boston Consulting Group, which found companies are explicitly rebalancing IT budgets toward AI, cloud, security, and analytics while using outsourcing and vendor consolidation as cost levers in the rest of the stack.
What self correcting software can realistically do next
The hardest question is also the most important one: can we build systems that detect bugs, fix them, verify the fix, and adapt continuously without waiting for a human sprint cycle? The evidence suggests a qualified yes, but only in bounded domains. Research on self healing software describes architectures where observability tools act as sensory inputs, AI models diagnose problems, and automated agents apply code or test modifications. IBM is already building toward this in operations, positioning AIOps tools to diagnose root causes, propose remedies, and in some cases automatically resolve incidents or trigger remediation workflows before users notice problems. IBM’s Intelligent Remediation pushes this idea further by combining incident diagnosis, recommended actions, and automatable runbooks.
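That sense-diagnose-act loop can be sketched in a few lines. Everything below is hypothetical: the symptom names, causes, and runbook actions stand in for real observability signals, an AI diagnosis step, and tested remediation automation, not any vendor’s actual API.

```python
# Minimal sketch of a self-healing control loop in the architecture
# described above: observability as sensing, a diagnosis step standing
# in for an AI model, and automated runbooks as actuation.
from dataclasses import dataclass

@dataclass
class Incident:
    service: str
    symptom: str  # e.g. "error_rate_spike", as reported by monitoring

def diagnose(incident: Incident) -> str:
    """Stand-in for AI diagnosis: map a symptom to a likely root cause."""
    known = {"error_rate_spike": "bad_deploy", "memory_growth": "leak"}
    return known.get(incident.symptom, "unknown")

def remediate(cause: str) -> str:
    """Stand-in for runbook automation: only well-tested actions run
    unattended; anything unrecognized is escalated to a human."""
    runbooks = {"bad_deploy": "rollback", "leak": "restart"}
    return runbooks.get(cause, "escalate_to_human")

def heal(incident: Incident) -> str:
    # A production loop would also verify the fix (telemetry back to
    # normal) before closing the incident; here we just pick the action.
    return remediate(diagnose(incident))

print(heal(Incident("checkout", "error_rate_spike")))  # rollback
print(heal(Incident("checkout", "disk_full")))         # escalate_to_human
```

The important design choice is the closed action vocabulary: the loop can only execute remediations that were vetted in advance, which is precisely what makes runtime self-healing tractable today.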
The critical limitation is that runtime remediation is easier than open ended product evolution. Restarting a failed service, rolling back a change, or applying a well tested patch inside a visible operational environment is not the same as deciding what a product should become, which tradeoff matters most, which customer segment to prioritize, or how to redesign workflows based on shifting market conditions. Current evidence from DORA, METR, and the benchmark contamination problem suggests that AI is strongest where the system has explicit goals, strong telemetry, stable evaluation criteria, and limited ambiguity. It is weaker where requirements are underspecified, success is socially interpreted, or the codebase contains a large amount of tacit institutional knowledge.
That means the most realistic next decade is one of supervised autonomy rather than full strategic autonomy. AI will increasingly handle first pass coding, test generation, regression investigation, incident triage, documentation, code review assistance, and narrow classes of bug repair. It will also likely reduce the amount of junior labor needed to maintain routine pipelines. But systems that truly “feed themselves” on market trends and continuously refactor themselves without robust human oversight still run into problems of trust, governance, hidden context, and accountability. Gartner’s latest work makes that point from another angle: successful AI depends disproportionately on governed context, trustworthy data, AI ready people, and connected engineering practices.
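Supervised autonomy ultimately comes down to a gating policy: AI-proposed changes that are fully verified and low risk can proceed automatically, while everything else routes to a human. The sketch below is one illustrative policy, not an established standard; the fields and the 50-line threshold are assumptions for the example.

```python
# Sketch of a "supervised autonomy" gate for AI-proposed code changes:
# low-risk, fully verified changes merge automatically; risky or large
# changes require human review. All thresholds here are illustrative.
from dataclasses import dataclass

@dataclass
class ProposedChange:
    tests_pass: bool           # did the full test suite succeed?
    touches_security_code: bool
    lines_changed: int

AUTO_MERGE_MAX_LINES = 50  # hypothetical policy threshold

def decide(change: ProposedChange) -> str:
    if not change.tests_pass:
        return "reject"            # never ship unverified changes
    if change.touches_security_code or change.lines_changed > AUTO_MERGE_MAX_LINES:
        return "human_review"      # supervised path: accountability stays human
    return "auto_merge"            # bounded autonomy path

print(decide(ProposedChange(True, False, 12)))  # auto_merge
print(decide(ProposedChange(True, True, 12)))   # human_review
print(decide(ProposedChange(False, False, 5)))  # reject
```

Widening that auto-merge region over time, as trust and telemetry improve, is a reasonable way to picture how supervised autonomy expands without ever handing over strategic decisions.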
Selected sources
Skills are changing faster than job titles — Work Change Report: AI is Coming to Work, LinkedIn.
Knowledge-work AI adoption is already mainstream — 2024 Work Trend Index, Microsoft and LinkedIn.
Broad enterprise AI adoption, but weak enterprise-level ROI so far — The State of AI 2025, McKinsey.
Why firms keep increasing AI budgets despite weak POCs — 2025 GenAI spending forecast, Gartner.

