One of the most common concerns about AI is the risk that it takes over a meaningful portion of the jobs humans currently do, leading to major economic dislocation. These headlines often come out of economic studies that look at various job functions, estimate the impact AI could have on each role, and then extrapolate the resulting labor impact. What these reports generally get wrong is that the analysis is done in a vacuum, explicitly ignoring the decisions companies actually make when presented with the productivity gains of a new technology, especially given the competitive nature of most industries.

The thinking generally goes that if a company could, say, be 50% more productive in a particular function, that would mean a commensurate reduction of jobs in that area. For instance, if a certain function (like engineering or sales) required 10 units of labor before, then with a 50% gain in productivity that same function would in the future need only ~7 units of labor. The challenge with this type of thinking is that it assumes companies have already maximized the amount of labor they want in a particular function, when in reality many functions are staffed only at the level the company can afford. Further, it assumes the company is not in a competitive field, and that it would be complacent and content generating the same output as before, just at lower cost. Finally, it ignores the fact that productivity gains in a market invite a response from competitors, which companies in turn have to answer with more productivity, not necessarily more profit. Time and time again, this is the flawed thinking we tend to get out of broad economic studies on the labor needs of the economy. To make this concrete, I thought I'd illustrate the point with the example of an engineering function, one that is already starting to see the benefits of AI roll out.
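The static calculation implicit in these studies can be sketched in a few lines; this is just the arithmetic from the example above (output held fixed, labor shrinking with productivity), not anyone's actual model:

```python
# Naive "static" job-impact calculation: assume total output stays fixed,
# so the labor required shrinks as productivity rises.
def naive_labor_after_gain(current_labor, productivity_gain):
    """Labor needed to produce the SAME output after a productivity gain."""
    return current_labor / (1 + productivity_gain)

# 10 units of labor, 50% productivity gain -> ~7 units "needed"
print(round(naive_labor_after_gain(10, 0.50), 2))  # 6.67
```

The rest of the essay is about why this division is the wrong operation: it holds output constant, which is precisely what competitive companies don't do.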
The numbers will all be kept simple, but you can change almost any variable and the point will remain the same. The key to thinking through job impacts is to think through what happens a step or two *after* the productivity gain from AI is experienced.

So, imagine you're a software company that can afford to employ 10 engineers at your current revenue. By default, those 10 engineers produce a certain amount of product output that you then sell to customers. If you're like almost any company on the planet, the list of things your customers want from your product far exceeds your ability to deliver those features any time soon with those 10 engineers. The challenge, again, is that you can only afford those 10 engineers at today's revenue level. So you decide to implement AI, and the absolute best-case scenario happens: each engineer magically becomes 50% more productive. Overnight, you have the equivalent of 15 engineers working in your company for the previous cost of 10. Finally, you can build the next set of things on your product roadmap that your customers have been asking for. (We can't assume output rises a full 50%, because new points of friction and coordination tax emerge once you have 15 equivalent engineers, but let's say your output goes up meaningfully.)

Assuming you're acting in your best interests as a company, the features you build make your product that much more compelling, which means at some point, sooner or later, they should produce an incremental gain in revenue. Let's be somewhat conservative about the impact of these new features and say they generate an incremental 10% of revenue over time, or keep customers retained at a 10% greater rate (roughly the same financial benefit). Now let's assess the downstream impact. First, any growth in revenue will often lead some functions in the business to grow as well to support the new customers, which directly creates new jobs.
But further, the company now has to decide whether it remains satisfied with 10 engineers producing the output of 15, or whether, with its incremental revenue, it should hire even more engineers to build the *next* set of features that will make it even more compelling to customers. Unless the company is in some rare monopoly position, it will likely want to build that next set of features even faster than the last, to grow even more quickly. This means AI has, counterintuitively, caused the company to hire more engineers than before, because the productivity of each engineer is much higher, allowing each one to generate more return, and thus more revenue.

What's interesting is that this logic works similarly for most functions in a business. In sales, if you could make sales reps 10% more productive (i.e., they sell 10% more of your products or services for the same cost), almost every company in the world would prefer to hire even more sales reps instead of merely banking the incremental profit. That incremental sales productivity would again have downstream implications, like the need to deliver more features to customers, and thus more R&D hiring! Even back-office functions that don't tie as directly to revenue growth are often a bottleneck to growth. If you can reduce the bottleneck, say lawyers reviewing contracts or people processing invoices, cycle time in the business accelerates, which almost always lets you serve more customers faster or grow more quickly, again letting the company reinvest those dollars. In the end, when you step out of the vacuum of a single job function's productivity gain and look at how the whole system adapts and improves because of it, a very different picture of AI's impact on jobs emerges.
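The dynamic version of the scenario above can be sketched the same way; all numbers here are the hypothetical ones from the example (an assumed revenue and cost per engineer chosen to give 10 engineers), not real figures:

```python
# Illustrative dynamic view: productivity gains raise revenue,
# which funds MORE hiring, not less. All inputs are assumptions.
def engineers_affordable(revenue, cost_per_engineer):
    """How many engineers the company can fund at a given revenue level."""
    return revenue // cost_per_engineer

revenue = 10_000_000            # assumed annual revenue
cost_per_engineer = 1_000_000   # assumed fully loaded cost per engineer

before = engineers_affordable(revenue, cost_per_engineer)  # 10 engineers

# AI makes each engineer 50% more productive: the output of ~15 engineers.
effective_output = before * 1.5

# The extra features shipped drive an incremental 10% of revenue over time...
revenue += revenue // 10

# ...which lets the company afford MORE engineers than before.
after = engineers_affordable(revenue, cost_per_engineer)
print(before, after)  # 10 11
```

The exact numbers don't matter; the point is that as long as extra output converts into any incremental revenue, the headcount the company can afford moves up, not down.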
Yes, there will absolutely be changes to which jobs become more or less in demand in the future, but the competitive nature of companies inevitably ensures that, across the whole system, they will be focused on leveraging AI to become more productive.
Redefine the metrics to truly assess AI's impact on human capital. The unit of measurement that economists and accountants use is off. To capture AI's real impact, we must ethically measure human resources in a totally new way on the balance sheet.
I've been thinking about this framing for a while. While I agree with much of your thinking, I think we should differentiate between areas where AI increases human productivity (like your engineering example) and areas where tech eliminates the need for humans to do certain tasks. There clearly are a ton of applications where AI can improve the productivity of a human by 25-50%, and companies will choose the massive productivity gain (from keeping their headcount consistent) over the cost savings of reducing headcount. I think the example of ATMs is a good proxy. Initially, experts feared that ATMs would massively reduce the number of bank tellers, but that's not what happened. While ATMs shrank the number of tellers per branch from 21 to 13, they also made it cheaper to open branches, which resulted in a net increase in the number of tellers employed. Yet there are also a number of applications where AI will be able to go from copilot to pilot, ultimately resulting in human job loss. GPS eliminated the navigator from airplane cockpits. Direct-dial telephony eliminated almost 500,000 switchboard operators. RFID toll collection is reducing the number of toll collectors. While I'm clearly an optimist, I think we do need to acknowledge that while technologies such as AI will likely yield a massive productivity gain that can benefit society as a whole, the benefits will likely be uneven (and painful to certain very large sectors of the population).
Your analysis is true for some cases. I've heard of actual situations where managers use AI to replace jobs and work, fire some less productive workers, and hand headcount back to the CFO because their ROI bar is now higher and that capex can be better spent on GPU capacity. My point is you'll see all types of decisions: full replacement; augmentation and more hiring; and some tasks and jobs that take longer to replace than many expect.
@levie Nicely said. My theory is that role reductions will primarily impact government, think the IRS or DMV.
@levie Good summary. Jon Stewart got this one very wrong. But I suppose it’s mostly his writers defending their threatened jobs.
This is accurate but incomplete. Of course the CEO of Box is focused on enterprise software and AI's impact on it. We need to focus on AI for the people, not just businesses. Where are all the consumer apps?! Unfortunately, the current AI revolution is led by people wearing khaki pants and carrying a golf handicap (e.g., cloud companies). Once the average person can benefit from AI, he or she will be able to address the upcoming job dislocation.
I completely get the engineering/software example. But what about all the industries and jobs where there just isn't demand for more output? For example, say a professional services firm (marketing agency, law firm, accounting firm, etc.) increases productivity by 20%, but demand remains flat. It still produces the same output, just with fewer humans doing the work. There are a lot of industries and companies where that will be the case, and boards and execs will have to make a choice between profits and people.
This counterintuitive pattern played out 50 years ago with the bank teller job. When ATMs were invented, analysts predicted the bank teller job would be automated away. However, bank teller jobs actually grew as a result of ATM adoption. I wrote a long article on that: remote-work.io/newsletter/a-s…