TD Magazine Article
In an age of intelligent machines, work design must become a priority for L&D to secure the bond between experts and novices.
April 1, 2025
Think about your most valuable skill—the thing you can reliably do under pressure that delivers results and looks like magic to those nearby. How did you learn it? Decades of research predict your answer: by working with someone who knew more than you did. More specifically, it's the result of watching an expert for a bit; getting involved in easy, safe parts of the work; progressing to harder, riskier tasks with their guidance; and then finally starting to guide others.
In surgery, the term for that process is "see one, do one, teach one." But no matter what anyone calls it, it entails the same method, whether in pipefitting, midwifery, or carpentry, or in an elementary school classroom or a high-energy physics lab.
The research is clear: Formal learning, at best, is a necessary foundation, enabling an individual to figuratively start playing the game. But that's different from being able to do the work under pressure. To get there, most people still primarily rely on collaboration with experts. That relationship shapes their work so they slowly, incrementally build layers of know-how that enable them to get results when it counts.
L&D has been able to maintain its focus on training because the engine for skill has kept humming along. Experts need novices to get the job done. They hire novices and add them to projects, and the novices help. The novices learn, get ahead, and become experts themselves. Training, at best, is a healthy accelerant to that process. And as has occurred for millennia, L&D and leaders focus on the accelerant because they have taken learning by doing for granted.
They can't anymore. The advent of intelligent machines means that learning by doing is under threat.
Since 2012, I've conducted direct, longitudinal research on how people work and build skills in work involving intelligent machines such as robots and artificial intelligence. No matter what other forms of data I collected, I have always done field research, getting data from the real world by personally watching how and why businesses and organizations work—and, often, don't work. I've asked countless questions of everyone involved and pulled the data together systematically. Think of me as an embedded journalist who records and codes his observations and focuses on jobs that depend on intelligent technologies.
During that work, I've discovered a deeply troubling, subtle fact: In millions of workplaces, we are blocking individuals' ability to master new skills because we are separating junior workers from senior workers, novices from experts, by inserting technology between them. Who is the we? All of us: leaders, managers, experts, technologists, and even novices. In a grail-like quest to optimize productivity, we are inadvertently disrupting the components of the skill code, taking for granted the necessary bundling of challenge, complexity, and connection that could help individuals build the skill they need to work with intelligent machines. How?
Intelligent technologies enable an expert to do more work with less help. Large language models such as ChatGPT, Gemini, and Claude are good examples. An expert can pick them up and solve bigger, more complex problems, essentially outsourcing the grunt work to automatic reasoning-like capabilities that run in the background.
Think of your own expertise. You've built it through solving increasingly complex problems, and you relish the chance to stretch yourself so you can expand your impact. Organizations support that deal because their experts are that much more productive. That leads to better results, sooner, and sometimes for lower cost. In every corner of the economy, across many occupations (I've examined 31), industries, and specific technologies, experts and companies are taking that productivity-enhancing deal and generally winning out.
By now you know who loses out in that deal: the novice. Where once they could have learned by helping an expert with small or beginner tasks, now they're left on the sidelines. When's the right time for that expert to electively involve them in bigger projects, knowing they'll be slower and make more mistakes? Never.
Companies and professions engaging with generative AI are starting to discover that. A recent example from within software development is a Sourcegraph blog post titled "The Death of the Junior Developer" by Steve Yegge, an accomplished leader and software engineer. He found—anecdotally—that novices were submitting "terrible code that worked," as Yegge described during a webinar, and couldn't understand the subtle quality issues involved in using generative AI for enterprise software. It was more straightforward not to involve novices.
That "novice optional" dynamic leaves the next generation of talent subtly hamstrung, which will ultimately compromise professional and organizational readiness. The surgical profession went down that path early, and I've been keeping tabs on the consequences: remedial training, taking experts offline to mentor people who should be job-ready, and longer procedures (which means more cost to the hospital and more risk to the patient), just to name a few. We do not want—and cannot accept—such outcomes for the global economy.
The science and realities above offer one clear implication: L&D professionals now have a strong imperative—and a huge strategic opportunity—to refocus their expertise, tools, and investment on the problem of work design. L&D practitioners have spent more than 50 years mastering the fine art of designing training to maximize key learning outcomes. Instructional design emerged as a subdiscipline, for example, and competent professionals draw on a rich palette of instructional frameworks and tools to tailor training programs to the needs of the learner and the business.
It's time to turn that design ethos toward the world's most effective training experience: work. What might an L&D function's work look like if it redirected more than half of its budget to work design? Action so bold is necessary to ensure that the fundamental engine of skill development—learning through doing—continues to function in an age of intelligent machines.
Those with prior responsibility for work design are not attending to the novice-optional problem. Yes, operations managers, leaders, and continuous improvement professionals have focused on work design for at least a century, but they do not think of work as producing skill. Rather, they think of work as requiring it. If we leave them to their own devices, they will select new work designs to extract productivity gains from the tsunami of intelligent automation, severing the expert-novice bond as they go. They have neither the incentives nor the professional training to see that problem, let alone intervene in it.
L&D can do both. It needs to lead the way and partner with stewards of work process effectiveness.
The first step is to precisely apply the science of skill development to work design. L&D professionals need clear answers to these questions: What kinds of work boost skill? What kinds of work degrade it? And why? The first third of my book, The Skill Code, provides a research-backed answer, so I hope it will be of some service there. As you develop a clearer picture, you can bring it together with insights from your operationally focused colleagues on their equivalent questions: What kinds of work increase or decrease productivity? Together, you can start to determine when those categories overlap and when they don't. The final step is to ask:
What happens if we (or our employees) drop generative AI into this work?
What, precisely, will become automated?
How will that affect the design of the work and its effects on skill and productivity?
If you take this opportunity seriously, you'll be eons ahead of the competition, and you can create systematic upskilling opportunities not possible before AI became available.
Precise design requires precise understanding of how task designs drive or compromise on-the-job skill development and productivity, so that business leaders, workers, and L&D professionals can make targeted, research-backed decisions about where and how to put AI to use.
That will require breaking down work into smaller, interdependent chunks—what computer scientists call task decomposition. No one does that with enough precision to meet the challenge at hand. Process engineers, continuous improvement professionals, robotic process automation firms, and consultancies all dissect work to the simplest level so that tasks can be improved and task sequences can become more efficient.
It would be OK for L&D professionals to start at that level and produce frameworks and predictive models to let leaders know how a work design may compromise or enhance skill. But that would be like trying to study microbes with the naked eye—ultimately, a frustrating exercise. The advent of generative AI both demands and enables L&D to go deeper. Rather than viewing work processes as bundles of tasks, start seeing tasks as bundles of subtasks, each with distinct learning potential.
Consider an assessment of a financial analyst's work. Traditional task decomposition may identify activities such as data collection, analysis, and report writing. A learning-focused decomposition would go further, identifying the specific skills a financial analyst develops through each subtask: pattern recognition in data collection, analytical reasoning in analysis, and communication skills in report writing, for instance.
You would need an explicit answer as to why those subtasks develop those skills, so you can redesign work processes to optimize productivity and learning at the same time. For example, you may find that a new, generative-AI-driven work pattern erodes communication skills because it reduces collaboration between novices and experts.
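To make that idea concrete, here is a minimal sketch of what a learning-focused decomposition could look like if you captured it as data. Everything in it—the subtask names, the skill labels, the 1-to-5 learning-value scale—is a hypothetical illustration, not a prescribed model; the point is simply that once each subtask carries a skill tag and an automation flag, you can ask which learning opportunities an AI rollout would quietly remove.

from dataclasses import dataclass, field

@dataclass
class Subtask:
    """One slice of a larger task, tagged with what doing it teaches."""
    name: str
    skill_developed: str        # e.g., "pattern recognition" (illustrative label)
    learning_value: int         # illustrative 1 (low) to 5 (high) scale
    automation_candidate: bool  # likely to be handed to generative AI?

@dataclass
class Task:
    name: str
    subtasks: list[Subtask] = field(default_factory=list)

    def learning_at_risk(self) -> list[Subtask]:
        """Subtasks whose learning value would disappear if automated away."""
        return [s for s in self.subtasks
                if s.automation_candidate and s.learning_value >= 3]

# Hypothetical decomposition of a financial analyst's reporting work
reporting = Task("quarterly client report", [
    Subtask("gather market data", "pattern recognition", 4, True),
    Subtask("build valuation model", "analytical reasoning", 5, False),
    Subtask("draft client narrative", "communication", 4, True),
])

for s in reporting.learning_at_risk():
    print(f"At risk: {s.name} -> develops {s.skill_developed}")

Even a toy model like this makes the trade-off discussable: the subtasks it flags are exactly the ones an expert would most happily hand to a machine—and exactly the ones a novice has traditionally learned from.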
Regardless of your findings, predictions, and experiments, companies need such granular understanding to design work processes that preserve critical learning opportunities even as AI tools automate certain components. And L&D holds the science-backed understanding of skill development that can unlock progress on both fronts at the same time.
To lead a focus on work design, L&D practitioners should:
Audit work to identify critical skill development opportunities that may be at risk from AI implementation. Consider mission-critical tasks and processes in your company—those that rely on highly valuable talent. Identify how using AI there may compromise healthy challenge, complexity, and connection for learners at all levels. The audit should be systematic, incorporating input from both expert practitioners and developing talent. Look particularly at tasks where AI may eliminate starter work that has traditionally served as an entry point for skill development. Assess not just the tasks themselves but also the informal learning relationships and knowledge transfer that occur around them.
Develop frameworks for assessing the learning potential of different task configurations. What kinds of situations, process flows, and role complements best ensure challenge, complexity, and connection are in healthy supply, for instance? Your organization's answer will be unique, and you're best equipped to find it. Consider starting by mapping the current state of learning networks in high-performing teams. Use those insights to create assessment tools that can evaluate new work designs against proven patterns.
Partner with business leaders to design AI implementation strategies. Operationally focused professionals are already accountable for changes to business-critical work processes to accommodate AI use and can contribute complementary expertise, resources, and credibility. And you have a new, urgent reason to be at the table—without you, they will likely choose implementation strategies that compromise workforce readiness. While you should aim for joint optimization of skill and productivity, be ready to acknowledge when that won't be cost-effective or even possible.
Create new work metrics that capture both productivity and skill development. Right now, no one has a clear, cost-effective way of assessing the productivity gains or losses associated with the use of generative AI, let alone the effects of different usage patterns on skill. Find existing work process data—or create new sources—that provides the evidence you need, settle on a way to gather it regularly, and present your analysis to workers and key decision makers. Consider buying or building dashboards that track both traditional productivity metrics and new indicators of learning effectiveness; a minimal sketch of what such a paired scorecard could look like appears after this list.
Test work designs that jointly optimize productivity with challenge, complexity, and connection. Like any organizational experiment, the efforts should focus on important areas, involve falsifiable hypotheses and controls, and enroll key stakeholders such as workers. That will ensure experiments are credible and will provide the necessary insights to guide next steps for teams, functions, and the company. Design the experiments so they encompass both immediate performance and long-term capability building.
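As a purely illustrative sketch of that paired scorecard—every field name and number below is invented for the example, not drawn from any real system or study—pairing one productivity signal with one learning signal for a pilot team and a control team might start as simply as this:

from statistics import mean

def team_scorecard(records):
    """Summarize one productivity signal and one learning signal for a team."""
    return {
        "tasks_per_week": mean(r["tasks_completed"] for r in records),
        "expert_novice_touchpoints": mean(r["touchpoints"] for r in records),
    }

# Hypothetical weekly records for an AI-assisted pilot team and a control team
pilot = [{"tasks_completed": 14, "touchpoints": 2},
         {"tasks_completed": 16, "touchpoints": 1}]
control = [{"tasks_completed": 11, "touchpoints": 5},
           {"tasks_completed": 12, "touchpoints": 6}]

print("AI-assisted pilot:", team_scorecard(pilot))
print("Control group:   ", team_scorecard(control))

# A pattern like this—productivity up, expert-novice contact down—is precisely
# the trade-off such experiments are designed to surface before it hardens into
# the organization's default way of working.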
The organizations that recognize the imperative and empower their L&D and operational functions to tackle it will be best positioned to create systematic upskilling opportunities that weren't possible before AI. Rather than allowing technology implementation to inadvertently disrupt the ancient and effective model of expert-novice collaboration, they can use it to enhance and scale that precious relationship that has been at the heart of skill development since the dawn of civilization.
For example, companies could design hybrid work processes where AI handles routine, isolated tasks while preserving human collaboration on complex problem solving. They could create structured opportunities for experts to guide novices in using AI tools and even vice versa, turning technology from a barrier into a bridge for skill development. Some organizations are already experimenting with AI-enabled simulation environments that enable novices to practice complex tasks safely. The key will be to integrate those experiences with work designs that maintain productive, problem-solving interaction with experts.
If we do nothing, the future of work will be a zero-sum game where productivity wins and skill loses. Avoiding that outcome will require a rethinking of how we measure and value workplace learning. Leaders, managers, and other process stewards must move beyond simple productivity metrics to consider the long-term implications of work design choices on their talent pipeline and organizational readiness. On the other side, L&D professionals must develop new expertise in work analysis and design while building stronger partnerships with business leaders and technology implementers.
As AI implementation continues to transform the workplace, companies that fail to preserve and enhance their skill development capabilities risk creating a workforce that is increasingly dependent on technology but increasingly unable to use it effectively or innovate beyond its current capabilities. Those that succeed, however, will build workforces that are more capable, more adaptable, and better prepared for the continuing evolution of work in the age of intelligent machines.