
Engineering Productivity

Engineering leaders interested in improving profitability need to understand how the Theory of Constraints can improve engineering productivity and, perhaps most importantly, under what business conditions it applies. This post reviews the evolution of throughput methodologies, from their first application in the manufacturing environment to newer approaches evolving for engineering, new product development, and R&D, where the business goals are improved profitability and new value creation for growth and competitiveness.

Theory of Constraints

Eli Goldratt’s book The Goal (1984) helped a generation of manufacturers understand the operational principles underlying the Toyota Production System and Lean Manufacturing. Goldratt defined the goal as improved profits and clarified the operational rules for running a plant, in order of priority: throughput, inventory, and operational expense, as opposed to pure cost cutting, which leads to local optima and poor profitability. He explained how the Theory of Constraints (TOC), applied through these operational rules, can improve the profitability of a manufacturing operation with stable input demand. The TOC was first applied to manufacturing operations that can be characterized as a repeatable network of dependent events with processes that are subject to statistical fluctuations. The TOC focuses on system constraints to improve throughput, inventory, and operational expense across the total production system. The key conditions that enable the TOC to achieve results in manufacturing are stable demand, moderate- to high-volume repeatable processes, and a small range of products.
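
To ground the three operational measures, here is a minimal sketch of Goldratt’s throughput accounting relationships (throughput = sales minus totally variable costs; net profit = throughput minus operational expense; ROI = net profit over investment/inventory). The figures are hypothetical; the relationships, not the numbers, are the point.

```python
# A minimal sketch of Goldratt's throughput accounting measures.
# All dollar figures below are hypothetical.

def throughput(sales: float, totally_variable_costs: float) -> float:
    """Throughput (T): the rate the system generates money through sales."""
    return sales - totally_variable_costs

def net_profit(t: float, operational_expense: float) -> float:
    """Net profit (NP) = throughput minus operational expense (OE)."""
    return t - operational_expense

def return_on_investment(np: float, investment: float) -> float:
    """ROI = net profit over investment/inventory (I)."""
    return np / investment

t = throughput(sales=1_000_000, totally_variable_costs=400_000)
np_ = net_profit(t, operational_expense=450_000)
roi = return_on_investment(np_, investment=750_000)
print(f"T = {t:,.0f}, NP = {np_:,.0f}, ROI = {roi:.1%}")
```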

Unstable Production Environments

Eli Goldratt’s paper Standing on the Shoulders of Giants (published with The Goal) went on to clarify how certain production environments and conditions can become unstable, leading to only marginal gains from applying the TOC. In this paper Goldratt described the Hitachi Tool Engineering case, where the firm had limited success with lean manufacturing because of its unstable production environment.

The three general conditions Goldratt identified that lead to unstable production environments are:

  1. Unstable Demand Per Product
  2. Unstable Overall Load On The Entire Production System
  3. Short Product Life

The first two unstable production environments fall within a manufacturing company’s means to manage because the production system can still be characterized as a network of dependent events with processes that are subject to statistical fluctuations. Full productivity gains are not achieved because of how the production system throughput reacts to the unstable input demand: a dynamic mix of products, too many different products, or input demand for different product types shifting so dynamically that the overall load on the system becomes unstable. Goldratt explains how a time-based supply chain application of the TOC, the Drum-Buffer-Rope method, can achieve improved performance under the first two conditions. Goldratt observed that low touch time production environments (touch time <<< lead time) provide enough margin to still exploit TOC benefits.
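
To make Drum-Buffer-Rope concrete, here is a minimal sketch in Python, assuming a single bottleneck work center. The buffer target, the drum’s rate, and its fluctuation range are hypothetical, and the rope is simplified to a release rule that tops up the constraint buffer rather than a full lead-time-offset schedule.

```python
import random

# A minimal Drum-Buffer-Rope sketch with one constraint work center.
# All rates and the buffer target are hypothetical.
random.seed(1)
BUFFER_TARGET = 6   # jobs held ahead of the constraint (the buffer)
DRUM_RATE = 5       # mean jobs/day the constraint can process (the drum)

buffer_queue = 0
completed = 0
for day in range(100):
    # Rope: release only what keeps the buffer at its target,
    # regardless of how much demand arrives upstream.
    released = max(0, BUFFER_TARGET - buffer_queue)
    buffer_queue += released
    # Drum: the constraint processes at its own fluctuating pace.
    processed = min(buffer_queue, random.randint(DRUM_RATE - 2, DRUM_RATE + 2))
    buffer_queue -= processed
    completed += processed

print(f"Throughput over 100 days: {completed} jobs, ending buffer: {buffer_queue}")
```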

The third unstable production environment, short product life, emerged in the 1980s from the increased pace of technological change in manufacturing operations. The turn time (lead time) performance of engineering, product development, and R&D became a factor for product companies, bringing attention to knowledge worker productivity. Goldratt observed that product development systems do not exhibit a ‘network of dependent events with processes that are subject to statistical fluctuations’. Each new product development effort tends to have a unique network of dependent events with high variability, which is consistent with a project environment. Goldratt also observed that project environments exhibit time compression, where touch time approaches lead time (lead time ~ 2 to 3 times touch time), which degrades project environment throughput.

Unstable Project Environments

To solve the unstable project environment problem, Eli Goldratt went on to develop the Critical Chain method in the 1990s. The Critical Chain method adapts the TOC to unstable project environments, with a particular emphasis on engineering development projects. In much the same format as The Goal, his book Critical Chain (1997) explains how the Critical Chain method achieves improved project performance over Critical Path methods. The goal of the Critical Chain method is to improve the flow (throughput) in project environments for stable and unstable project demand. The mental jump from manufacturing production environments to project environments is easier when one considers that most project environments are multi-project environments. Throughput in a project environment is understood as the flow of projects (and their activities) of varying size, duration, complexity, uncertainty, and novelty.

The Critical Chain method seeks to maximize project environment throughput by managing feeding buffers and capacity buffers within the project and drum buffers and capacity buffers between projects. The Critical Chain method uses buffers (time and resource) to improve productivity by reducing work in process (design in process), managing bottleneck resources, disallowing multitasking of resources, staggering projects along the constraints, prioritizing projects, and resolving resource conflicts at the system level.
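
As an illustration of pooling safety into buffers, here is a minimal sketch of one common buffer-sizing heuristic associated with Critical Chain practice, the root-square-error method (the square root of the sum of squared safety margins); Goldratt’s original rule of thumb simply sized the project buffer at half the chain. The task estimates are hypothetical.

```python
from math import sqrt

# A minimal sketch of root-square-error project buffer sizing.
# Each task has a "safe" (padded) and an "aggressive" (50%) estimate;
# padding is stripped from tasks and pooled into one project buffer.
tasks = [  # (safe_days, aggressive_days) along the critical chain
    (10, 6),
    (8, 5),
    (12, 7),
    (6, 4),
]

chain_length = sum(aggressive for _, aggressive in tasks)
project_buffer = sqrt(sum((safe - aggressive) ** 2 for safe, aggressive in tasks))

print(f"Critical chain: {chain_length} days + project buffer: {project_buffer:.1f} days")
```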

An interesting aspect of Goldratt’s Critical Chain method is how it considers behavioural issues in multi-project engineering environments. The Critical Chain method addresses:

  1. The tendency for engineers to ‘pad their estimates’ with local safety margins that degrade the efficiency of the project environment, by using lumped buffers (rather than activity-by-activity risk buffers) and focusing less on individual activity time performance.
  2. The tendency to think locally (within the project or a work area), by encouraging global thinking and avoiding multitasking.
  3. ‘Student syndrome’, the tendency for humans with time buffers to start their tasks later and waste safety margins.
  4. ‘Parkinson’s law’, the tendency not to finish tasks ahead of time even when there is a chance to, by removing activity padding.
  5. The individual project owner’s pressure to execute first (local optimization at the expense of global performance), by adopting a priority system.

An excellent review of the Critical Chain method can be found in a 2005 paper by Lechler, Ronen, and Stohr, which offers some useful simplifications that make the method more practical.

Product Development Flow

Donald Reinertsen developed a body of work parallel to Goldratt’s that explored and clarified much of the underlying principles of lean product development, from the perspective of achieving faster time-to-market in the project production environment. Reinertsen’s books Developing Products in Half The Time (1991), co-authored with Preston Smith, Managing the Design Factory (1997), and The Principles of Product Development Flow (2009) explored an economic model for design, queues in product development work, management systems, managing risk, lean engineering principles, and performance metrics appropriate for the paradigm shift from traditional utilization-based management to throughput-based management for engineering, product development, and R&D.
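
A minimal sketch of the kind of economic sequencing Reinertsen advocates: prioritizing work by cost of delay divided by duration (often called WSJF or CD3). The projects and figures below are hypothetical.

```python
# A minimal cost-of-delay-divided-by-duration (CD3) sketch.
# Hypothetical projects: (name, cost of delay in $/week, duration in weeks).
projects = [
    ("Feature A", 30_000, 6),
    ("Feature B", 10_000, 1),
    ("Feature C", 50_000, 10),
]

# The highest cost of delay per unit of duration is sequenced first.
ranked = sorted(projects, key=lambda p: p[1] / p[2], reverse=True)
for name, cost_of_delay, duration in ranked:
    print(f"{name}: CD3 = {cost_of_delay / duration:,.0f}")
```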

Reinertsen also defines Design in Process (DIP) for the project production environment, since inventory in knowledge work is measured in terms of information. The abstract nature of information inventory, and the difficulty of visualizing how it flows through a knowledge-based work environment, has probably been the single largest factor holding back broader adoption of lean product development.

Reinertsen clarifies how the project production environment differs from the manufacturing production environment: instead of a repeatable network of dependent events with processes subject to statistical fluctuations, it exhibits high variability (uncertainty, learning, experimentation), non-repetitive work (every project network is different, sometimes completely), and non-homogeneous task durations (most tasks are slightly different each time). Reinertsen’s most recent book, The Principles of Product Development Flow, in particular explores the themes of cadence, synchronization, flow control, WIP constraints, batch size, exploiting variability, queue size, fast feedback, and decentralized control to maximize throughput. Although these works provide a vast array of tools, it is difficult to see a big-picture framework suitable for practical implementation.
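
Reinertsen’s queueing argument can be made concrete with the standard M/M/1 formula for time in queue: a minimal sketch, assuming exponential arrivals and service, showing why queue time, and hence DIP, explodes as utilization approaches 100%.

```python
# A minimal M/M/1 queueing sketch: expected wait in queue grows as
# utilization / (1 - utilization) times the service time.

def queue_time(utilization: float, service_time: float = 1.0) -> float:
    """Expected time in queue for an M/M/1 system."""
    return service_time * utilization / (1.0 - utilization)

for u in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"utilization {u:.0%}: queue time = {queue_time(u):5.1f}x service time")
```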

Lean Product Development

Ronald Mascitelli, Timothy Schipper, and Mark Swets went on to develop fully integrated lean product development frameworks that operationalize these principles for engineering leaders responsible for new product development. Most importantly, they describe how to fully implement a multi-project production environment based on all the preceding methods but appropriate for an actual business environment.

Ronald Mascitelli’s Mastering Lean Product Development (2011) is perhaps the best integrated framework for the engineering, product development, and R&D leader to establish a throughput-managed, multi-project production environment. Mascitelli’s framework is an event-driven process incorporating practical lean methods to achieve the goals of improved profitability and new value creation for growth and competitiveness.

Timothy Schipper and Mark Swets published Innovative Lean Development (2010) to describe an equally powerful integrated framework that leverages fast learning cycles and rapid prototyping for project production environments with high uncertainty.

Agile Scrum

In the digital information age, as products have become software driven and in many cases entirely software based, the agile scrum methodologies that emerged in the early 2000s have operationalized software product development. Before agile scrum, the abstract nature of software development defied reliable engineering management methodologies. With agile scrum, software productivity is more manageable, efficient, and effective. Software-driven products require the integration of the agile scrum methodologies within the project production environment framework just described.

The Lean Start-Up

Up to this point we have looked at how established companies with existing demand can exploit the TOC to improve throughput, inventory, and operational expenses and so improve profitability in knowledge work. Finally, Eric Ries operationalized new-to-the-world lean product development (particularly digital offerings) for start-up founders in his book The Lean Start-Up (2011). This is the extreme unstable demand case. Ries describes how to measure productivity as validated learning, using fast iteration and customer insight to find a scalable business model before cash runs out. He applies lean principles such as small batch size in the form of the minimum viable product, the build-measure-learn loop for fast feedback, actionable metrics, and adaptability to find product/market fit. Ries observes that The Lean Start-Up approach is also applicable within existing companies, by intrapreneurs creating new value with new-to-the-world products, because this is also the extreme unstable demand case.
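
A minimal sketch of validated learning as a productivity measure, assuming a hypothetical activation metric compared across cohorts before and after an MVP change; the cohort data is invented for illustration.

```python
# A minimal validated-learning sketch: compare a per-cohort activation
# rate before and after an MVP change. All cohort data is hypothetical.
baseline = {"signups": 500, "activated": 40}
after_mvp_change = {"signups": 480, "activated": 62}

def activation_rate(cohort: dict) -> float:
    return cohort["activated"] / cohort["signups"]

lift = activation_rate(after_mvp_change) - activation_rate(baseline)
print(f"Baseline: {activation_rate(baseline):.1%}, "
      f"after change: {activation_rate(after_mvp_change):.1%}, "
      f"lift: {lift:+.1%}")
```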

Productivity Methodology Selection Based On Business Environment

Selecting the right methodology to drive business productivity requires leaders to understand their business environment and the stability of their demand. The diagram below helps characterize application and business environments.

[Diagram: Engineering Productivity TOC]

The diagram illustrates that, in both manufacturing and engineering, the nature of the work can fall into a range of demand conditions.

A key lesson from this review is that leaders should seek to throttle and smooth (WIP-constrain) the input demand if productivity improvements are to be achieved. All the available methodologies are based on the concept of flow: maximizing throughput and managing inventory (physical or information) and operational expense to achieve the business goals of improved profitability and new value creation for growth and competitiveness. As Goldratt emphasized time and again, the effectiveness of these methods depends on the key underlying condition of stable input demand, or on constraining the process input demand to ensure stable flow. As demand conditions become unstable, the lean engineering methods developed by Goldratt, Reinertsen, Mascitelli, Schipper, and Swets apply. Ries has described how new cash flow streams can be created in a lean fashion in the extreme case where demand does not yet exist.
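
The logic of WIP-constraining input demand follows from Little’s Law (average cycle time = average WIP / average throughput): a minimal sketch with hypothetical figures showing that, at a fixed throughput, capping WIP caps cycle time.

```python
# A minimal Little's Law sketch: cycle time = WIP / throughput.
# The throughput and WIP levels below are hypothetical.

def cycle_time(wip: float, throughput: float) -> float:
    return wip / throughput

throughput_per_week = 4.0  # projects completed per week
for wip in (8, 16, 32):
    weeks = cycle_time(wip, throughput_per_week)
    print(f"WIP {wip:2d} projects: cycle time {weeks:4.1f} weeks")
```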

Finally, a common theme throughout these works is that cost accounting methods and data tools are ill suited to measuring throughput, inventory, and operational expenses in pursuit of the business goals of improved profitability and new value creation for growth and competitiveness. Goldratt explores at length in The Goal why a blind focus on cost reduction leads to bad performance. This problem has persisted as throughput and TOC methods have evolved in the information age; as Reinertsen points out, DIP is invisible because R&D expenses are recognized at the time the money is spent. Information inventory and intangible assets remain a problem for cost accounting and business performance management. This will be a topic of future posts.

The Fuzzy Front End of New Value Creation

The ‘Fuzzy Front End’ of business is a firm’s new value creation nursery. The ‘Fuzzy Front End’ is the process that starts with the identification of an unmet customer need and ends with convergence on the optimum solution that a firm can repeatably produce and sell profitably in new or competitive markets. It is also the least understood, most unpredictable, and most uncertain business operating process. Firms that do this well exploit the new value creation process for sustained growth and new sources of competitive advantage. Firms that don’t have an effective new value creation process struggle to survive. Risk-averse managers avoid strategic options that involve business investments in the ‘Fuzzy Front End’.

A key question for management, then, is how to set up and efficiently and effectively operate a new value nursery that reliably generates sustained growth and new sources of competitive advantage for the firm.

The Fuzzy Front End

The ‘Fuzzy Front End’ is where new opportunities are born, developed, assessed, and nurtured, and where they begin their life as a source of value for the firm. New opportunities are born when an unmet customer need is identified. Often vague or poorly articulated, the unmet customer need requires further development to clarify the new opportunity. Once clarified, a multi-functional team of specialists from marketing, product engineering, and design sets about developing a solution to satisfy the unmet customer need in terms of price, quality, performance, and other appropriate characteristics. The ‘Fuzzy Front End’ is fuelled by creativity, innovation, insight, and customer awareness.

An efficient and effective ‘Fuzzy Front End’ requires the integration of marketing, product development, and business processes. While marketing processes are well understood, product development and engineering often are not. The lean engineering framework provides a repeatable process for product engineering to align with the marketing process; together, the integrated marketing/lean engineering framework forms an innovation process. The challenge in achieving an efficient and effective ‘Fuzzy Front End’ rests in the fact that the start and end points are subject to ambiguity. This ambiguity in start and end points is what differentiates the ‘Fuzzy Front End’ from all other repeatable business processes. Understanding the nature of the start and end points is a critical first step in setting up an efficient and effective new value nursery.

Ambiguous Start Point

Viewed in the context of the lean engineering framework, the start point of the ‘Fuzzy Front End’, the unmet customer need, is ambiguous in that a priori the firm can’t be certain that the need is valid or even exists. Sources of ambiguity in the unmet customer need include unstated wants, values, or needs that the customer did not even know they had because no such product exists in the market today.

Timothy Schipper and Mark Swets, in their book Innovative Lean Development, say that the goal at the starting point is to express stated and unstated customer needs “accurately and in a form that the design team can understand and directly apply to the project….and this requires a method that allows the team to use the same vocabulary as the users when expressing the values that the solution must apply. The method must also expose the gaps between the problems and potential solutions.” Schipper and Swets see the ‘Fuzzy Front End’ as a process of closing the user gaps.

Ambiguous End Point

The end point, convergence on an optimum solution, involves decisions, trade-offs, and selection from among multiple (if not infinite) alternatives. The resulting optimum solution is also ambiguous in that a priori the firm can’t be sure the solution will be desired by customers. Sources of ambiguity on the way to convergence include what price the customer is willing to pay, what combination of features hits the customer’s sweet spot, what technologies and building blocks should be selected to form the product, how the product should be manufactured, and how the product should be delivered and serviced along the entire product life-cycle.

The Process In-Between

The ‘Fuzzy Front End’ process between the ambiguous start and end points is knowledge-based work that involves risk, uncertainty, novelty, experimentation, complexity, creativity, and non-routine work. As much as possible, the goal is to establish an effective and efficient process, even though at the detail level it may not be as repeatable as the operations execution processes that exist in production or service. Various lean product development methods are available for an effective and efficient ‘Fuzzy Front End’ process.

Product Design For Uncertainty and Transient Advantage

Managing uncertainty in new product development is difficult in a rapidly changing world. Firms need to adopt strategies for transient advantage in turbulent markets, as recently observed by Rita McGrath. Product developers can’t wait for all the answers, though: holding out for absolute certainty risks missing market opportunities. Firms need to capture as much value as possible from new products, yet product development cycles can be long, potentially beyond the timeframe of a short wave of transient advantage.

Product developers need to give their firms maximum flexibility to exploit transient advantages. To mitigate and exploit uncertainty, they need to design in a higher degree of reliability (under uncertain conditions), robustness, versatility, flexibility, evolvability, and interoperability in their product platforms and product lines. To be successful, product developers need to understand uncertainty and clarify the design strategies available to them during the ‘fuzzy front end’ of design.

What are the varieties of uncertainty and what design strategies can be used to manage a diversity of uncertainties?

Uncertainty Continuum

The simple model of known-knowns, known-unknowns, and unknown-unknowns is a useful starting point for understanding the varieties of uncertainty, but it is not detailed enough for product development. Schlesinger, Kiefer, and Brown’s uncertainty continuum provides a deeper look, mapping varieties of uncertainty along a scale of predictability from the known to the unknown. Their continuum runs from the known along a scale of increasing unpredictability as follows:

  • Completely Predictable – You can say with certainty what the outcome of a given situation will be such as with physical laws.
  • Predictable Through Probability – The outcome can be defined to a particular confidence level using statistics but extremes may exceed bounds.
  • Predictable Through Other Analytic Methods – The outcome might be predicted through chaos theory or computer modelling, which is less precise.
  • Predictable Through Pattern Recognition, Experience, and The Like – The outcome might also be predicted based on limited prior experience or from patterns (the emerging world of big data).
  • Not Predictable At All But You Can Say What Can’t Happen – The outcome is not predictable but certain cases can be ruled out.
  • Completely Unpredictable – The outcome is completely unpredictable.

A linear scale is useful for modelling the range of predictability and classifying variables according to how well their values can be predicted for design, but it does not provide insight into the severity of events, which is important for risk mitigation and opportunity exploitation in product development.
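
One way to apply the continuum in design work is simply to classify each design variable by its predictability level: a minimal sketch using the six levels above, with hypothetical design variables.

```python
from enum import IntEnum

# A minimal sketch classifying design variables along the
# Schlesinger/Kiefer/Brown continuum. The variables are hypothetical.
class Predictability(IntEnum):
    COMPLETELY_PREDICTABLE = 1
    THROUGH_PROBABILITY = 2
    OTHER_ANALYTIC_METHODS = 3
    PATTERN_RECOGNITION = 4
    ONLY_EXCLUSIONS = 5          # can only say what can't happen
    COMPLETELY_UNPREDICTABLE = 6

design_variables = {
    "beam deflection under load": Predictability.COMPLETELY_PREDICTABLE,
    "component failure rate": Predictability.THROUGH_PROBABILITY,
    "customer adoption curve": Predictability.PATTERN_RECOGNITION,
    "competitor's next move": Predictability.COMPLETELY_UNPREDICTABLE,
}

for name, level in sorted(design_variables.items(), key=lambda kv: kv[1]):
    print(f"{level.value}. {name}: {level.name}")
```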

Uncertainty Framework

Another excellent framework for broadly understanding the uncertainty of complex systems was proposed by McManus and Hastings (based largely on experience in the US space program); it is one of the best I have seen at capturing a holistic view of managing uncertainty in product development. This framework links categories of uncertainties through risks and mitigations/exploitations to system outcomes, making it more useful to engineers.

The framework provides a top-down model that structures uncertainty and risk taxonomies to illustrate cause and effect through the relationship: <uncertainty> causes <risk/opportunity> handled by <mitigation/exploitation> resulting in <outcome>. See the paper for excellent cases illustrating the framework. I particularly like this framework because it frames the effects of uncertainty not just as downside risk but as upside opportunity that firms can exploit for transient advantage. The framework is also general in nature, allowing it to be applied and tailored to any application.
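
A minimal sketch of the cause-and-effect chain as a data structure; the field names and the example entry are illustrative, not taken from the McManus and Hastings paper.

```python
from dataclasses import dataclass

# A minimal sketch of the chain: <uncertainty> causes <risk/opportunity>
# handled by <mitigation/exploitation> resulting in <outcome>.
@dataclass
class Link:
    uncertainty: str           # e.g., lack of knowledge, known unknown
    risk_or_opportunity: str   # the effect the uncertainty causes
    handling: str              # mitigation or exploitation strategy
    outcome: str               # resulting system attribute

example = Link(
    uncertainty="Known unknown: launch-vehicle vibration environment",
    risk_or_opportunity="Risk: payload damage during ascent",
    handling="Mitigation: design margins plus qualification testing",
    outcome="Outcome: a robust payload design",
)

print(" -> ".join([example.uncertainty, example.risk_or_opportunity,
                   example.handling, example.outcome]))
```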

Varieties of uncertainty used by McManus and Hastings are:

  • Lack of Knowledge – Facts that are not known, or are known only imprecisely, that are needed to complete the system architecture in a rational way. Knowledge in this case may simply need to be collected (because it exists somewhere already) or created.
  • Lack of Definition – Things about the system in question that have not been decided or specified.
  • Statistically Characterized (Random) Variables/Phenomena – Things that cannot always be known precisely but which can be statistically characterized, or at least bounded.
  • Known Unknowns – Things that we know we do not know. They are at best bounded and may have entirely unknown values.
  • Unknown Unknowns – ‘Gotchas’ that we cannot contemplate occurring given our current understanding.

An improvement is to combine the uncertainty continuum defined by Schlesinger, Kiefer, and Brown with the front end of McManus and Hastings’ uncertainty framework, mapping more clearly how uncertainty leads to risk.

Design Strategies For Uncertainty

Both models provide guidance on design strategies that give firms flexibility for transient advantage. The uncertainty continuum suggests that at the completely predictable extreme, proven design heuristics are appropriate. At the extreme of unpredictability, a short-horizon experimental learning approach such as creaction is appropriate.

The McManus and Hastings uncertainty framework is more powerful for designers because it links uncertainty to the levers of design. McManus and Hastings provide a useful list of risk mitigation and exploitation strategies for new product developers to consider; these design strategies help fill in the middle zone of the uncertainty continuum. McManus and Hastings identify nine strategies:

  1. Margins – Designing systems to be more capable, to withstand worse environments, and to last longer than ‘necessary’.
  2. Redundancy – Including multiple copies of subsystems (or multiple copies of entire systems) to assure at least one works.
  3. Design Choices – Choosing design strategies, technologies, and/or subsystems that are not vulnerable to a known risk.
  4. Verification and Testing – Testing after production to drive out known variation, bound known unknowns, and surface unknown unknowns.
  5. Generality – Using multiple-function (sub)systems and interfaces, rather than specialized ones.
  6. Upgradeability – Designing (sub)systems that can be modified to improve or change function.
  7. Modularity, Open Architecture, and Standard Interfaces – Grouping functions into modules connected by standard interfaces in such a way that they can ‘plug and play’.
  8. Trade Space Exploration – Analyzing or simulating many possible solutions under many possible conditions (see the sketch after this list).
  9. Portfolios and Real Options – Carrying various design options forward and trimming options in a rational way as more information becomes available and/or market conditions change.
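
As an example of strategy 8, here is a minimal sketch of trade space exploration: enumerating candidate designs across parameter ranges and keeping those that satisfy the design limits. The design variables, the crude mass and endurance models, and the limits are all hypothetical.

```python
from itertools import product

# A minimal trade space exploration sketch for a hypothetical
# battery/motor design. Models and limits are invented for illustration.
battery_cells = (2, 4, 6)
motor_power_w = (100, 200, 400)

candidates = []
for cells, power in product(battery_cells, motor_power_w):
    mass_g = 50 * cells + 0.5 * power          # crude mass model
    endurance_min = 30 * cells * 100 / power   # crude endurance model
    if mass_g <= 400 and endurance_min >= 45:  # design limits
        candidates.append((cells, power, mass_g, endurance_min))

for cells, power, mass, endurance in candidates:
    print(f"{cells} cells / {power} W: {mass:.0f} g, {endurance:.0f} min")
```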

Most engineers would naturally use several of these strategies, but the list may suggest approaches that are not often considered. These nine strategies help realize a new product design with system outcomes of reliability, robustness, versatility, flexibility, evolvability, and interoperability.

Most of these design strategies add cost to the new product development project and to the product itself, but the benefit is flexibility. Firms need to weigh the cost and benefit of product flexibility in an uncertain world to support a strategy of transient advantage.