Great progress on Lean in Program Management

There is a joint working group between MIT, INCOSE and PMI that I initiated and currently run. It is the first collaboration between INCOSE and PMI, with the goal of better integrating systems engineering and program management. The management of large engineering programs remains a massive challenge, and cost and schedule overruns are the sad norm. The US Department of Defense alone is struggling with cost overruns of $300 billion (!) in its major programs, which is close to its yearly budget for these activities.

Our group uses the six principles of Lean Management (value, value stream, flow, pull, perfection and respect for people) to identify and develop proven best practices for improving the management of programs.

The group is developing two major deliverables. The first is a list of the 10 major themes of program management challenges, based on the prioritization and aggregation of 160 individual challenges. They are:

  1. Reactive Program Execution
  2. Lack of stability, clarity and completeness of requirements
  3. Insufficient alignment and coordination of the program enterprise
  4. Value stream not optimized throughout the enterprise
  5. Unclear roles, responsibilities and accountability
  6. Mismanagement of team competency and knowledge
  7. Insufficient Program Planning
  8. Improper Metrics, Metric Systems and KPIs
  9. Lack of Active Program Risk Management
  10. Poor Program Acquisition and Contracting Practices

The second major deliverable is a list of lean enablers, or best practices, to overcome these challenges. The counter currently stands at over 100 identified best practices.

The results will be presented to the public at the annual INCOSE workshop at the end of January 2012, and feedback will be sought from the user community, both systems engineers and program managers. We plan to wrap up the final deliverable, a joint MIT-INCOSE-PMI report, in the following weeks.

On October 22, 2010, we had our latest face-to-face workshop at the PMI Global Congress. The workshop was booked out weeks in advance and a great success. It allowed us to connect with our PMI stakeholders, as well as collect their input and feedback.

If you are interested in joining the group, either as a subject matter expert or as a general member to receive regular monthly updates on our progress, please contact me.

You can find some additional information on the group at

“Reviewers’ Favourite” Award at ICED11

Our paper “Requirements for Product Development Self-Assessment Tools” (see publication list) has received the “Reviewers’ Favourite” award at the 18th International Conference on Engineering Design. I wrote the paper with Christoph Knoblinger and Katharina Helten from TU Munich, as well as two colleagues from MIT, Eric Rebentisch and Warren Seering.

The paper investigates requirements for self-assessment tools for PD organizations. It summarizes the current literature on PD-related self-assessment tools and derives tool requirements from an industry focus group (US aerospace and defense industry) as well as from interviews at a major American defense contractor. The resulting requirements are: 1. Focus on proven PD best practices; 2. Formalized implementation process; 3. Tool customization guidelines; and 4. Integration with other process improvement approaches. A gap analysis comparing these requirements to the previously identified tools is performed (see table). The biggest weakness in existing approaches is a lack of customization guidelines and integration with existing process improvement approaches.

We have also developed a prototype of an engineering self-assessment tool based on these requirements. It particularly addresses the two weaknesses mentioned above, and divides “best practice” into two categories: the competence to execute the job at hand and the ability to change and adapt.

Contact me if you are interested in more details.

5 article series on Lean PD

We have begun publishing a series of five articles on Lean Product Development in the German-language Swiss management and leadership journal io management. io management is jointly published by the ETH Center for Industrial Management and Springer. It is roughly comparable to MIT’s Sloan Management Review, in the sense that it makes current research results accessible to a management audience. Articles are usually quite concise, at 3-4 pages.

I began organizing the series late last year after a visit from a research colleague from the Chair of Technology and Innovation Management at ETH Zurich, which has an active research branch in Lean Development. The articles are written by researchers from both MIT’s Lean Advancement Initiative and ETH. The five articles address the following topics:

  • Introduction to Lean PD Management Principles (by Jörn Hoppmann from the ETH and myself) – just published
  • Continuous improvement in Lean PD (ETH researchers) – June issue
  • Value creation in Lean PD (ETH researchers) – July / August issue
  • Waste in Lean PD (myself with my colleague Eric Rebentisch from LAI) – September / October issue
  • Flexibility and Set-Based Design in Lean PD (ETH Researchers) – November / December issue

A number of articles from each issue are available online; if one of our articles becomes available this way, I will link to it from here.

Overview of Design Research

Ma Dexiu, Chairperson of Shanghai Jiao Tong University

Last Thursday, Prof. Ma Dexiu, Chairperson of Shanghai Jiao Tong University, met with my group at MIT. She wanted to learn more about research and education in product design and development. Warren Seering, Maria Yang and I had put together a presentation outlining some of the main issues policy makers should think about (from our point of view) when “designing” design research programs. The meeting started off with two great presentations by two of our students: Ming Leong on our 2.009 class, and Lennon Rodgers on “Enlight“, the product that emerged from a student design project he was part of.

Although the slides alone hardly do justice to Warren’s excellent presentation, you can download them here for a first idea of what we mean when we say “design”.

Among a number of other topics, we discussed possible future design research topics:

  • How to make products environmentally sustainable?
  • How to make each product a success?
  • How to improve design process through virtual prototyping?
  • How to integrate a higher innovation density in new products?
  • How to improve the performance of global design teams?
  • How to measure & improve performance of design projects?
  • How to understand and teach relevant design skills?

When talking about measuring the progress and success of design research, we advocated that universities should not only focus on classic metrics such as publications, patents and citations, but particularly on more difficult-to-measure factors such as:

  • “Impact” of the research
  • Industry involvement and perception
  • Adoption and dissemination
  • Influence on teaching

And no discussion of design and product development at MIT would be complete without appropriate reference to the Kauffman Report, talking about the $2 trillion in yearly sales and 1 million jobs created by MIT alumni.

Lean Product Development Whitepaper Series

I recently gave an update to the LAI consortium on the status of our Lean Product Development Whitepapers.

The goal of this whitepaper series is to make the 15 years of research on Lean Product Development at LAI more accessible to our industry partners. The current whitepaper topics include:

  • Risk Management in Lean Product Development
  • Waste in Lean Product Development
  • Lean Program Management
  • Lean Product Development Practices
  • Lean Product Development Metrics and Self Assessment

The presentation I gave includes some of the highlights of the whitepapers, and can be downloaded here.

The future whitepaper topics that are in the pipeline include:

  • Stakeholder needs generation
  • Trade space exploration & decision making
  • Product architecture & commonality management
  • IT systems in PD
  • HR development & intellectual capital
  • Teams in PD
  • Core PD process principles

Let me know if you have any ideas or suggestions.

Would I have believed myself? On evaluating the quality of reports on topics that one does not know a whole lot about

I posted a guest column on Barry Brook’s blog Brave New Climate on March 29 that I reproduce below (visit Barry’s site for his introduction). Also see the page on my 15 minutes of fame for background.

On Sunday, March 13, at 3am EST, my cousin in Japan posted an email I had written to him on his blog. The email explained the context of nuclear physics and engineering and discussed the events at the Daiichi-1 reactor up to that point. It also featured my very strong opinion that the plant was safe. By lunchtime, it was the second most twittered site on the internet (you can read the whole story at …). At the end of the day, it had been translated into more than 9 languages (often multiple times), and after 48 hours it had been read by several million people. Two weeks into my unwanted and luckily rapidly cooling-off Web 2.0 stardom, I have begun working through the trauma and reflecting. Thanks for sharing, you might think. But one question in particular came up that also has some general relevance:

Would I have believed myself if I came across that blog and had no prior knowledge of nuclear physics and engineering? Or asked another way: How do you judge the quality of TV, radio, print and internet news reporting on topics that you are only superficially familiar with?

Read the answer below. And like everything I write, it is rather lengthy!

Working in an interdisciplinary field as an academic, I often need to judge the quality of information from areas outside my core expertise and decide whether a source is reliable and worth studying. Also, when you work with students, you start to develop little antennas as you read, to judge whether the student really got what she or he is writing about, and ultimately the quality of the student’s work (although you as the supervisor of course know everything better; well, you might not always be familiar with all the details).

So let’s take the example of my email-turned-blog: imagine I lived in Japan, had no idea about nuclear science and engineering (not too big a stretch, someone just said), was looking for some info on Fukushima, and came across Jason’s blog. Do I read it? All of it? What do I do then?

My approach to evaluating any sort of report on the internet (and elsewhere) consists of 5 elements: 2 regarding trustworthiness, 2 regarding style (as a measure of the effort put into a piece, but also a good indicator of the author’s level of understanding of the subject he or she writes about) and 1 regarding content (arguably the most difficult to judge if you are not already familiar with the field). I will have to give myself credit on some of the dimensions, so I am asking ahead of time for your forgiveness of some literary narcissism in the following.

1. Judging obvious fishiness (Trust)

When you surf the web, you come across a lot of stuff that you can safely disregard immediately. So I have two criteria for an immediate go/no-go decision at the onset:

a. Context: What is the context of the information? Blogs can be places where people put great stuff, but also incredibly stupid things (as I said, just Google my name these days). In the case of Jason’s blog, no points for great existing content, but also no minus points for tons of conspiracy theories and UFO posts. 0 points

b. Hoax potential: Would I have believed the whole story (a cousin at MIT writing an email, a blog set up to share it)? Probably yes. The story looks interesting enough at first glance, and setting up a blog is little enough work. Testing the opposite hypothesis: why would anyone go to the trouble of writing such a long text, invent such a boring cover story, and then assign the authorship to a total nobody in nuclear engineering rather than some expert in the field? So again, nothing major in favor, but also not a deal killer. 0 points.

2. Trustworthiness of the author (Trust)

Again, we have two criteria:

a. Past experience in the field: Is the author an authority in the field? Google clears that one up pretty quickly: certainly not. -1 point.

b. Bias, agenda, background: Checks out, engineering guy, MIT, probably has done his homework. 1 point.

3. Style and presentation (Style)

Are the narrative and style appealing? Again, I usually use this as an indicator of the effort and level of understanding on the side of the author. Before I sent the original email off to Jason, I scanned it one more time and thought to myself, “Hm, this has actually turned into a nice piece of writing.” I probably would have had the same reaction scanning the text: well structured, flowing narrative, clear reasoning. 1 point.

4. Quality of the structure of the work (Style)

Does the article follow a logical structure? It does seem well structured. It introduces the fundamentals, then progresses to describe what had happened in Fukushima so far, drawing on those fundamentals. Seems to make sense. However, it is not an academic treatise and is strongly opinionated. Still, 1 point.

5. Content quality of the work (Content)

Here, since this is the most important category for me, I use a number of criteria:

a. Are the general fundamentals right? Are the general engineering and physics fundamentals used in the writing correct? Are the terms used correctly? Yes, 1 point.

b. Are specifics right? Are specific fundamental facts (e.g. half-life, types of elements etc.) and specific facts (sizes, amounts, temperatures, events) correct to the extent that I can verify them? Yes, 1 point.

c. Is there an uninterrupted logical flow from context and facts to interpretation? For the most part, yes. There are no logical breaks between the context, the facts being discussed in that context and the conclusions that are drawn. In its own little universe, it makes sense: no conclusions coming out of nowhere, no contradictions. However, again, the writing is not objective and is strongly opinionated. But still, 1 point.

d. Are the sources given? Does the article contain sources so I could verify the claims and facts presented by the author? No, not in the narrative, not as footnotes. -1 point.
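For what it’s worth, the tally can be sketched in a few lines of Python. This is purely my own illustration of the scoring, not any actual tool; the criterion labels and example scores are the ones from the walkthrough above.

```python
# Sketch of the assessment rubric: each criterion scores -1, 0 or +1.
# The example scores are from the email-turned-blog walkthrough.
scores = {
    "1a. Context (Trust)": 0,
    "1b. Hoax potential (Trust)": 0,
    "2a. Authority in the field (Trust)": -1,
    "2b. Bias, agenda, background (Trust)": 1,
    "3.  Narrative and style (Style)": 1,
    "4.  Logical structure (Style)": 1,
    "5a. General fundamentals (Content)": 1,
    "5b. Specific facts (Content)": 1,
    "5c. Logical flow (Content)": 1,
    "5d. Sources given (Content)": -1,
}

total = sum(scores.values())
maximum = len(scores)  # best case: +1 on every criterion
print(f"{total} out of {maximum} points")  # -> 4 out of 10 points
```

The weights are deliberately crude; the point is the habit of scoring each criterion explicitly rather than forming an overall gut impression.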

6. Possible next actions:

So, what should I do with what I just learned from reading the document? If we tally up the points for a first impression, we get 4 out of 10 points. And looking at the critical points, one of them is a biggie: no sources that would let me easily verify whether what the author claims is true or not. So what to do with it?

a. Disregard. This would mean thinking “oh my god, what a load of junk and a waste of time”. No, that is not what I would have done.

b. Use it to build a mental model of the problem and investigate further. This means I use my newly acquired knowledge to build a mental model of the problem. What is the relevant context? What are the critical facts I need to know or monitor? That mental model is then tested (can I confirm what was said about the context? can I confirm the facts that were presented?), and once that is done, I run with it to grow the context (e.g., integrating an understanding of spent fuel ponds) and interpret incoming facts (e.g., how dangerous is the latest venting of steam?).

c. Believe and be done with it. The information I just acquired solves my problem. I believe everything and am done with it (in this case, worrying about Fukushima).

As you can probably tell from the length of the discussion of the different points above, I would have gone with b. That concludes my therapeutic reflections. And maybe you will find the assessment process useful for making a more conscious choice of the news programs on TV, radio, press and internet you decide to support (I did, and that is why I love Barry and his site bravenewclimate).

Where does that leave us?

1. Help people understand the context. If you help people to understand the context, you help them to help themselves in the future. My hope is that the email made a small contribution to helping the general public, as well as some journalists, in building the context to make a better informed assessment of new facts as they come in. Do your part with your family and friends (as I had originally intended…)

2. Take a stand against mass hysteria. The email I wrote contains both an introduction to some relevant physics and engineering, as well as strong opinions about the safety of the plant that you may or may not share. One part lives on at the MIT website that was created to provide more of the same fact-based and understandable context information; the other part has hopefully inspired a couple of people to also speak their mind in a general atmosphere of panic.

3. Demand balanced and quality reporting. Demand discussions of “possible” and “most likely” scenarios in the news. Call the newspaper editor, TV station and radio station and complain about the garbage that is still put out there. Make a conscious choice regarding your news viewing, reading and listening habits. News shows are out there to produce viewers, listeners and readers that they can sell to advertisers, not quality news. If you don’t demand it, it won’t happen.