PLM Failure: You Didn’t See Anything

You didn't see anything... Enterprise software, especially Product Lifecycle Management (PLM) and Enterprise Resource Planning (ERP), is complex from both a functionality and an integration perspective. Whether such software must necessarily be complex is a topic for another time. Success in any enterprise software implementation often requires dedicated resources, careful planning, technical expertise, executive sponsorship, and a receptive culture, among other things. Sometimes the results of such efforts are transformational, producing significant and measurable business benefit. In other instances, however, enterprise software implementations can and do fail, for a variety of reasons. My last post regarding PLM startups instigated a bit of unexpected controversy with regard to PLM failure. In case you're not up to speed, catch up with the last post here: Why We Need More PLM Epic Fails. The point of controversy, surprisingly, is whether PLM failure exists at all. Given that every other kind of enterprise software implementation is known to fail, is PLM somehow (perhaps magically) immune?

From the outside, PLM failure in particular may seem like an enigma: if it occurs at all, why is there such a dearth of readily available forensic evidence? Some contend that PLM failure, unless proven with discrete evidence, is likely a fantasy. ERP failures are certainly more visible; several examples of substantial ERP litigation make that quite clear. If PLM is subject to similar failure, why are there no corresponding lawsuits, or at least specific case studies on what went wrong at Company X, Y, or Z?

Michael Grieves, Florida Institute of Technology research professor and NASA consultant, takes an interesting twist on this premise on LinkedIn, likening the situation to Fermi's paradox:

“A common theme on these PLM threads is the common occurrence of PLM failures. So I would like to propose the Grieves PLM Paradox. This is based on the Fermi Paradox, which is the apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilization and humanity’s lack of contact with, or evidence for, such civilizations.

So the Grieves PLM paradox is that there is an apparent contradiction between the high estimates of PLM failures and the lack of evidence of such failures.”

So let’s talk about failure for a bit. From what I have observed, PLM fails in different, often subtle ways. Total instantaneous failure is not a common failure mode. You’re not going to see a PLM system go online, flash-ignite the server room, and delete design data while people run away in terror. Well, maybe except that one time. PLM tends to die quietly; it’s very much like drowning. Sometimes an over-scoped and under-supported initiative runs clear out of money or time. Sometimes cultural resistance or organizational change overturns any momentum. Maybe a previously overlooked technical glitch or integration shortfall becomes a deal breaker. All of these have happened before and will happen again. The quiet death is a result of the environment. These days many PLM implementations replace something else: a legacy system, whether automated or manual. Any deficiency in the new implementation can be quickly mitigated by extending the life of the legacy solution. That happens often, which is why many companies end up asking Why Build One When You Can Build Two at Twice the Price?

But what of evidence? When it comes to case studies, you’re not going to hear about them. Few companies are keen on admitting their own foul-ups, especially when internal process is involved. Any expectation of such disclosure is unreasonable. Implementation staff and consultants are keenly aware of when projects fail, which is why talk of enterprise software failure does indeed resonate in circles of expertise. But to expect those individuals to disclose the identity of those companies is asking too much. Those employees have a responsibility to keep the internal affairs of their employers and their clients confidential. Any public action to the contrary can be problematic at best, litigious at worst. So the “I could tell you, but I’d have to kill you” line: yes, there’s a reason for that. You didn’t see anything. At industry events, people are more amenable to discussing aspects of failure in private, but they won’t present such information. Most if not all events are vendor sponsored and highly focused on marketing; as such, many sessions can come across as an exercise in success theater. I think that’s one of the many reasons individuals like Yoann Maingon legitimately call for a PLM FailCon. (I really identify with that post, if you haven’t noticed already.) There’s a tremendous learning opportunity going unrealized, albeit for arguably good reasons. Failure should be an option.

So if there is no direct disclosure, what about liability? ERP failure, due to its manufacturing focus, tends to create a more visceral and immediate impact. When order and manufacturing systems break down, lines stop. People start twiddling their thumbs and deliveries are missed. Customers and income are immediately lost. PLM, as I mentioned previously, has a softer failure mode. Project schedules slip, budgets are blown, and defects compound, but production output largely does not stop immediately, because most of the time there’s an ERP already in place and a pipeline of previously developed products. So workarounds are invented, the lives of prior systems are extended, and work continues. That doesn’t make it any less of a tragedy from an effort and cost standpoint, but you can imagine that tracing the root cause to lost revenue, and ultimately to liability, is a reach for anything but the simplest of implementations. Quite frankly, it’s not the same ball game.

As for Fermi’s paradox (which incidentally makes for a catchy song), many possible resolutions have been proposed. My favorite: space-faring civilizations have their own sort of prime directive, and don’t have an equivalent of James T. Kirk to mess it all up.

  • Pingback: You need PLM project to fail… to start lifecycle | Daily PLM Think Tank Blog

  • Jim

    Most businesses do not understand software, and any vendor trying to do business will need to minimize their liability, which means minimizing their risk. Software is iterative; very few businesses integrate software well, and those that do have to weigh: is it better to make $X with the integrations we deliver now, or develop a new integration to possibly make $X+$Y? Most of the people making those decisions are conservative decision makers. Work with people you like, know, and trust, with a proven track record of doing things others thought impossible or risky.

    • Hi Jim, your point about integration risk certainly resonates. What are your thoughts about the market changing in recent years? Integration risk was certainly understandable when software suites were intended to be both centralized and vertically integrated. Now market competitiveness demands integration options among a federation of solutions. Will that change traditional vendor market approaches?

      • I get more projects these days where the integration is handed to independent integration platforms like an ESB. And at the same time, I talk with technical consultants who still want to build all the integration by customizing the PLM solution!