The Price of a Thousand Functionalities

Fellow blogger Andreas Lindenthal recently highlighted a rather sobering point.  Analyzing Product Lifecycle Management (PLM) implementations at clients over the last ten years, he attempted to measure the difference between deployed capability and available capability.  His conclusion:

“Most companies do not use much more today than what was available 20 years ago.”

Ouch.

While the issue here is highlighted specifically in a PLM context, the problem is characteristic of all major enterprise software platforms, ranging from Enterprise Resource Planning (ERP) to Business Process Management (BPM).  However, it seems to sting particularly with PLM.  Andreas goes on to rightly explain that there’s a huge opportunity loss.  But if everyone is staring untapped value in the face, there has to be a compelling reason why.

Some observant types reason that overlapping capability with other systems (notably ERP and CRM) creates barriers to adoption.   So a PLM system ends up being used for its perceived strengths, which traditionally lie in the core Product Data Management (PDM) functionalities.  In many cases the overlapping platforms are entrenched across departmental or functional boundaries.  Very rarely is there one system to rule them all.

While competition from overlapping systems is real to an extent, a substantial portion of the untapped value tends to live not in another enterprise platform, but rather in legacy systems, Excel spreadsheets, Dropbox, and (hold your breath) reams upon reams of flattened tree slices (paper).

Now the opportunity loss seems particularly egregious… so there has to be a simpler explanation for not using all that untapped capability.  There is:  it’s simply too hard to make incremental progress.

The wisdom from thought leaders is accurate: PLM is a journey.  Should be, anyway.  But sadly, in practice PLM often ends up as a one-shot effort – a flash in the pan.  Most of the achieved capability comes from the first strike, when all attention is turned toward the revolution.  Teams are energized and primed, consultants are everywhere, management support is focused, and money flows like a fountain.  Every project has a roadmap, but as priorities change, budgets are reallocated, and business objectives transform, team longevity fades.

You could pay consultants to come in and work a quick miracle, but even the meatiest organizations have only so much money for that.  Smaller companies have even less.  Regardless of Return on Investment (ROI), real or perceived, sometimes the funding just isn’t there.

Consequently, changes after the initial strike have to do more with less.  But making dramatic changes to the system periodically becomes too much of a business disruption; subsequent changes must be evolutionary rather than revolutionary.  And therein lies the problem.

The tools just don’t do well in that regard – chiefly due to the complexity introduced by all that extra unused functionality.  Making a change in a running, functional production environment becomes such a crucible that people are rightly afraid to pull the trigger.  You’ll probably never get a chance to properly undeploy the change, and if you’re really not careful you’ll rip a hole in the fabric of space-time.  Some of this danger could be mitigated with robust testing, but, adding insult to injury, many testing environments end up inadequate or not fully representative due to resource limitations.  So teams sit and wait for an upgrade interval instead, because when things go south, there’s a convenient scapegoat!  But the upgrade gets delayed… and the system plows on as is.

So if you’re paying good money for a system you can only begin to tap into, with only fleeting hopes of pushing it much further any time soon, what kind of situation is this?  If you’re not using 90% of a system’s functionality, how efficient is it for the 10% you are using?  And how’s that user experience?  Don’t get me started; that’s another post.

I’ve got two words for you:  Maximum Overkill.  You needed a golf cart, but you’re driving your clubs around in a monster truck, and boy is the thing expensive and hard to deal with.  It’s the price of a thousand unused functionalities.

Breaking this vicious circle means overcoming the paradigm of software being sold exclusively on functionality, so that efforts aren’t focused on an endless parade of limited-value bolt-ons, but rather on strong, robust core functionality that is flexible, accessible, and amenable to evolutionary change.  Companies that have suffered through 20 years of this are one thing, but younger companies should no doubt demand better.

  • Mahesh Beri

    “…price of a thousand unused functionalities.”
    There are two perspectives. First, COTS/OOTB solutions do not contain what the customers’ processes demand. Second, the capabilities COTS/OOTB solutions do possess are ones that users do not require.
    To give a particular example: with sourcing solutions, I saw many customers with highly evolved business practices that honestly were completely absent from any PLM. The net result: complex spreadsheets and legacy interfaces.
    Why this happens… (my thoughts)
    One part of the problem is that product development is an evolutionary process, unlike manufacturing, which is repetitive and less dynamic. So every OEM has its own set of practices, making it hard to address with a single tool.
    PLM products have invariably evolved as combinations of features abstracted from “a particular customer at a point in time” (the first customer who used that solution)… Obviously most of this is irrelevant to other customers, especially since practices evolve with the passage of time.
    So the question is: can PLM ever succeed as a canned solution loaded with features ready for use across multiple customers? Or is PLM more of a robust platform combined with the flexibility to do alignment (read: customizations) to meet particular industry/business practice needs?

    • Mahesh, thanks for your thoughtful comments. You have highlighted a crucial point: that a good portion of the feature set and/or canned behaviors is originally derived from first-adopter customers and/or directed development. And of course, we find those features become increasingly irrelevant to more customers as time passes.

      And the more irrelevant these features are, the more futile resistance will be. See what I did there?

      I don’t think a canned solution can ever succeed – simply because every business is unique. And we see that time and time again in the field – companies balk and go off the deep end of customization. Or they just refuse to use the product. Neither is a very good option.

      So are we stuck behind the invariability of canned solutions that no one wants and the bottomless pit of deep customization? Perhaps in today’s systems, yes. But I think the system of tomorrow needs to have an option halfway in between – a level of configurability clearly accessible to the business, without IT involvement, that can stand up to the evolutionary product development process. That requires abandoning some of the outright rigidity that up to this point has been a hallmark of PLM. 20 years ago, this was unthinkable. Now, it most certainly is an achievable goal.
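
      To make that halfway point a bit more concrete, here’s a minimal sketch (in Python, and purely illustrative – the rule fields, roles, and function names are hypothetical, not any particular PLM’s API) of what business-accessible configurability might look like: behavior driven by a declarative rule table that a process owner could edit, rather than logic buried in code that only IT can change.

          # Minimal sketch: change-approval routing driven by a declarative rule
          # table instead of hard-coded logic. Editing RULES changes the system's
          # behavior without touching code. All names here are hypothetical.

          RULES = [
              {"when": ("cost_impact", ">", 10000), "require": ["finance"]},
              {"when": ("part_class", "==", "regulated"), "require": ["quality", "compliance"]},
          ]

          OPS = {">": lambda a, b: a > b, "==": lambda a, b: a == b}

          def approvers_for(change_request):
              """Collect every approver role whose rule matches the change request."""
              required = []
              for rule in RULES:
                  field, op, threshold = rule["when"]
                  value = change_request.get(field)
                  if value is not None and OPS[op](value, threshold):
                      required.extend(rule["require"])
              return required

          # A costly change to a regulated part picks up all three approvers:
          print(approvers_for({"cost_impact": 25000, "part_class": "regulated"}))
          # -> ['finance', 'quality', 'compliance']

      The point isn’t the code itself – it’s where the change lives. Adjust a rule in the table and the system evolves, with no redeployment crucible.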