There’s a growing divide lurking in the heart of today’s system-of-systems product development as system complexity continues to steadily increase. Left uncorrected, the dysfunction stands to topple some of the most elaborate (and colossally expensive) enterprise software solutions to date by rendering their perceived benefits moot. All those nice, warm promises of accelerated time to market and improved execution might get tossed right out the nearest window. What could possibly do such a thing? Some kind of inter-dimensional space-potato-monster thing? Ben Affleck Batman? No, the truth is less horrifying but equally devastating: it’s the growing digital divide between hardware and software development, between Product Lifecycle Management (PLM) and software source control.
Up to this point the challenge has been met mostly with the strategy of not worrying and being happy – which works great for a reggae cover but not so much for actually overcoming obstacles. Let’s look at the situation at the epicenter of product development, where the most complex products meet many millions of lines of code. We’re talking about the very source and pillar of most PLM technologies, where requirements management and complex source control are paramount: that would be Aerospace & Defense. But for Defense especially, the prognosis isn’t so hot. An article in the December 2 issue of Aviation Week has some rather humbling revelations about the health of defense projects:
“Citing a 2011 Government Accountability Office (GAO) report that identified $402 Billion in budget overruns and schedule slippages of up to 22 months for the largest acquisition programs, [US Air Force Lt. Gen (ret.) George] Muellner says that “most major weapons systems development cost overruns are in excess of 30 percent, and because of that, several major defense acquisition programs fail.” At the Society of Experimental Test Pilots symposium in Anaheim, California in September, Muellner said the common thread to many of the issues is the increasing complexity and integrations of systems. “Software flight test has become enormous.”
Excuse me, I think my wallet is burning. But is it just testing that’s the problem? Of course not. Hold on to your butts:
“The GAO report concluded that testers had not in fact played a significant part in the endemic problems. However, it did uncover major issues, including weak alignment of the requirement, development and test communities. “All three treat it as a serial process and as a result, all the trades that need to occur don’t get done. It’s a key part of the problem,” he says. The report also noted serious flaws in systems engineering and the obvious point that without improvements in this overarching discipline, problems with inadequate software will persist.”
Mr. Wizard. Get me the hell out of here.
But why? Although many PLM systems have long included some type of source control capability, the truth is the two are very different universes. The very nature of software development, i.e., the continuous branching, forking, and merging inherent in any codebase, has proven rather unnatural for PLM. When code spans product lines in highly nonlinear ways, capturing a configuration across a hierarchical product structure becomes especially daunting, if not practically impossible. English translation: you can’t draw straight lines between hardware and software. The reaction to this problem, for the most part, has been a cheat. The most common approach is to manage software development in a wholly separate system, with design cycles completely independent of the hardware development lifecycle. In other cases, certain software milestones are represented in the hardware structure. But such representations are purely symbolic, serving to keep project schedules aligned to an arbitrary Work Breakdown Structure (WBS) and the bean-counters satiated, while doing nothing to address the fundamental design alignment. And how effective has that cheat been? Well, it looks like at least $402 billion less effective than it should be, in one example.
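To make the mismatch concrete, here’s a minimal sketch of the problem (the data and names are hypothetical, not any real PLM or SCM API): a hardware BOM assumes each node resolves to exactly one released revision, but a shared software component tracks a different branch per product line, so a full configuration can only be captured per product, not per node.

```python
# Hypothetical sketch: a hierarchical BOM assumes one revision per part,
# but shared software branches nonlinearly across product lines.

# Hardware BOM: each part has exactly one released revision per product.
hardware_bom = {
    "aircraft_A": {"airframe": "rev C", "radar": "rev B"},
    "aircraft_B": {"airframe": "rev A", "radar": "rev B"},
}

# One logical flight-software component, but each product line tracks
# its own branch -- there is no single "current" version of the code.
software_branches = {
    "flight_sw": {
        "aircraft_A": "release/a-5.2 @ commit 9f3e1c",
        "aircraft_B": "release/b-4.8 @ commit 77d0ab",
    }
}

def capture_configuration(product):
    """Resolve a complete product configuration.

    Hardware revisions are a simple per-part lookup, but software must
    be resolved per *product*, because the component's version depends
    on which branch that product line tracks. This per-context
    resolution is exactly the nonlinearity a strict hierarchy fights.
    """
    config = dict(hardware_bom[product])
    for component, branches in software_branches.items():
        config[component] = branches[product]
    return config

print(capture_configuration("aircraft_A"))
# The same component name resolves to a different version for aircraft_B.
print(capture_configuration("aircraft_B"))
```

The point of the sketch: the moment `flight_sw` appears in two products on two branches, a single hierarchical tree with one version per node can no longer represent reality, which is why milestone placeholders in the hardware structure stay purely symbolic.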
But why hasn’t this problem toppled product development entirely? For one, we humans are really good at delivering one principle in business: the brute force methodology. Otherwise known as the throw-more-warm-bodies-at-it philosophy. With reasonable time, the million monkeys at their typewriters will poop out all the works of Shakespeare, marginally stealthy fighter jets, and probably-good-enough warp drive, in approximately that order. But as complexity continues to increase, the monkeys are starting to think about jumping back to the trees.
Is there anywhere else to go? Perhaps.
“As systems become more complex and integrated, developers and testers must identify and implement new approaches to software development and testing to reduce cost and schedule impact.”
The solution lies in eliminating the serial handoff between software and hardware such that system-of-systems design occurs in parallel across all disciplines. And that will require embracing that nonlinearity. That is exceptionally hard. Such an approach likely requires all new methodologies and all new software tools, whether as one monolithic system or a collection of highly specialized, yet highly integrated, federated systems. It may seem like tumbling down the rabbit hole, but it sounds like it’s time to slam down that red pill and get started. Well, unless you happen to have an extra $402 billion lying around.