Pop quiz, hotshot. There’s PLM data on a bus. Once more than 5 people interact with that data it becomes important. But if that data isn’t integrated properly, it blows up. What do you do? What do you do? We’ve long lamented PLM complexity as a limiting factor to adoption, including the requisite wrangling to get product data moving efficiently across an enterprise in a loose federation of heterogeneous IT systems. For the most part, this has been the sole domain of data architects and development teams, wrangling robust SOA services onto ESB highways among a bewildering hodgepodge of enterprise data platforms, APIs, and disparate data models. But hopefully things get easier from here. One of the more interesting promises of moving PLM technology forward and into the cloud is handing the integration data modeling keys over to the nontechnical business side. Let’s try to get through this without shooting any hostages.
As much confusion as there is about Cloud PLM in general and differentiating between currently offered platforms, there’s even more confusion about cloud PLM integrations. Cloud PLM presumably is about increasing the accessibility and viability of PLM by reducing barriers of entry. No one system can do it all; you need connectivity with other stuff to achieve a true PLM vision. But as much as CAD/PLM must transform to adapt to the cloud and to simpler, more robust, and more accessible implementations, so too must integration evolve.
Integration is notoriously complicated. Two of the most common integration concepts used in the enterprise space (including PLM) are Service-Oriented Architecture (SOA) and the Enterprise Service Bus (ESB). We could spend the length of a book getting into the esoteric aspects of each, and if you’d like a deeper dive, wrap your head around this white paper from DataXtend.
For the rest of you, here’s the (kinda) short-short version: imagine each of your enterprise applications, be it PLM, ERP, CRM, etc., as individual cities with all kinds of internal transportation infrastructure to move your data around. Let’s say your data is bananas. Maybe you send bananas out of your city in a truck. Who provides the truck, who drives it, and how the truck gets to another city is another matter entirely; you just know bananas go out in the truck. The path to each city is not clear: one city might have incoming infrastructure only for trains, while another city only accommodates polka-dotted Volkswagen Beetles. Worse still, cities are connected in a mish-mash of different ways (roads, railways, water slides, zip lines), and there may be all sorts of obstacles in between. The description of each of these modes of transportation (the truck, in your case) is analogous to a service in the SOA philosophy. Getting from one city to another will be an ad hoc combination of a variety of services. Working together, those services are how your bananas move from a truck to a barge to an airplane to another truck to Keanu Reeves on a skateboard. The point is to get your bananas from city to city.
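In code terms, a service is just a stable interface that hides the transport behind it. Here’s a minimal sketch of the idea (the class, payloads, and banana cargo are all hypothetical, not any vendor’s actual API):

```python
# A minimal SOA-style service sketch: callers see only the interface,
# never the transport ("the truck") behind it. Everything here is
# illustrative, not a real integration API.

class ShippingService:
    """Moves a payload out of one 'city' (system); the how is hidden."""

    def __init__(self, transport: str):
        self.transport = transport  # truck, barge, skateboard...

    def send(self, payload: dict) -> dict:
        # A real service would serialize the payload and hand it off to
        # middleware; here we just wrap it with transport metadata.
        return {"via": self.transport, "payload": payload}

# Chaining services is how the bananas move city to city:
leg1 = ShippingService("truck").send({"item": "bananas", "qty": 12})
leg2 = ShippingService("barge").send(leg1["payload"])
```

The caller only ever talks to `send()`; swapping the truck for a barge doesn’t change a line of calling code, which is the whole point of SOA.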
If all that sounds like a mess, you’re right to think so. As the number of systems and services goes up, even well-meaning SOA can get out of hand without some underlying infrastructure, which is why you typically implement SOA on an ESB. Think of an ESB as a highway running past several cities with entrance and exit ramps, a highway that accepts all kinds of vehicles – from buses driven by Sandra Bullock to baby carriages and unicycles – all without anyone running anyone else over. Oh, thank you for the tip, Ortiz.
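In the same spirit, here’s a toy publish/subscribe bus, purely illustrative and not modeled on any real ESB product: systems publish messages to named topics on a shared bus instead of calling each other directly.

```python
# A toy ESB: systems subscribe to topics ("exit ramps") on a shared bus,
# and publishers never need to know who is listening. Hypothetical
# topic and part names throughout.

from collections import defaultdict

class Bus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        # deliver the message to every handler parked on this topic
        for handler in self.subscribers[topic]:
            handler(message)

bus = Bus()
received = []
bus.subscribe("part.released", received.append)  # the ERP's exit ramp
bus.subscribe("part.released", received.append)  # the CRM's exit ramp
bus.publish("part.released", {"part": "BAN-001"})
```

Adding a fourth or fifth subscriber never touches the publisher, which is why the highway beats a tangle of ad hoc point-to-point roads.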
So let’s refocus on cloud PLM specifically. What’s the primary challenge with SOA and ESB? From the white paper mentioned above:
The complexity that threatens many teams’ success with SOA stems from reconciling the differences in the schemas used in service interface definitions and their implied semantics. Successfully building large SOA applications requires resolving semantic differences using design processes. These processes create data transforms to convert one document format to another.
In English: you need developers. What do you NOT have if you’re a firm sold on the lower impact of cloud PLM? Developers. Sure, you could outsource, offshore, offworld, or just go to the outhouse. But most of that costs lots of dinero, of which you have even less. Don’t get dead.
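To make the white paper’s jargon concrete, here’s what one of those “data transforms” might look like, sketched in Python with entirely hypothetical PLM and ERP field names:

```python
# A schema-reconciling transform: the same part record expressed in two
# systems' document formats. All field names and rules are made up for
# illustration.

plm_doc = {"item_number": "BAN-001", "rev": "B", "desc": "Banana bracket"}

def plm_to_erp(doc: dict) -> dict:
    """Map PLM field names and semantics onto the ERP's schema."""
    return {
        "MaterialNumber": doc["item_number"],
        "Revision": doc["rev"],
        # pretend the ERP insists on uppercase text capped at 40 chars
        "ShortText": doc["desc"].upper()[:40],
    }

erp_doc = plm_to_erp(plm_doc)
```

Multiply this by every field, every document type, and every pair of systems, and you can see why this has traditionally been developer territory.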
Recent reveals at Autodesk’s Accelerate PLM 2015 try to address this problem by introducing an “evented web” concept powered by Jitterbit middleware, where event triggers in one system result in actions taken in another. This led to an interesting online discussion among thought leaders about the subtleties between technologies like Jitterbit and If This Then That (IFTTT) for driving PLM integration:
IFTTT is like scripting on steroids because it’s event-driven and linear (cause and effect), making it more appropriate for something like a workflow than for actually mapping bidirectional data models between two different systems. But for simple systems it might be just good enough, and since it’s consumer-facing, my dog can probably make use of it. Still, it has its limitations.
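An IFTTT-style rule really is that linear: one trigger, one action, no data model reconciliation in sight. A hypothetical sketch:

```python
# IFTTT-style integration: each trigger maps to exactly one linear
# action. Event names and payloads are invented for illustration.

rules = {
    "bom.approved": lambda e: f"notify purchasing about {e['item']}",
}

def on_event(name: str, event: dict):
    """Fire the matching rule, if any; otherwise do nothing."""
    action = rules.get(name)
    return action(event) if action else None

result = on_event("bom.approved", {"item": "BAN-001"})
```

Great for “when X happens, do Y”; useless for keeping two systems’ part records semantically in sync.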
Jitterbit, on the other hand, is like data modeling for dummies, specifically with Design Studio, where they have created a GUI to encapsulate what an enterprise data architect typically has to deal with in reconciling disparate data models. Consequently, Jitterbit is more ESB-like, but with serious training wheels and fabulous graphical goodness – hence, for dummies.
The $64,000 question: if Autodesk is implementing Jitterbit as if it were IFTTT for PLM360, why not open the door to IFTTT itself? Referring back to the SOA/ESB discussion at the beginning of this post (it was in there for a reason, you know), it’s important to note that point-to-point solutions just don’t scale. Once integration gets complicated enough, IFTTT just isn’t going to cut it. And while things might be simple enough at the outset, they get messy quickly. You don’t want the wheels to come off and the bomb to explode at the first bump in the road. Best to found the system on what you’re really going to need in the end: a lightweight, business-facing ESB. But there’s no need to open that door completely at the outset, because at a very early stage a point-to-point, event-driven system might be just right. It accommodates present needs while simultaneously planning for future ones. There are, of course, many more ways to slice this pie than just Jitterbit; chime in in the comments below.
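To put back-of-the-envelope numbers on that scaling argument: wiring n systems point to point means a connector for every pair, while a bus needs only one adapter per system.

```python
# Connectors needed to wire n systems together, point to point vs. a bus.

def point_to_point(n: int) -> int:
    # every pair of systems gets its own custom integration
    return n * (n - 1) // 2

def bus_adapters(n: int) -> int:
    # one adapter per system hooks onto the shared bus
    return n

mesh_links = point_to_point(10)  # 45 pairwise integrations
adapters = bus_adapters(10)      # 10 adapters
```

At three systems the two approaches cost the same; at ten, point to point needs 45 custom integrations versus 10 bus adapters, and the gap only widens from there.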