Wednesday, October 27, 2010

Failure and success in process modelling


Once in a while, it can be challenging to share some experience on a topic with others besides colleagues. Process modelling in the context of IT is such a topic. While process modelling rules big time in the automotive and food industries, in the IT market it is still mostly jargon and disappointment. How can that be?

There are plenty of tools, consultants and books on the subject, and still I have experienced the following in several companies:
  • The stack of process modelling tools is too high; why can't we stick to one tool?
  • New business processes are being modelled without reusing bottom-up process parts; why do we keep on reinventing the wheel?
  • Business processes are being modelled with all the realization bits and pieces in them; why can't we model the core essence of our business as a core process layer?
  • Business processes are being modelled without separating data and control flow; why can't we model the control flow and the data flow as separate flows in the same model?
  • Business processes are being modelled without being aware of the semantics of the data flow; why can't we integrate data semantics into our process models?
Of course, to answer these questions, it is necessary first to be aware of the problems.

So, what's wrong with using more than one process modelling tool? Two simple reasons:
  • Despite all the industry standards (XPDL, BPMN, etc.), only a few tools are able to integrate (share a common repository) or even synchronize. Business processes exist on different levels, e.g. the execution level and the simulation level. Different tooling creates problems and constraints on the full round trip between those levels. The result is like building the tower of Babel...
  • The licence and maintenance costs, plus the costs of knowledge, are simply a waste of money.

What's wrong with pure (and only) top-down process modelling? Two simple reasons:
  • The model will be unaware of the process parts currently available on the execution level. Not reusing those process parts causes unnecessary complexity and maintenance costs on the execution level, because you end up with business processes that are basically the same but differ slightly. Should those processes be different? Perhaps so, but there are much more effective ways of dealing with this, like polymorphism (see the sketch after this list).
  • The business case will not be challenged without reusing existing automation parts. In general, doing something without a counter force leads to solutions with 'too much fat'. Besides, we should be challenged to deliver the business proposition first with the most simple and cost-effective IT implementation. If this leads to a quick and efficient business success, we reach ROI fast and efficiently; after that, further optimisation is always possible.
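
To make the polymorphism remark concrete, here is a minimal sketch (the names, like PackagingStep, are mine and not taken from any particular tool): one execution-level process part serves two slightly different business processes by overriding only what actually differs, instead of duplicating the whole process.

```python
# Minimal sketch, not from any specific BPM tool: one core process part,
# with variants overriding only the detail that differs.
from abc import ABC, abstractmethod


class PackagingStep(ABC):
    """Core process part, reused by every business process that needs packaging."""

    def execute(self, order: dict) -> dict:
        box = self.select_box(order)
        return {"order_id": order["id"], "box": box, "status": "packaged"}

    @abstractmethod
    def select_box(self, order: dict) -> str:
        ...


class RetailPackaging(PackagingStep):
    def select_box(self, order: dict) -> str:
        return "gift box"


class WholesalePackaging(PackagingStep):
    def select_box(self, order: dict) -> str:
        return "bulk crate"


# Two slightly different business processes reuse the same execution-level part:
for step in (RetailPackaging(), WholesalePackaging()):
    print(step.execute({"id": 42}))
```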

What's wrong with modelling the business process with all the implementation bits and pieces in it? Two simple reasons:
  • Business processes that are aware of all the implementation bits and pieces will lead to unnecessary complexity and maintenance costs on the execution level. Business processes that model only the core essence can be made polymorphic and implementation-aware simply by using rule engines. Well, almost simply, because not all rule engines are accessible to business product specialists without XPath skills. A sketch of this idea follows after this list.
  • The business case will not be challenged with standardized process solutions, which will lead to 'too much fat'.
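
As an illustration of that rule-engine idea, here is a minimal sketch (my own, not the API of any specific rule engine): the core process only says "apply the routing rules", while the rules live outside the process model, where product specialists can change them without touching the process itself.

```python
# Minimal sketch, not a specific rule engine: the process model stays free of
# implementation details; the rules decide which implementation is used.
RULES = [
    # (condition on the case data, implementation detail to use)
    (lambda case: case["amount"] > 10_000, "manual-approval-queue"),
    (lambda case: case["channel"] == "web", "straight-through-service"),
    (lambda case: True,                     "default-back-office"),
]


def route(case: dict) -> str:
    """Core process step: delegate the implementation choice to the rules."""
    for condition, target in RULES:
        if condition(case):
            return target
    raise ValueError("no rule matched")


print(route({"amount": 25_000, "channel": "web"}))  # manual-approval-queue
print(route({"amount": 100, "channel": "web"}))     # straight-through-service
```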

What's wrong with modelling the data flow and the control flow as one? It breaks the good old 'separation of concerns' pattern. On the execution level, it is possible to reuse the data flow with a different control flow. For example, in control flow A there is hardly any need for knowledge workers: a typical STP (straight-through processing) case. In control flow B, on the other hand, with potentially the same data flow, there is a high need for involving knowledge workers. Without separation, this leads to unnecessary complexity and maintenance costs on the execution level.
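
A minimal, tool-agnostic sketch of that separation (the case data and step names are made up): the data flow is defined once, and two different control flows decide how and by whom it is executed.

```python
# Minimal sketch: one data flow, two control flows (STP vs. knowledge worker).

# Data flow: the sequence of data transformations, defined once and reused.
def enrich(case): return {**case, "risk": "low" if case["amount"] < 1_000 else "high"}
def price(case):  return {**case, "premium": round(case["amount"] * 0.02, 2)}

DATA_FLOW = [enrich, price]


def run_stp(case):
    """Control flow A: straight-through processing, no human intervention."""
    for step in DATA_FLOW:
        case = step(case)
    return case


def run_with_knowledge_worker(case, review):
    """Control flow B: same data flow, but high-risk cases get a human review."""
    for step in DATA_FLOW:
        case = step(case)
    if case["risk"] == "high":
        case = review(case)  # human decision added by this control flow only
    return case


print(run_stp({"amount": 500}))
print(run_with_knowledge_worker({"amount": 5_000}, review=lambda c: {**c, "approved": True}))
```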

What's wrong with modelling a business process without being aware of the semantics of the data flow? Well, suppose you own a chocolate factory and want to model a new business process for manufacturing your new 'delicious dark' label. It would save you a lot of manufacturing problems if you knew up front whether your 'delicious darks' can be produced by the same machine that is processing your 'wonderful whites'. Without integrating XML schemas into your business processes, you will not be aware of any constraints until your budget-exceeding IT project tells you so (and by then, you probably won't be amused when they tell you that your process model had a flaw).
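
To illustrate (with made-up attributes and thresholds, and a plain data structure standing in for an XML schema): once each machine declares the semantics of the product data it accepts, the question "can the white-chocolate line produce delicious darks?" can be answered before the project starts.

```python
# Made-up fields and limits; a toy stand-in for attaching schemas to the process model.
WHITE_LINE_SCHEMA = {
    "cocoa_solids_pct": range(0, 35),   # this line can't temper above 35% cocoa solids
    "conching_hours":   range(0, 12),
}

DELICIOUS_DARK = {"cocoa_solids_pct": 70, "conching_hours": 24}
WONDERFUL_WHITE = {"cocoa_solids_pct": 28, "conching_hours": 8}


def fits(product: dict, machine_schema: dict) -> bool:
    """True if every attribute of the product is within what the machine accepts."""
    return all(product[field] in allowed for field, allowed in machine_schema.items())


print(fits(WONDERFUL_WHITE, WHITE_LINE_SCHEMA))  # True
print(fits(DELICIOUS_DARK, WHITE_LINE_SCHEMA))   # False: a new process model is needed
```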

2 comments:

  1. Hi Gio

    Living almost next door to the Lindt chocolate factory in Switzerland, I like your case study of dark and white chocolate. Something like this must be going on within these sacred halls!

    I understand your point that business people can describe control flows but may have trouble when it comes to data flows. These pertain to such nasty things as XML schema and so on. I once worked on a project where business people could modify the control flow -- and the data flow was a complete (and very large) XML document that contained all data relevant to this particular transaction. Hardly very efficient but effective for what the business people needed. The individual actions or process steps just took a section of the overall payload as necessary to that particular processing step.
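
    (A hypothetical reconstruction of such a setup, not the actual project, just to picture the pattern: one large payload travels through the process and each step only touches its own section.)

```python
# Hypothetical reconstruction: each process step reads and writes only the
# section of the overall XML payload that is relevant to it.
import xml.etree.ElementTree as ET

PAYLOAD = ET.fromstring("""
<transaction>
  <order><item>delicious dark</item><quantity>1000</quantity></order>
  <shipping><address>Kilchberg</address></shipping>
</transaction>
""")


def pricing_step(payload):
    order = payload.find("order")            # this step only needs its own section
    quantity = int(order.findtext("quantity"))
    ET.SubElement(order, "price").text = str(quantity * 2.5)


def shipping_step(payload):
    shipping = payload.find("shipping")      # another step, another section
    ET.SubElement(shipping, "carrier").text = "rail"


for step in (pricing_step, shipping_step):
    step(PAYLOAD)

print(ET.tostring(PAYLOAD, encoding="unicode"))
```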

    I think we also see the gap between the business model and the execution model slowly closing in current tools. Lombardi (acquired by IBM and possibly replacing IBM's WebSphere Process Server as the strategic tool of choice in the future) has a much more consistent mapping between the two. They use a Web-based tool called Blueprint that business people use for process modelling. Blueprint offers a high-level GUI (built around GWT) and doesn't require any desktop installation. Developers and process implementers can then take these process models and put more flesh on them.

  2. Hi Thomas, thanks for your comment. I had a look at Blueprint (looks cool alright). The fact that it's based on GWT reinforces the coolness altogether :)

    However, I'm not sure if this will enforce the use of production line capabilities (without altering them all the time). Suppose I want to model a new commercial product (the delicious darks :). In this case I would like to set up a new distribution line based on the capabilities of my product lines (melting, filling, packaging, etc.). If I don't treat the dataflow of my product lines as sacred assets, every commercial product will cost me heavily in changes to the product lines.

    Quite honestly, I don't think that a business person is that interested in control flow (it should go fast without any intervention, that's it). I think a business person is interested in getting the projected yield as fast as possible. And if the actual yield is disappointing, he wants to alter the commercial product with some extra or modified features; this means altering the dataflow of the distribution line (without altering the capabilities of the product lines) as fast and flexibly as possible.

    So, isn't altering the dataflow (and having autonomy) in the distribution line far more important than altering some decision and control points in the control flow of the distribution line? And for doing this, isn't it mandatory to be aware of the semantics of the dataflow of the business process (based on a proper data model, aka a CDM) and to keep the dataflow in the product lines sacred and sustainable?
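
    To sketch what I mean (my own toy example, not any particular tool): the product-line capabilities and their dataflow stay untouched, and a new commercial product is just new product data composed over those fixed capabilities.

```python
# Toy example: product-line capabilities are sacred assets; a new commercial
# product only changes the data flowing through the distribution line.
def melt(batch):    return {**batch, "state": "melted"}
def fill(batch):    return {**batch, "moulded": True}
def package(batch): return {**batch, "packaged": True}

CAPABILITIES = [melt, fill, package]   # never altered per commercial product


def distribution_line(product_data: dict) -> dict:
    """A new commercial product is just new data composed over fixed capabilities."""
    batch = dict(product_data)
    for capability in CAPABILITIES:
        batch = capability(batch)
    return batch


print(distribution_line({"label": "delicious dark", "cocoa_solids_pct": 70}))
print(distribution_line({"label": "wonderful white", "cocoa_solids_pct": 28}))
```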
