Complexity versus Difficulty

On software development projects people have to estimate the effort involved. In doing so, one of the things analysts are asked to consider is the complexity of a process, but it surprises me how often analysts confuse complexity with difficulty.

The two concepts are independent of each other. A process can be simple, but difficult to implement. Another process could be complex, but easy to implement. Yet another, simple and easy, or complex and difficult.

Complexity is a measure of how many possible paths there are through a process. The terms of reference will be specific to each organisation, but the least complex process is one that has a single path (regardless of the number of steps along that path). The measure of complexity increases with the number of possible paths and possible outcomes. Based on whatever measurement applies within their organisation, it is the Business Analyst who evaluates the complexity of a process.
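
As a rough illustration (my own sketch, not any organisation's terms of reference), the snippet below counts the distinct paths through two hypothetical flows; the step names are invented, and a real measure of complexity would also weigh the number of possible outcomes.

```python
# A rough, invented example: treating complexity as the number of distinct
# paths through a process, modelled here as a simple directed flow.
# The step names are hypothetical.

def count_paths(flow, start, end):
    """Count the distinct paths from start to end in an acyclic flow."""
    if start == end:
        return 1
    return sum(count_paths(flow, nxt, end) for nxt in flow.get(start, []))

# A single-path process: least complex, regardless of how many steps it has.
linear = {"Receive": ["Review"], "Review": ["Approve"], "Approve": []}

# A branching process: more possible paths and outcomes, hence more complex.
branching = {
    "Receive": ["Triage"],
    "Triage": ["Fast track", "Full review"],
    "Fast track": ["Close"],
    "Full review": ["Approve", "Reject"],
    "Approve": ["Close"],
    "Reject": ["Close"],
    "Close": [],
}

print(count_paths(linear, "Receive", "Approve"))   # 1
print(count_paths(branching, "Receive", "Close"))  # 3
```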

Difficulty, on the other hand, is a measure of how hard it is to implement a process. Difficulty often arises because one or more steps in a process require one system to interface with another. As such, difficulty is for the Technical Architect to measure, not the Business Analyst.

However, if you are a Pega Business Architect documenting specifications for Pega implementations, you should note that the Specification template has an attribute of ‘Complexity’ which is used to mean ‘estimated level of effort’. The Business Architect Essentials Student Guide for Pega 7.2 says on page 155:

‘Select the estimated level of effort (high, medium, or low) to implement the specification. This helps you plan your project where you can focus on high complexity tasks earlier on in development since they will likely take more time to create.’

[Screenshot: the Pega Specification template]

Since there isn’t a field in the Specification template for ‘level of effort’, the implication is that ‘Complexity’ in Pega means level of effort (i.e., difficulty). Every Pega Lead Systems Architect I have discussed this with has confirmed that the ‘Complexity’ field in the Pega Specification template is used to indicate technical difficulty rather than complexity.

Kind regards.

Declan Chellar

Introduction to the BPMN 2.0 Level 1 Palette

The slide deck below is an introduction for process modellers to the Level 1 Palette of shapes for Business Process Model and Notation (BPMN) 2.0.

When I was new to the BPMN palette years ago, I used it as I had used any other process flowcharting palette previously, i.e., I used the shapes as I saw fit. I did not realise that each shape has specific semantics and that there is a specification behind the notation, managed by the Object Management Group. Once I realised that I couldn't just make it up as I went along, I sought out training and certification, which I achieved with Bruce Silver, BPMN Yoda and author of "BPMN Method and Style".

If you are new to BPMN, I hope this slide deck will be useful as an introduction to how to use the shapes of the Level 1 Palette.

Note that I have updated the slide deck (it’s now version 1.3), so if you’ve seen it before, you might like to have another look.

Kind regards.

Declan Chellar

Data modelling as a BA technique is in decline

In my experience, very few business analysts produce models of the things a business cares about.

Of course my experience is that of one person across the whole IT industry, so my view is a thin slice through a very big cake. What’s more, my experience is limited to process-driven projects. I have no experience in, for example, Data Warehousing.

By “data model” I mean a representation of data needs. For the purposes of business architecture, such a model must be technology-agnostic and easily understood by the people who operate the business (both the BIZBOK and the BABOK use the term ‘information model’). Lately, I have started using the term “business taxonomy” but it’s important to state that this is not just a stenographed glossary of terms as dictated by the business (see ‘Things aren’t the same just because you treat them the same‘).

I started in IT in 1996 as a graduate trainee ‘Systems Engineer’ with Electronic Data Systems (EDS). The first technical skill everyone on the training programme was taught was modelling a business's data needs using an Entity Relationship Diagram. On my first big project a few years later, we also modelled data needs in the form of a Logical Class Model (which, although technology-agnostic, did not provide an adequate taxonomy of the business). However, once I got into the BPM world, I found the emphasis was on implementing not business processes but what amounted to screen flows. In the BPM world there was pressure to implement flows as quickly as possible, and the business's data needs were not analysed; they were merely documented on a screen-by-screen basis. What's more, it was not unusual for the same data to appear as fields on multiple screens in several functional areas, often labelled differently. The result was that fields which were logically the same were being implemented multiple times in physical databases. Outcome: the new systems passed function tests and user acceptance tests but over time became clogged with inconsistent and redundant data that caused technical performance problems as well as business problems.

By the way, there is no “tweaking” a database whose underlying data architecture does not reflect business reality. There is no avoiding a costly refactoring and data migration exercise.

Like most of my colleagues, I had let the skill of data modelling wither because we weren't allowed to use it, and so we forgot its importance in describing the very nature of a business. However, over the past ten years, I have come to see data modelling as a basic skill not just for business analysts but also for software developers and testers. As a freelance consultant, I have rarely had the pleasure of working on a greenfield project. I am usually hired for BPM projects where a system is being rebuilt because the first implementation didn't work, or for projects where the physical data architecture has already been laid down. In every case, there was no data architecture on the business side and the physical data architecture was based on screen requirements alone. Yet none of the highly experienced people on those projects saw that as a problem.

To be clear, I am not criticising those people; after all, until 2009 I had been working along the same lines. I'm criticising what I see as characteristics of BPM projects:

  • It's all about process (with little or no attention to data or decision modelling)
  • Build screen flows (instead of realising business processes)
  • Get the solution into production as quickly as possible (delivery, instead of adoption by the business users, is the measure of success)
  • You can always refine the process later (but you can't refine a flawed physical data architecture)

Over the years, I’ve come to realise that a clear and unambiguous business taxonomy is key to getting the right physical data architecture for a software solution. It is the very foundation upon which a solution is built. I have realised this more with each project and since 2009, I have emphasised data modelling more and more in my analysis of a business. Let me be clear: I am not saying that in my role as a business architect I should be involved in designing the physical architecture. Far from it. However, in my experience, the absence of a clear, unambiguous, technology-agnostic business taxonomy leads to a physical data architecture that does not represent the business reality.

Example:
In the UK, if you wish to make a complaint to a business which is regulated by the Financial Conduct Authority (FCA), there are essentially three stages:

  1. You make a complaint directly to the business
  2. If you're not satisfied with the response, you can take your complaint to the Financial Ombudsman Service (FOS)
  3. If you’re not satisfied with the outcome of that, you can take it to court

Note that businesses only have to report complaints to the FCA if they have not resolved them within a certain time limit.

Even that small amount of information tells you to start modelling a single business entity called "Complaint" with at least the following attributes:

  • Date and time received (for determining the age of the Complaint, e.g., for identifying Complaints to be reported to the FCA)
  • Stage

The data held against an instance of the Complaint entity will change or be added to as it passes through the stages above but it remains a single thing. Some years ago I was involved as a Pega Business Architect in the complete rebuilding of a complaints handling BPM system (built using Pega). The previous version (which had gone into production two years earlier but was not deemed fit for purpose – see my blog post “Software delivery does not equal success“) had four different Complaint entities in its physical data architecture:

  • A Complaint that has been received but is not yet reportable to the FCA
  • A Complaint that is reportable to the FCA
  • A Complaint that is being addressed by the FOS
  • A Complaint that has gone to court

In the original BPM solution, as the process passed from one stage to the next, an instance of the next type of Complaint was created and the data from the previous type was copied into it, resulting in duplication of data across four Complaint Cases that were actually just a single case. The result of this (and several other problems caused by the data architecture’s not matching the business reality) was that after two years in production, the database became bloated and data was inconsistent and unreliable. Bear in mind that there was no business taxonomy in place for Version 1, the data needs were defined only in screen designs.
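
As a minimal sketch of the difference (the attribute and stage names here are my assumptions, not the client's actual model), a single Complaint entity with a Stage attribute might look like this:

```python
# An illustrative sketch only: a single Complaint entity whose Stage changes as
# the case progresses, instead of four separate Complaint entities with data
# copied between them. Attribute and stage names are assumptions, not the
# client's actual model.

from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum

class Stage(Enum):
    WITH_BUSINESS = "Complaint made directly to the business"
    REPORTABLE_TO_FCA = "Unresolved past the time limit, reportable to the FCA"
    WITH_FOS = "Referred to the Financial Ombudsman Service"
    IN_COURT = "Taken to court"

@dataclass
class Complaint:
    received_at: datetime
    stage: Stage = Stage.WITH_BUSINESS

    def age(self, now: datetime) -> timedelta:
        """Age of the Complaint, e.g. for identifying those reportable to the FCA."""
        return now - self.received_at

# The same instance simply moves through the stages; nothing is copied.
complaint = Complaint(received_at=datetime(2016, 3, 1, 9, 30))
complaint.stage = Stage.REPORTABLE_TO_FCA
complaint.stage = Stage.WITH_FOS
```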

The root cause of the problem was that the business described it as four different complaints, instead of a single complaint that could pass through four stages, and the Lead Systems Architect built exactly what was described to him. Unfortunately, the tendency in software development is to assume that what the business says is correct, when in reality most business people do not have the linguistic rigour to clearly and unambiguously define their taxonomy. I have quoted John Ciardi in a previous blog post:

‘The language of experience is not the language of classification.’

Business people have not been trained in such rigour simply because the day-to-day operational needs of their business do not require it, but the needs of business architecture and software development do. Business analysts (in particular), software developers and testers should be that rigorous, but they don't seem to be. While they are not, businesses will continue to spend millions on rebuilds of software they have already spent millions on. A further problem is that staff churn means people move on from one project to another and are not around to learn from their mistakes. What mistakes? They delivered, didn't they? The software went into production, didn't it? Oh, wait: Software delivery does not equal success.

I’ll write more on business taxonomy and data modelling in my next post.

Kind regards.

Declan Chellar

This post attracted a lot of comments on LinkedIn. Click here to read them.

KYC is primarily about decision making

Know Your Customer (KYC) is primarily about the decisions a financial institution needs to make about a (potential) customer.

However, I have yet to find a definition online that focuses on decisions. Wikipedia’s definition focuses on the process1, while Investopedia2 and The Free Dictionary3 place emphasis on the form that has to be completed. Although the latter makes reference to decision making, it does so almost in passing.

My own definition would be as follows:

“Know Your Customer (KYC) involves the decisions a business makes to ensure due diligence is observed in evaluating risk in relation to both new and existing customers.”

Feel free to challenge my definition, as that would help me improve it.

In my experience, most business analysts are very process-centric but pay little attention to data modelling and none at all to decision modelling. In the case of KYC, the process really only serves to sequence the decisions being made and to provide activities which ensure all the necessary input data are in place before each decision is made.

The fundamental question being asked by KYC is: “Is it safe to do business with this customer?” That question might be formally documented as the name of a decision (model) thus: “Determine Customer Risk Level”. Any other decisions being made prior to asking that are really just leading up to that final question. If it were feasible to gather all the input data up front, then you’d actually only be making that one business decision (which may well consist of a thousand individual business rules) and the process itself would be very short. However, in reality it’s useful to break it up into stages of decision-making to eliminate risky customers earlier without spending too much time gathering data you don’t need.
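
As a sketch of that staging idea (the checks, names and risk levels below are purely hypothetical, not standard KYC logic or any institution's actual rules):

```python
# A hypothetical sketch of staged decision-making: each stage needs only part
# of the input data and can eliminate a risky prospect before more data is
# gathered. The checks, names and risk levels are invented.

def stage_1_sanctions_screening(prospect):
    """A cheap early check using data already to hand."""
    return prospect["name"] not in {"Blocked Person A", "Blocked Person B"}

def stage_2_identity_verification(prospect):
    """Needs more input data, so it only runs if stage 1 passed."""
    return prospect.get("identity_documents_verified", False)

def determine_customer_risk_level(prospect):
    """The final decision that the earlier stages lead up to."""
    if prospect.get("politically_exposed") or prospect.get("high_risk_jurisdiction"):
        return "High"
    return "Low"

def kyc(prospect):
    if not stage_1_sanctions_screening(prospect):
        return "Declined"  # no need to gather any further data
    if not stage_2_identity_verification(prospect):
        return "Declined"
    return determine_customer_risk_level(prospect)

print(kyc({"name": "A. Customer", "identity_documents_verified": True}))  # Low
```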

As you iterate your decision models, the data needs of each statement of logic will either trace back to attributes in the business’s taxonomy, or the taxonomy will have to be updated to reflect the new needs. This assumes, of course, that the business already has a clear and consistent taxonomy in place. Your job is much harder and success is much less likely if it doesn’t.

Many BPM tools, such as Pega4, come with built-in implementations of standard KYC decision logic. Of course, such out-of-the-box solutions need to be customised to suit the needs of each business. Before customisation, however, the Pega Business Architect needs to collaborate with the Pega Systems Architect to do a gap analysis between what the business needs and what the tool can do; and before even that, you need to model and test what the business needs.

In summary, KYC is primarily about decision making, then about the data needed in order to be able to make those decisions and finally about the process that sequences the decisions and ensures the data is in place each time a decision needs to be made. My recommendation to business analysts working on KYC is that you tackle it with those priorities in mind.

BA techniques needed:

  • Decision modelling
  • Logical data modelling
  • Business process modelling (ideally using BPMN)

Kind regards.

Declan Chellar

Software delivery does not equal success

I’ve done work for many software delivery organisations and they all have one thing in common: they think delivery of software to the client is success. I beg to differ.

Several years ago, one such organisation delivered a software solution to its client on time and within budget at a cost of roughly £15 million. I imagine backs were slapped all round and champagne corks flew. However, only about 11% of the target user population ever adopted the software, despite the fact that the software met the requirements (which tells me that the requirements were incorrect in the first place, but that's a topic for another blog post). The rest reverted to manual processes using spreadsheets because they found the new software made their job harder and slower. I won't go into detail about the causes other than to say that the business provided a handful of "subject matter experts" to represent a user base of thousands, and their requirements were taken as gospel by analysts (who acted as little more than stenographers), developers and testers, none of whom could explain so much as the business significance of a field on a screen, none of whom could cite the strategic business goals of the project, and none of whom had read the business case. Incidentally, the 11% who did use the software mainly did so because they were ordered to by their managers.

About two years after Version 1 of that software went into production, the business decided to address the issues and launched a project to improve it. Version 2 cost a further £10 million and added no new functionality at all; it merely attempted to make Version 1 less bad. Version 2 went into production on time and within budget and I imagine that, again, there was much slapping of backs and uncorking of champagne. The adoption of Version 2 went up to 18% of the target user population, despite the fact that the software met the requirements. The same issues I mentioned above had repeated themselves. Nobody had learned anything and the same mistakes were made because everyone within the delivery organisation still thought of Version 1 as a successful delivery.

Several years after kicking off the development of Version 1, and £25 million later, 88% of users were still doing their work in MS Excel!

Delivery is not success. Adoption is success. Realisation of strategic business objectives is success. The sooner delivery organisations learn this and operate accordingly, the better.

Kind regards.

Declan Chellar

Logical ERD for Pega’s Financial Services Industry Foundation 7.21

In principle, any physical model should have a corresponding logical model that represents a business’s own view of its needs in a technology-agnostic way. When it comes to data, that ideally consists of a business glossary and some sort of visual representation of the relationships between the business concepts defined in the glossary.

In my experience, most businesses don’t have a well-structured and up-to-date logical data model in place, if they have one at all. In fact, expertise in running a business does not equate to expertise in describing the taxonomy of the business and many subject matter experts are poorly trained in how to articulate their data needs.

What’s more, I have never worked on a Pega project where the physical data model was based on a logical one (before I explained the need for it, that is). I think that’s partly because Pega’s approach does not seem to pay any attention to logical data modelling (see my appraisal of Pega’s Direct Capture of Objectives approach). However, if you don’t have a meaningful taxonomy that describes clearly and unambiguously the nature of the concepts your business cares about, how can developers possibly build you a software solution that accurately represents those concepts?

When you are using a Pega Framework, such as Financial Services Industry Foundation 7.21, it comes with a ready-built class structure. However, as with any “off the shelf” solution, if you deploy it without checking to see whether it fits your needs, you are likely to run into problems. Adjusting process flows in a framework after the solution goes into production is relatively painless. However, adjusting the very structure of the underlying physical data architecture is costly once the solution goes into production and the longer you leave it, the more incorrectly-structured data it accumulates. Once a physical data architecture is in place, there is no “tweaking” it if it is wrong because it is the very foundation of the application. Fixing it involves significant refactoring and data migration. It is far more expensive than taking the time to get your data model right in the first place.

The ideal way to do a gap analysis between your business’s data needs and the capability provided by a particular technology solution is to compare your logical data model with the logical data model that underpins the technology solution. However, while Pega refers to its Financial Services Industry Foundation 7.21 entity relationship diagram as a “logical” model, Pega’s model is specific to its own technology and is therefore a physical model. Logical models are technology-agnostic.

I have created an A2-size entity relationship diagram (in Visio 2013 – download it here) that represents the kind of logical concepts that underpin the Pega FSIF 7.21 physical model. If you are a Pega Business Architect, feel free to use it and adapt it to your needs. This version 0.1 focuses on the core concepts of Party, Account and Asset, and in order to produce it I drew on my experience in modelling for financial services clients. You may have some questions regarding the cardinality of some of the relationships (e.g., why a particular relationship is "zero-to-many" instead of "one-to-many"). If so, feel free to contact me for an explanation.

I have attempted to normalise this model as much as possible, but when you add all the attributes needed by your organisation, you may find it normalises further. You will notice that several entity types on this diagram correspond to a single data class in Pega. This is because physical data models are often de-normalised for performance reasons.
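
As a small illustration of that last point (the entity and attribute names are my own assumptions, not taken from the FSIF class structure), compare a normalised logical view with a flattened physical record:

```python
# An illustrative sketch only: a normalised logical view keeps Party and
# Address as separate concepts, while a denormalised physical class might
# flatten them into a single record for performance.

from dataclasses import dataclass

# Logical (technology-agnostic) view: two related entity types.
@dataclass
class Address:
    line_1: str
    postcode: str

@dataclass
class Party:
    name: str
    registered_address: Address

# Physical view: one flattened class, roughly how several logical entity types
# can end up as a single data class.
@dataclass
class PartyRecord:
    name: str
    address_line_1: str
    address_postcode: str

party = Party("Acme Ltd", Address("1 High Street", "AB1 2CD"))
record = PartyRecord(party.name, party.registered_address.line_1,
                     party.registered_address.postcode)
```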

As usual, challenges and comments are welcome.

Kind regards.

Declan Chellar

Things aren’t the same just because you treat them the same

Several years ago a major software consultancy built a Pega solution for a client and within two years of deploying it, the client was having to rebuild it themselves. It’s not an uncommon scenario.

The client brought me in, because of my background as a Pega Business Architect, to take a look at the business architecture underlying the original solution and that is where I spotted the problems, which were almost entirely about the business’s inability to properly understand and articulate the nature of the things it cares about and the relationships between them. In other words, their inability to model their own data. This is a common issue which comes down to the following:

“The language of experience is not the language of classification” John Ciardi

The fact that a person is an expert in operating a business does not automatically make them expert in articulating the nature of that business or how it operates. In fact, their experience may even blind them to how that business should be operating.

Business subject matter experts (SMEs) often speak directly to software developers (or to requirements stenographers) using the language of experience, which is usually imprecise and inconsistent. However, computers operate using ones and zeroes. There is no room in binary code for: “You know what I mean” (which I have actually heard clients say during analysis workshops).

For example, on the project I mentioned above, I spotted that two intimately related, but quite different, business concepts were modelled as a single class in Pega. I challenged this and the main business SME’s response was that they modelled them the same because the business treats them the same. I needed to explain that the nature of a thing (data definition) and the way in which you treat it (process definition) were not the same and, moreover, defining things as different allows flexibility while still allowing them to be treated the same way.

I used the following analogy: Imagine you make fruit pies using both apples and pears. Two different, although similar, types of fruit, each with its own characteristics. However, they are treated the same way. They are washed the same way, peeled the same way, sliced the same way and put into the same pie. Because of this, you simply define them as "fruit", with no distinction as to their taxonomy. Then one day there is a fungal epidemic which dramatically reduces the pear crop among your suppliers and the price of pears doubles. You need to know how this is going to impact the cost of producing your pies. However, you cannot say, because you don't know how many pears you buy versus apples; you record it all under "fruit".

On the other hand, taking care to define apples as apples and pears as pears gives you more control of the information about them while still allowing you to put them into the same, or even different, pies. You can now report that because of the ratio of apple to pear in your pie, the production cost will go up by 30%.
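
Here is a small sketch of the analogy in code (the quantities and prices are invented to match the 30% example): recording the fruit type makes the price-impact question answerable, whereas recording everything simply as "fruit" would not.

```python
# A small sketch of the analogy: quantities and prices are invented to match
# the 30% example. Keeping the classification (apple vs pear) is what makes
# the impact question answerable.

from dataclasses import dataclass

@dataclass
class FruitPurchase:
    fruit_type: str      # "apple" or "pear": the classification we chose to keep
    kilos: float
    price_per_kilo: float

purchases = [
    FruitPurchase("apple", 70.0, 1.00),
    FruitPurchase("pear", 30.0, 1.00),
]

def production_cost(purchases, pear_price_multiplier=1.0):
    return sum(p.kilos * p.price_per_kilo *
               (pear_price_multiplier if p.fruit_type == "pear" else 1.0)
               for p in purchases)

before = production_cost(purchases)
after = production_cost(purchases, pear_price_multiplier=2.0)
print(f"Production cost rises by {100 * (after - before) / before:.0f}%")  # 30%
```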

By the way, the corollary is also true. Things are not different just because you treat them differently. A pear is still a pear whether you put it in a pie or a pudding or throw it at someone's head.

To swing away from the analogy and back to the language of business architecture, this problem stems partly from the fact that a lot of people who work on BPM projects only understand the world in terms of process, and everything is seen only through that lens. Separation of concerns is needed. The classification of a thing belongs to a separate domain from the activities that you can perform on that thing, or the decisions you can make about that thing, or the events that affect that thing.

This is something you need to bear in mind when working on Pega projects, because a Case Type in Pega represents what a Case is (data) and what you can do to it (process). I explore this further here.

Learn to see things for what they are and not just for how you treat them.

Kind regards.

Declan Chellar

Collaboration does not mean my shouldering your responsibilities

It’s not news to you that collaboration is key to success in many fields, so it might shock you, as it shocks Software Delivery Managers and Scrum Masters when I say: “I don’t care whether you deliver your software.”
 
OK, you got me, I do care because ultimately we are striving towards the same goal: realising some defined business benefit. There is an important caveat to my shocking statement: It only applies when I am working purely on the business architecture and not as a BA on the software development team, in which case I care a lot.
 
However, we all have to focus on playing our own part, so when I say: "I don't care whether you deliver your software", it's mainly to shock some people into understanding that the realisation of a defined business benefit does not necessarily come down to the success of one particular software project.
 
When I work on a technology-agnostic business architecture, I help the business understand and articulate what it needs, largely by modelling and testing those needs. I focus my attention on that purpose so that others who depend on understanding how the business needs to change can focus on their job. I do not concern myself with whether the change actually gets adopted – I leave that to the Change Manager. I do not concern myself with whether any consequent software gets delivered – I leave that to the Software Delivery Manager.
 
Business architecture is not about any one particular software project. Business architecture is essentially a set of models that describe how a business currently operates (As Is) or should be operating (I prefer the term “Should Be” over “To Be”). Implementing that Operating Model may require several software development projects (or even none); it may require a public relations campaign – in 2015 Ryanair’s changes to how it wants to operate involved a significant PR campaign; it may require a programme to train staff in new procedures and policies; it may require a programme of real estate sales and purchases and the consequent effects on staff, as at HMRC in 2016. Business architecture is bigger than any one software development project and it is certainly not about documenting software requirements, contrary to what many Product Owners and Scrum Masters assume (see “Business Architecture is about more than software requirements“).
That said, I have never met a Product Owner, Scrum Master or Software Delivery Manager who had ever seen a coherent set of business architecture models before, so their assumption that business analysis is nothing more than requirements elicitation is understandable.
 
My responsibility as a business architect ends once I have helped the business articulate and test its needs. I cannot also shoulder the responsibilities of other roles, such as to deliver the software that satisfies those needs. I make the same argument to change managers. It is up to them to deliver the needed changes that I have helped the business articulate.
 
Some see this as an unwillingness to collaborate. However, this is based on a misunderstanding of "collaboration". Think of the medical staff who collaborate in an operating theatre: everybody has to focus on their own tasks in order for the whole team to succeed. However, the anaesthetist is not failing to collaborate by not grabbing a scalpel and mucking in with the surgeon.
 
It's not just about understanding the boundaries of one's own responsibilities; it's also about making sure you have the time to fulfil those responsibilities.
 
Collaboration: know your job, do your job and thus help others to do theirs but let them worry about doing their own job.
 
Kind regards.
 
Declan Chellar

We win together or lose together

This video could almost be an analogy for so many of the IT projects I've worked on. Except that in an IT project, if you leave people alone to get on with what they're good at, everyone wins, whereas trying to win by forcing others to fail leaves everyone in a heap. In an IT project, there is no clever Aussie in the background. Everyone loses when one person tries to win glory for themselves.

This video contains profanity. Don’t watch it if that bothers you.

Process Model versus Process Map

When it comes to describing business processes, BPMN 2.0 is my notation of choice.

This post is about what I mean when I say "process model" and "process map", so let's not get hung up on the terms themselves. If you are an advocate of and expert in BPMN and you choose to call your models "maps", I'll die a little inside, but I won't argue with you over your choice of term. Here I want to talk about rich language versus simplistic language.

I've had many business analysts argue against using BPMN for describing business processes. I have always found it interesting that the ones who argue against it don't know how to use it. One senior BA I interviewed for a client expressed great confidence in his ability to model processes using BPMN (which was a mandatory skill for the role) until he made a complete mess of an exercise I gave him. Suddenly he started arguing that the client should not use BPMN.

Most BAs I have met produce process maps rather than process models, so what’s the difference? In my experience, a process map is a simplistic flowchart that is not capable of expressing all aspects of the process. As I shall explain below, process maps simply lack the vocabulary to be adequately expressive. For me, one of the key points of a model is that you can test its robustness against a thorough set of likely scenarios. You model so that you can test before you implement. Bear in mind that implementation of a process does not necessarily involve software.

If you search online for images using the search term "process map", you'll see examples that contain as few as two different shapes. The most expressive I have seen contained seven different shapes. The shapes are vocabulary. Make no mistake, such a limited vocabulary does not make process maps "simple"; it makes them simplistic. The fact that something is easy to produce does not necessarily make it good. The fact that the limited vocabulary of process mapping allows anyone in a business to do it does not necessarily make it good. In my experience, business processes tend to be subtle and layered, and the lack of expressiveness in process mapping cannot represent the true nature of the process. Against the advice attributed to Albert Einstein (make things as simple as possible, but no simpler), process mapping, in its attempt to be simple, surrenders the adequate representation of the business process.

Some BAs argue in favour of UML for modelling business processes. However:

“The OMG’s Unified Modeling Language® (UML®) helps you specify, visualize, and document models of software systems” (source)

I used to use UML activity diagrams for modelling business processes but stopped when I learned BPMN, for several reasons: firstly, because (per the OMG's statement above) that's not what UML is for; secondly, because activity diagrams suffer from the same problem of restricted vocabulary; thirdly, because there is no standard vocabulary and grammar for activity diagrams.

Note, however, that I resisted learning BPMN at first. I suspect I wanted my experience up to that point to remain valid and relevant. Then I decided that the only way to truly choose was to learn both, so I did. Sadly, I have the impression that a lot of people these days would rather remain expert in an old way of doing things than invest in learning something new.

Unlike process mapping, BPMN 2.0 has a broad vocabulary of shapes and shape types that allows us to express the richness and sophistication of business processes as simply as possible, while ensuring the consistency of the use of that language through a standard grammar and syntax. BPMN doesn’t merely allow you to model the steps of a process, but also the nature of the steps (manual, user, automated, etc), the nature of what triggers the process (a timer, the receipt of a message, a human choice, etc), how the process must respond to different types of events that take place in the world, what to do if one of many parallel paths does not complete. These are just a few examples. What’s more, BPMN also allows us to build hierarchical models in three dimensions, as opposed to the two-dimensional nature of process maps. This is analogous to having a model of a town, showing the detail of every building, rather than just a map of it. One of the key things about BPMN is that it allows you to test the process.

In Waterfall software developments, business analysts often never find out that their simplistic process maps are not adequately descriptive because, by the time everyone realises that the software doesn't actually reflect the real business process, the people who produced the process maps have moved on and never learn the truth.

In Agile software developments, they can flesh out any lack of clarity in conversations but in my experience the clarifications get reflected in the screen flows of the software design while the original process maps are never refined. The result is that after the software goes into production, it and the process maps do not match, so the maps are useful neither for training nor as a reflection of the new “As Is”. This is largely down to people’s misinterpreting the Agile principle “Value conversation over documentation” as “Value conversation instead of documentation”.

Any modelling technique is a language and business processes are usually quite sophisticated. Would you rather learn and use a rich language to describe that sophistication, or would you rather stick with 2 to 7 shapes? Here’s the hidden bonus: a richer vocabulary enables richer thinking and better problem-solving skills.

BPMN 2.0 has two palettes of shapes: Level 1 and Level 2. Learning Level 1 first allows process modellers to start getting to grips with the language. It also gives business people enough vocabulary to be able to start drawing initial drafts of models themselves, which can then be refined in collaboration with modellers who are fluent in Level 2.

For a slide deck tutorial on Level 1, click here. Anyone can learn it.

Kind regards.

Declan Chellar
