The Pillar of Intelligent Process Mining: Analytics

In my last article, I made the case that Intelligent Process Mining presents us with the “Easy Button” for generating process intelligence. A key aspect of that argument is the role that process-centric analytics play. Analytic tools help sift, compare, and combine data into an understandable context that facilitates insight into cause and effect. Conventional analytics, when applied to the simpler data sets typically presented by “isolated” process steps, certainly help make information actionable. However, as we shift from simpler use cases to the more complex data sets that characterize multi-step business processes, the design and variety of process-optimized analytics become critically important. What may not be as evident is that the underlying methodology used to aggregate and organize the data directly affects how easy it will be for analytics to generate the answers you need.

We all share a common idea of what the word “process” means: a series of events, steps, or actions that occur over time, with distinct beginning and end points, that share some common contextual relationship (e.g., they relate to the same order, insurance claim, mortgage application, doctor’s office visit, etc.). Given this common understanding, it makes sense that aggregating and displaying those events in a temporal context – as activities on a timeline, if you will – yields an end-to-end view of process execution that is naturally easier for us to interpret.

Until recently, using accurate, granular data to recreate such a comprehensive digital representation (essentially a process digital twin) for any complex process has been virtually impossible. Why? Well, for one thing, the data associated with various steps or events often resides in multiple systems of record. As it turns out, a timeline methodology allows us to overcome that limitation – extracting disparate event data and recombining it in a logical sequence. The result is an end-to-end process execution view – something never previously possible! That foundation enables us to build out multiple pillars of unique, sophisticated analytics that don’t require a PhD to understand or use. Timeline analysis makes the magic happen by combining a special sauce – granular activity data, organized across time, in the context of any entity (order, claim, patient visit, etc.) – with process-centric analytics. Suddenly, answers to questions that used to be very hard (maybe impossible given resource constraints) or time-consuming to obtain become self-evident. That’s Process Intelligence, my friend.
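To make the timeline idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the field names, the sample events, and the build_timelines helper are my own inventions for illustration, not TimelinePI’s actual schema or implementation.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical event records extracted from two separate systems of record.
# All field names and values here are illustrative only.
erp_events = [
    {"entity_id": "ORD-1001", "activity": "Order Created",  "timestamp": "2018-03-01T09:15:00"},
    {"entity_id": "ORD-1001", "activity": "Order Approved", "timestamp": "2018-03-01T11:40:00"},
]
crm_events = [
    {"entity_id": "ORD-1001", "activity": "Customer Notified", "timestamp": "2018-03-01T12:05:00"},
]

def build_timelines(*event_sources):
    """Merge events from disparate sources into one time-ordered
    timeline per entity (order, claim, patient visit, etc.)."""
    timelines = defaultdict(list)
    for source in event_sources:
        for event in source:
            timelines[event["entity_id"]].append(event)
    for events in timelines.values():
        events.sort(key=lambda e: datetime.fromisoformat(e["timestamp"]))
    return timelines

for event in build_timelines(erp_events, crm_events)["ORD-1001"]:
    print(event["timestamp"], event["activity"])
```

The point of the sketch is simply this: once every event carries an entity identifier and a timestamp, events scattered across systems of record can be merged and sorted into one coherent, time-ordered story per order, claim, or visit.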

That’s the biggest difference between process mining applications and intelligent process mining applications – the latter does the heavy lifting for you. What I’m talking about here is that easy button effect I mentioned last time. Creating process intelligence is very similar to creating business intelligence, in that its value is directly proportional to how you can slice, dice, and present information to a user. Here’s a specific example: every process mining application can organize event-related data into any number of representative schemas. But comparing those schemas against each other visually to quickly identify or isolate unusual or excessively repetitive steps is not easy. Enter a process intelligence tool called path analysis.

Path analysis employs the same underlying data but enables you to display it in a highly simplified and completely different way. It combines that view with the frequency/count, duration, and cost of these occurrences. Bingo! Now it’s possible for the human eye/brain to immediately recognize unusual patterns. Better yet, with the data presented and organized in this fashion, it’s easy to take the next step: simply drill down into the detail and apply filters to examine the very granular specifics of how any given instance was executed. I’ve attached a sub-five-minute video snippet that hopefully illustrates my point regarding path analysis:

[Embedded video: path analysis demonstration]
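For readers who like to see the mechanics, here is a rough Python sketch of the grouping idea behind path analysis. It assumes per-entity timelines shaped like the earlier sketch; the path_of and path_analysis helpers are illustrative inventions, not the product’s actual engine.

```python
from collections import Counter, defaultdict
from datetime import datetime

def path_of(events):
    """A path is simply the ordered sequence of activity names."""
    return tuple(e["activity"] for e in events)

def path_analysis(timelines):
    """Group per-entity timelines (as built in the earlier sketch) by
    identical paths, then attach frequency and average end-to-end
    duration so rare or slow variants stand out at a glance."""
    counts = Counter()
    durations = defaultdict(list)
    for events in timelines.values():
        path = path_of(events)
        counts[path] += 1
        start = datetime.fromisoformat(events[0]["timestamp"])
        end = datetime.fromisoformat(events[-1]["timestamp"])
        durations[path].append((end - start).total_seconds() / 3600)
    for path, n in counts.most_common():
        avg_hours = sum(durations[path]) / len(durations[path])
        print(f"{n:4d} instances, avg {avg_hours:5.1f} h : {' -> '.join(path)}")
```

In a listing like this, rare variants and paths that repeat the same step over and over jump immediately into view – the same pattern-spotting that a path analysis display makes visual.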

There are many other types of analyses that differentiate an intelligent process mining approach: powerful filtering mechanisms, histograms, drag-and-drop complex query builders, time-interval measurements, and trellis, cohort, and sub-process visualization, to name a few. They help us appraise issues such as process execution consistency, queue time between process steps, routing errors, specific protocol violations, and other root causes. Again, however, it’s the application of such tools, combined with the timeline-focused, entity-centric data organization, that creates a powerful, elegant tool for exposing and assessing process execution. “Elegant” is the perfect adjective in this context. Consider the definition: “Characterized by minimalism and intuitiveness while preserving exactness and precision.” This is extremely pertinent to my point: intelligent process mining enables the average Joe to evaluate data like a process expert – which promotes broader adoption of the tool and increases your organization’s Process IQ (organizational awareness, understanding, and metrics of actual process execution data).
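As one concrete example of the time-interval measurements mentioned above, queue time between steps can be sketched in a few lines. This again assumes the hypothetical event shape used in the earlier sketches, not any particular tool’s API.

```python
from datetime import datetime

def queue_times(events):
    """Measure the wait between consecutive steps on a single timeline,
    a simple stand-in for the kind of time-interval measurement an
    intelligent process mining tool surfaces out of the box."""
    for current, nxt in zip(events, events[1:]):
        gap = (datetime.fromisoformat(nxt["timestamp"])
               - datetime.fromisoformat(current["timestamp"]))
        yield current["activity"], nxt["activity"], gap

# e.g., flag any handoff that waits longer than four hours:
# for a, b, gap in queue_times(timeline):
#     if gap.total_seconds() > 4 * 3600:
#         print(f"Slow handoff: {a} -> {b} waited {gap}")
```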

Another aspect of your organization’s Process IQ is the ability to monitor process execution and alert team members so that they can correct in-flight process protocol deviations, or at least ameliorate their impact. But that’s a subject worthy of its own blog entry – and my next topic.

About the author

Joseph Rayfield is an accomplished business professional with a proven domestic and international track record of delivery in aggressive growth environments. Joe has an extensive background in technology – ranging from Data & Hosting networks through to Enterprise SaaS Software Solutions, most recently spending time focusing on Business Process and Business Intelligence solutions, providing value for Enterprise customers and partners. Joe has worked in senior management positions in EMEA, Asia Pacific and the US – currently focusing on Global Business Development for TimelinePI.

April 3, 2018