Zylem Process Mining:
A Deep Product Adventure

Intro

What is process mining? That was my question too when I joined Zylem as a UX designer. Essentially, it’s a way to analyze data from sources like IT systems and databases to identify patterns and optimize business processes. It’s like putting on a pair of X-ray goggles for your company’s operations: you can see everything that’s happening in your processes, from start to finish.

Zylem is intelligent process mining software that differentiates itself from similar products like Celonis, Minit, and Pafnow. As the first team member after the product manager, I dove into the product and built a fast MVP to test.

As a semi-game designer, I love adventure. That’s why I was excited to explore the world of process mining. I’ve worked with neural networks before, so I had a basic understanding of data mining, but process mining was a new challenge for me. In this case study, I’ll take you on a journey through our product development process, which turned out to be quite an adventure. So buckle up and get ready to learn!

The Map

When I joined the Zylem team, I realized I needed to learn the ins and outs of process mining. It was a relatively new field, so there weren’t many resources available. But lucky for us, we had not one, not two, not three, but four academic professors as product advisers. They provided me with recorded process mining classes from Germany, and I eagerly began my studies. Think of it as my very own Indiana Jones adventure, except instead of searching for treasure, I was searching for process-mining knowledge.

 

After ten days of intensive studying, I finally felt ready to begin the benchmarking process. There weren’t many process mining products on the market at the time, with Celonis being the biggest. So I decided to conduct a light benchmark of their information architecture (IA) and core features. And you know what? It was during this process that I realized why Zylem’s core values could win the race! It’s like we had our very own secret weapon, except instead of a weapon, it was a set of values that helped us stand out in the crowded process mining market.

The Map Legend

You may be wondering what our core value was. Well, it was the Zylem Connector! During the benchmarking phase, I focused on understanding the main features of a process mining tool and how they were being developed. However, I still didn’t have a clear understanding of our users, the problem we were solving, the time to market, or our plans for expanding the product. That’s why I conducted stakeholder interviews to get some answers. Here are some of the questions I wanted to address:

  • Who exactly is our target user? (Of course, we needed to conduct user research to confirm our assumptions, but I wanted to hear the stakeholders’ ideas first.)
  • What problem are we solving, and why should a company use Zylem instead of Celonis?
  • How are we solving this problem?
  • What’s our vision for the future?
  • What’s the size of the market we’re targeting?

 

After a brief discussion with the stakeholders about our target users, I asked our product advisers to connect me with current users of Minit and Celonis. I started this user research early, even before main development began. Why? Because I was new to the domain and, despite the courses I had taken, needed to understand the users of these types of products better.

📑 First User Research: Who are you!?

My goal was to conduct exploratory research in a short period of time, just to get a better understanding of users in the domain and to validate the stakeholders’ ideas. This data wasn’t going to be used directly to make decisions about the features or product structure.

 

I conducted interviews with three users of Minit and three users of Celonis. I was a bit disappointed with the number of participants, as I wasn’t sure if the data would be insightful or reliable if each participant had a different pain point. It was a tough situation for me, full of uncertainty.

 

To start the research, I created a screener document and got approval from the stakeholders. It was challenging to convince them to spend even three days on user research, as they were eager to start the product as soon as possible due to our marketing team’s strict time-to-market requirements. But after three days, I was the happiest designer ever! Why, you ask? Because five out of six participants mentioned a main pain point that I had already heard about in the stakeholder interviews.

 

I wrapped up my findings and created an affinity diagram to keep track of other pain points and started building a persona skeleton.

The Monster in the Mountains

In the realm of process mining, there once lurked a daunting beast known as the “Connector”. Allow me to elaborate. Process mining software requires an input file known as an “Event Log”, which follows a standard template across all products. This file contains crucial information such as case ID, activity, and timestamp, plus optional columns like duration and income. However, many companies lacked this type of file or table in their databases, with their event logs scattered across different tables. For example, support center logs would be in a separate table from sales department logs, and income/outcome data would be stored in a completely different table. As a result, when they purchased process mining products, engineers had to spend two to three weeks on-site organizing the data and addressing data security and privacy concerns before the event log system was finally ready for use.
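To make the event log format concrete, here’s a minimal sketch with pandas (the data is hypothetical, and the column names are illustrative, not a specific vendor’s schema):

```python
import pandas as pd

# A minimal event log: every row is one activity occurrence.
# Required columns: case ID, activity, timestamp. Optional columns
# like duration or income would simply be extra columns here.
event_log = pd.DataFrame({
    "case_id":   ["O-1001", "O-1001", "O-1001", "O-1002", "O-1002"],
    "activity":  ["Create Order", "Approve Order", "Ship Order",
                  "Create Order", "Ship Order"],
    "timestamp": pd.to_datetime([
        "2023-01-02 09:15", "2023-01-02 11:40", "2023-01-04 08:05",
        "2023-01-03 10:00", "2023-01-05 16:30",
    ]),
})

# Process mining tools replay each case's activities in timestamp
# order to reconstruct the process, e.g. the trace of order O-1001:
trace = (event_log[event_log["case_id"] == "O-1001"]
         .sort_values("timestamp")["activity"].tolist())
print(trace)  # ['Create Order', 'Approve Order', 'Ship Order']
```

This is the whole contract: once data is in this shape, any process mining tool can reconstruct and analyze the process.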

 

Not exactly a seamless process, to say the least. Fortunately, Celonis has developed some nifty solutions like templates and connectors for various databases to streamline the process. Nonetheless, the engineers still need to make their way to the company to get their hands on the data and whip it into shape. All in a day’s work for the Connector-slayers of the process mining world!

Secret Weapon: Zylem Connector

Our tech team has been hard at work building a secret weapon that can slay the monster in an instant! We’re talking about an AI that can learn from your database and automatically organize and create an event log file, regardless of the database type or how your data is populated.
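The kind of transformation the connector automates can be sketched like this. Everything below is a simplified, hypothetical example with pandas: the real connector infers the column mapping with a neural network, whereas here the mapping is written by hand.

```python
import pandas as pd

# Hypothetical scattered source tables, as they might sit in a
# company database: support logs and sales logs in separate tables.
support_logs = pd.DataFrame({
    "ticket_id": ["T-1", "T-1"],
    "event":     ["Ticket Opened", "Ticket Closed"],
    "logged_at": pd.to_datetime(["2023-03-01 09:00", "2023-03-02 17:00"]),
})
sales_logs = pd.DataFrame({
    "order_no":  ["T-1"],
    "action":    ["Refund Issued"],
    "when":      pd.to_datetime(["2023-03-02 17:30"]),
})

# Normalize each table to the shared event-log schema, then merge.
def to_event_log(df, case_col, activity_col, time_col):
    renamed = df.rename(columns={case_col: "case_id",
                                 activity_col: "activity",
                                 time_col: "timestamp"})
    return renamed[["case_id", "activity", "timestamp"]]

event_log = pd.concat([
    to_event_log(support_logs, "ticket_id", "event", "logged_at"),
    to_event_log(sales_logs, "order_no", "action", "when"),
]).sort_values("timestamp").reset_index(drop=True)

print(event_log["activity"].tolist())
# ['Ticket Opened', 'Ticket Closed', 'Refund Issued']
```

The hard part the AI solves is deciding which columns play the case/activity/timestamp roles in each table; the merge itself is mechanical.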

 

 

As part of the solution, my role was to create a user flow to provide the information that our neural network needed. Since there was no engineer on-site and the code wasn’t running on our server, there were no privacy issues to worry about. The AI, of course, needed some input information to get started on its work.

Experience Tree: User Flow

I had a clear understanding of the problem statement from the get-go. We needed to gather information from the user to create their customized connector. However, there were some important requirements that I needed to keep in mind while designing the user flow:

 

  • Since there was a lot of information to gather, the flow had to be as smooth as possible.
  • Many users have found other process mining products to be overly complicated and not user-friendly, so we needed to make sure our solution was easy to use.
  • There are different types of users with different permissions when it comes to using process mining products.
  • Companies use different types of databases, so we had to account for this in our design.
  • Finally, we wanted to make sure that users felt confident that Zylem was the right product for them.

The Solution: 3-Step Data Connector

I had several really informative meetings with our tech team to better understand how our model works and determine what information we needed as input. The model is quite complex, with many dependencies and a lot of back and forth. However, I was able to make a valuable contribution by creating a step-by-step process that makes it easy for users to provide all the necessary information without feeling overwhelmed. This IA was a helpful guide for us and streamlined our workflow.

The project was vast and intricate. To begin, I had to organize the information from scratch and break it down into coherent groups of steps for the user to follow. Fortunately, there were certain aspects that could be concealed from view until they were needed, which helped to simplify the process. All in all, it was a challenging endeavor that required a lot of effort, but I’m proud of the end result.

Then I needed to prioritize these groups based on the previous research data and our tech team’s concerns. I ended up with a user flow like this:

The first flow detailed everything I wanted to happen, while the second flow was designed to present users with grouped information. So, what exactly were these groups? Let me explain how I approached the user interface (UI) design.

 

 

The ultimate solution I came up with was a three-step data connector that could connect to databases of any complexity. Although there were many states and pages behind the scenes, I made sure that the user could see the easiest and most intuitive steps at first glance.

Flow Start: Resources

Our AI can create an event log by merging files or databases of different types that users can add in any combination. To simplify the process, I designed it to start with just the first resource and then users could add more later. Once they complete the first step, they can learn more about how the flow works and add additional resources as needed.

 

While considering different approaches, one idea was to allow users to add all resources simultaneously. However, during the Sketch phase, we decided to reject this idea because each resource type has its own flow. Managing all of the data from various resources in a single step would have made the process unnecessarily complex.

Here, I’ll walk through one sample flow: file upload.

First Step: File Upload

Users can add multiple files, with each file treated as a separate resource. To simplify the process, all files can be added in a single step because they follow the same flow. Users can upload files from a URL or their local machine, and a preview of each table is available to review before uploading.

 

This approach streamlines the process for users, allowing them to add multiple files at once while still providing a preview of each file’s contents before they are uploaded.
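A table preview like this can be produced cheaply by reading only the first few rows of the file. Here’s a sketch with pandas (`read_csv` accepts local paths, URLs, and file-like objects alike; the helper name and sample data are my own):

```python
import io
import pandas as pd

def preview_table(source, n_rows=5):
    """Read only the first n_rows of a CSV so that even very large
    files stay cheap to preview before the real upload happens."""
    return pd.read_csv(source, nrows=n_rows)

# Example with an in-memory file standing in for an uploaded CSV.
csv_data = io.StringIO("case_id,activity,timestamp\n"
                       "O-1,Create Order,2023-01-02\n"
                       "O-1,Ship Order,2023-01-04\n")
print(preview_table(csv_data, n_rows=2))
```

Because only `n_rows` rows are parsed, the preview stays fast regardless of file size.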

To streamline the process, I grouped all the file-related features together in the IA, so that users can perform all actions related to uploaded files in a single step. This includes managing files, editing them, removing them, and adding new ones.

 

By consolidating these features into a single step, users can easily navigate and manage their uploaded files without having to switch between different sections of the UI.

Second Step: Event Log Creation

This is where the magic happens! Our secret weapon needs bullets, obviously. Users need to specify what type of event log they want, and they can customize columns to achieve their desired event log.

The second step is the most complicated and needed a layer-based data visualization. What do I mean? Well, we wanted the Event Log group (defined in the IA during the previous phase) to support these features:

  • Select a template.
  • Edit columns based on what they want to monitor.
  • Edit the result based on different resources given to the algorithm.
  • Remove, add, edit, or change the type of a column.
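One way to represent those column operations is as a small list of edit actions applied to the event log. This is purely a hypothetical sketch of mine (the action names, fields, and helper are not from Zylem’s actual implementation):

```python
import pandas as pd

# Hypothetical column-edit actions a user might configure in step two.
edits = [
    {"op": "rename", "column": "ts", "to": "timestamp"},
    {"op": "cast",   "column": "timestamp", "dtype": "datetime64[ns]"},
    {"op": "remove", "column": "internal_note"},
]

def apply_edits(df, edits):
    """Apply rename/cast/remove actions to an event-log DataFrame."""
    for e in edits:
        if e["op"] == "rename":
            df = df.rename(columns={e["column"]: e["to"]})
        elif e["op"] == "cast":
            if e["dtype"].startswith("datetime"):
                df[e["column"]] = pd.to_datetime(df[e["column"]])
            else:
                df = df.astype({e["column"]: e["dtype"]})
        elif e["op"] == "remove":
            df = df.drop(columns=[e["column"]])
    return df

raw = pd.DataFrame({"case_id": ["C-1"], "activity": ["Start"],
                    "ts": ["2023-05-01"], "internal_note": ["skip me"]})
result = apply_edits(raw, edits)
print(list(result.columns))  # ['case_id', 'activity', 'timestamp']
```

Keeping the edits as data rather than immediate mutations is also what makes a layered UI possible: each layer just appends or removes actions from the list.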

 

The challenge was managing all these features and showing them exactly when they were needed. That’s where the detailed version of the user flow (mentioned earlier) was helpful, and I used it to achieve something like this:

After all the tuning, the adventure starts…

Third Step: Load Data

The final step was checking the results and loading the data. As simple as possible.

This Case Study is being updated….