Business Activity Monitoring in Camunda

Camunda is an open-source platform for workflow and decision automation that helps organizations optimize their business processes. Organizations worldwide are turning to workflow management solutions like Camunda to streamline and orchestrate complex business processes.

However, Camunda offers more than process automation alone: the workflows it orchestrates hold valuable insights, waiting to be discovered.

You can use one of the many tools available to analyse workflow data (Anaplan, Power BI, Google Analytics, or even something created in house). Depending on your industry and use case, there will be a tool well suited to crunching your workflow data. The only remaining step is to free up the treasured data held inside Camunda. We use Kassette for this.


Open-Source iPaaS – Kassette


Kassette is a new open-source iPaaS which provides an easy way to extract data from Camunda and use it for business activity monitoring.

Kassette uses a client-server model and is intended to be used in highly secure on-prem and private cloud environments.

It consists of several components:

kassette-server – manages configuration and provides an API for data processing

kassette-transformer – a user interface for monitoring and configuring the kassette-server

kassette agents and workers – a set of lightweight, independent applications that provide integration with various external systems

Kassette requires a Postgres database for storing configuration and transient data.
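As a rough sketch of how those pieces fit together, a minimal stack could be described in Docker Compose like this (image names, ports and credentials here are assumptions, not the supported deployment layout; see the GitHub repo for that):

```yaml
# Hypothetical minimal stack: kassette-server plus the Postgres it requires.
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_USER: kassette        # assumed credentials
      POSTGRES_PASSWORD: kassette
      POSTGRES_DB: kassette
  kassette-server:
    image: kassette/kassette-server:latest   # assumed image name
    depends_on:
      - postgres
    ports:
      - "8088:8088"                  # API port used later in this demo
```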

More information on the use and deployment models can be found in our GitHub repo.


Demo


The best way to see what it can do is to run a small demo we have created. To run it you need a Linux or Mac machine with Docker installed.

Check out the source code of kassette-server and run “docker compose up” in the examples/camunda2postgres folder:


git clone git@github.com:kassette-ai/kassette-server.git

cd kassette-server/examples/camunda2postgres

docker compose up


This will start several Docker containers:

camunda-diy

Sample Java app simulating a fast-food restaurant ordering process. It uses a Camunda workflow which includes 3 process definitions. When the application starts, the Camunda Admin page can be accessed in a web browser at http://localhost:8090 with the default username/password demo/demo.

camunda-poker

Shell script simulating the restaurant customer flow and triggering various events via the Camunda REST API.
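The poker script drives the engine through Camunda's standard REST API. As a sketch of the kind of call it makes (the process definition key "order" and the variable name are assumptions; check Cockpit for the keys actually deployed by the demo), starting one more process instance would look like this:

```shell
# Build a Camunda REST API payload for starting a process instance.
# Hypothetical: process key and variable names are assumptions.
PAYLOAD='{"variables":{"customer":{"value":"customer5","type":"String"}}}'
echo "$PAYLOAD"
# With the demo running, the request would look like:
# curl -s -X POST -H 'Content-Type: application/json' \
#      -d "$PAYLOAD" \
#      http://localhost:8090/engine-rest/process-definition/key/order/start
```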

postgres

Postgres database instance which has multiple schemas:
kassette – Kassette configuration and operational database
workflow – database used by the Camunda application “camunda-diy” for storing Camunda internals and history
warehouse – database used to simulate a destination Postgres DB for exporting Camunda history

kassette-server

Kassette’s main component, responsible for processing, transforming and submitting the data pulled from the source. The API runs on localhost:8088.

kassette-transformer

Admin UI for configuring and controlling the kassette-server. Can be accessed at http://localhost:3000

kassette-camunda-agent

The Kassette Camunda agent, which polls the Camunda database for new historical events and submits them to the kassette-server.
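To make “polling the Camunda database” concrete, here is the kind of query such an agent could run. act_hi_procinst is part of Camunda 7's standard history schema, but the exact query the agent uses, and the database/user names, are assumptions:

```shell
# Illustrative only: fetch recent rows from Camunda's act_hi_procinst history table.
SQL="SELECT proc_inst_id_, proc_def_key_, start_time_, end_time_
FROM act_hi_procinst
WHERE start_time_ > now() - interval '5 minutes';"
echo "$SQL"
# Against the demo's workflow schema, e.g. (connection details are assumptions):
# docker compose exec postgres psql -U postgres -d workflow -c "$SQL"
```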

grafana

A pre-configured Grafana instance which shows a simple graph analysing the data captured from Camunda events. Can be accessed at http://localhost:3001

After a few minutes, when all services have started, you can confirm that the camunda-diy app has deployed successfully and the workflow is running by accessing Camunda Cockpit:



In the console, you should see messages indicating that the Camunda workflow is in progress:


camunda2postgres-camunda-poker-1 | CUSTOMER: customer3 --- starting employee order processing activiti
camunda2postgres-camunda-poker-1 | CUSTOMER: customer3 --- inform chef and preptime will take 716
camunda2postgres-camunda-poker-1 | CUSTOMER: customer4 --- walking into a restaurant
camunda2postgres-camunda-poker-1 | CUSTOMER: customer4 --- Getting task id which should be completed manually


The next step is to check Kassette’s configuration by accessing its web UI at http://localhost:3000/



It’s already pre-configured to use the Camunda agent as the source, and the destination is set to a Postgres database, eventlog table.

After a few seconds, data will start flowing from the Camunda database via kassette-camunda-agent to kassette-server, and will be stored in a different Postgres database in the format we define via the kassette-transformer UI.

Results can be seen directly in the Postgres DB, or explored via Grafana at http://localhost:3001 with the default username/password admin/admin.
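For the direct route, a quick psql check is enough. The eventlog table name comes from the pre-configured destination above; the database and user names are assumptions:

```shell
# Count the Camunda events Kassette has exported into the warehouse database.
SQL='SELECT count(*) FROM eventlog;'
echo "$SQL"
# e.g. (connection details are assumptions):
# docker compose exec postgres psql -U postgres -d warehouse -c "$SQL"
```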



The above demo shows how simple it is to unlock Camunda history data with Kassette. A Postgres database is used as the operational datastore.

Have you faced similar challenges? Do you have any feedback? Do not hesitate to contact us at info@metaops.solutions


Andrey Kozichev


© MetaOps 2024
