Friday, April 24, 2020

Virtual Red Hat Summit 2020, April 28-29

Next week Red Hat Summit 2020 will take place, not in San Francisco as we were hoping, but as a virtual event.  While this unfortunately won't give us the chance to meet in person, many of the keynotes and breakout sessions will be held online.

Virtual Red Hat Summit is completely FREE, so if you haven't done so yet, register today!

Below is an overview of various sessions around business automation: the latest news on Kogito, our next-gen cloud-native business automation toolkit; how to leverage Red Hat Process Automation Manager and Decision Manager for use cases that involve microservice orchestration or machine learning; and sessions where you can hear directly from our customers.  But take a look at the full agenda as well.

There will also be an opportunity to come and chat with us in the community area.  After signing in, click Explore and open up the "Middleware & cloud applications" Community Central chat room to ask questions!  Or you can just join the KIE chat channels we announced recently, anytime.

Below is the list of presentations around business automation that I am aware of!

The state-of-the-art of developer tools to build business-intelligent apps for RHPAM v7 and Kogito
Eder Ignatowicz (Red Hat), Alex Porcelli (Red Hat)

Empowering Amadeus’ competitive advantage with cloud-native decision making on Quarkus
Matteo Casalino (Amadeus), Giacomo Margaria (Amadeus), Mario Fusco (Red Hat)

Modern business workflows as microservices: How we won with Red Hat Process Automation Manager
Mauro Vocale (Red Hat), Giovanni Marigi (Red Hat)

Why building intelligent cloud-native business applications is easier with Kogito
Kris Verlaenen (Red Hat)

Cloud, sweet cloud: Feeling at home with serverless decision making using Kogito and Camel-K
Daniele Zonca (Red Hat), Edoardo Vacchi (Red Hat), Luca Burgazzoli (Red Hat)

Integrating scalable machine learning into business workflows
Rui Vieira (Red Hat)

Solve the unsolvable: Why artificial intelligent systems can solve planning problems better than humans
Satish Kale (Red Hat), Geoffrey De Smet (Red Hat)

Transforming decision automation to be cloud-based and FaaS-like at BBVA
Antonio Valle Gutierrez (BBVA), Beatriz Alzola (BBVA), Marcos Regidor (Red Hat)
This one is available on demand, so there is no specific timing.

Wednesday, April 22, 2020

Kogito 0.9.1 released

We are glad to announce the Kogito 0.9.1 release is now available!  This goes hand in hand with the Kogito Tooling 0.3.1 release.

From a feature point of view there are only minor changes compared to 0.9.0, but on top of bug fixing we have also spent quite some time on our documentation and getting-started experience.
This is a milestone for us, as we wanted to bring an end-to-end story to help you with your first steps on Kogito.  Take a look and let us know if you have further questions or recommendations!

New to Kogito? 
Check out our website
Click on the "Get Started" button.

All artefacts are available now:
  • Kogito runtime artefacts are available on Maven Central
  • Kogito examples can be found here
  • Kogito images are available on quay
  • Kogito operator is available in the OperatorHub in OpenShift
  • Kogito tooling 0.3.1 artefacts are available here
As announced last week, we've also introduced a chat channel where you can reach the core team or interact with the community, so we hope to see you all there!

Detailed release notes for 0.9.1 in JIRA can be found here.

Thursday, April 16, 2020

New community channels on Zulip Chat

We're happy to announce the immediate availability of new public chat channels for all projects under the KIE umbrella, i.e. the Kogito, Drools, jBPM and OptaPlanner communities!
Zulip Chat channels:

Inside our KIE organization you will find various streams where you can follow any of the topic discussions, create your own topic to ask a question, or even help out others.  Since most of the developers use this for their day-to-day discussions as well, you will find a lot of experts there, and a ton of information.

Please join our community of Kogito, Drools, jBPM, and OptaPlanner experts, hang out, learn and become part of the next generation of cloud-native business automation!

Wednesday, November 20, 2019

Kogito deep dive video from Devoxx

This year at Devoxx Belgium, Maciej, Edoardo and Mario held a 3h deep dive on Kogito.  Since Devoxx is awesome enough to share the recordings of all their presentations online, I wanted to give everyone the opportunity to go and watch this!

I also had the opportunity to help out at the Red Hat booth for 2 days, which was a great chance to sync up with a lot of people and do some Kogito evangelization.  And I was there live for the big announcement of Quarkus doing its 1.0 release!

Wednesday, September 18, 2019

Etymology of Kogito

After writing up an introduction to our Kogito effort, it seems people are interested in hearing a little bit more about the name: where it comes from, what the logo means, and (what seems to be the most important question) how to pronounce Kogito.  Yes, there was even a JIRA issue [KOGITO-284] opened to address this!

First, the name Kogito itself comes from:
"Cogito, ergo sum"
a Latin philosophical proposition by René Descartes, usually translated into English as "I think, therefore I am" [Wikipedia].  So Kogito simply means "I think", and refers to how users encode business knowledge using various formats (processes, rules, constraints, etc.).  The 'c' was replaced with a 'k' as a reference to Kubernetes, our target cloud platform, and to KIE, where the 'k' stands for knowledge.

"Kogito, ergo automate" therefore means, "I think, therefore I automate" and refers to the use of business automation to encode business knowledge.

Our logo is a reference to Odin, the Norse God that gave up an eye for wisdom [Wikipedia].
“According to mythology, Odin ventured to the mystical Well of Urd at the base of the world-tree that holds the cosmos together. The well was guarded by Mimir, a shadowy being who becomes all knowing by drinking the magical waters. Odin asked for a drink and Mimir replied that Odin must sacrifice an eye for a drink. Odin gouged out his own eye, dropped it into the well, and was allowed to drink from the waters of cosmic knowledge.”
Finally, how do I pronounce Kogito?  Since it comes from the Latin phrase "Cogito, ergo sum", the obvious first question is: how do you pronounce that?  As it turns out, not an easy question to answer, but in the end the Italians on our team proclaimed this to be the only correct pronunciation:
so that's with the emphasis on the first syllable, and the 'g' pronounced as 'dji'; or (if, like me, you're not skilled in phonetic notation at all ;)) just listen to the video below:

Some good news though: since it seems no mortal is able to consistently pronounce it this way, other pronunciations are completely fine too!

Monday, September 16, 2019

An intro to Kogito

The KIE team has been working for quite a few months on the Kogito project, our next-gen solution leveraging processes and rules for building intelligent cloud-native applications.


What are we trying to achieve?  Basically, when you as a developer or team are trying to build intelligent cloud-native applications, Kogito wants to help you with that by letting you use processes and rules in a way that matches that ecosystem.  Kogito focuses on making it as easy as possible for developers to turn a set of processes and/or rules into their own domain-specific cloud-native service (or set of services).

This is a continuation of the efforts of the KIE team (including the Drools, jBPM, OptaPlanner and AppFormer teams) to offer pure open-source solutions for business rules, business processes and constraint solving.  The KIE team however decided to start a new effort targeting specifically this goal, for the following reasons:
  • Technology-driven: As you will see below, there's a lot of great technology available for building cloud-native applications, but to be able to fully leverage these technologies in the context of business automation, we had to make a few radical changes.

  • Focus and innovation: We wanted to focus specifically on what is needed to build next-gen cloud-native applications, and how you can leverage processes and rules in this context.  This allows us to offer something that really fits this ecosystem and doesn't bring in additional baggage that isn't relevant.
So while this effort builds on years of experience and battle-tested capabilities, this also allowed us to leave some baggage behind and focus 100% on the problem at hand.

Kogito, ergo cloud
When you're building cloud-native applications, there's a lot of great technology out there (some of which you're probably already using).  Kogito is closely aligned with and leverages these technologies, so you can build highly scalable cloud-native services with extremely quick startup times and a low footprint. Picking up some of these technologies and truly taking advantage of them sometimes required quite radical changes (so this is definitely not a lift-and-shift of our existing engines, but something built from the ground up).

For example:
  • Kubernetes is our target platform for building and managing containerized applications at scale.
  • Quarkus is the new native Java stack for Kubernetes that you can leverage when you build Kogito applications and it's a game changer.  But don't worry, if you are building your applications with Spring Boot, we will help you with that as well!
  • GraalVM allows you to use native compilation, resulting in extremely quick startup times (a native Kogito service starts about 100x faster, on the order of milliseconds) and minimal footprint, which is almost a necessity in this ecosystem nowadays, especially if you are looking at small serverless applications.  If you're interested in what's behind this, I would recommend reading Mario's blog about this.
  • Building serverless applications? Leverage Knative and Kogito together so your applications can scale up or down to zero based on the need.
  • Kogito applications behave like any other service you build, so you can instantly leverage technologies like Prometheus and Grafana for monitoring and analytics with optional extensions.
  • Internally we leverage quite a lot of other core middleware technologies like Kafka, Infinispan, Keycloak, etc. This means we take care of setting these up on demand (for our internal messaging, persistence and security requirements, for example), but we strongly encourage you to start leveraging these technologies for your own use cases as well.

Kogito, ergo developer

We want to make the life of developers easy, by offering them instant productivity and making sure we integrate well with how they are building their applications.  So rather than asking developers to come to us with their requirements, we are coming to them!
  • The tooling required to build your processes and rules needs to be closely integrated with the workflow the developer is already using to build cloud-native services.  Therefore we have spent a lot of time on allowing this tooling to be embeddable.  For example, we just released the first alpha release of our VSCode extension (see video below, credits to Alex) which allows you to edit your processes (still using the BPMN 2.0 standard) from within VSCode, next to your other application code.  We're working on a similar experience for Eclipse Che.
  • Instant productivity means it should be trivial to develop, build and deploy your service locally so you can test and debug without delay.  Both Quarkus and Spring Boot offer a dev mode to achieve this, Quarkus even offering live reload of your processes and rules in your running application (extremely useful in combination with the advanced debug capabilities).
  • Once you're ready to start deploying your service into the cloud, we take advantage of the Operator Framework to guide you through every step.  The operator automates a lot of the steps for you.  For example, you can just give it a link to where your application code lives in git, and the operator can check it out, build it (if necessary including native compilation) and deploy the resulting service.  We are working on extending this to also provision (on demand) more of the optional services that you might need (like for example a Keycloak instance for security, or Infinispan for your persistence requirements).  We also offer a Command Line Interface (CLI) to simplify some of these tasks.

Kogito, ergo domain

Kogito has a strong focus on building your own domain-specific services.  While we hope you can leverage our technology to significantly help with that, we want developers to be able to build the service they need, exactly how they want it.  As a result, the fact that Kogito is leveraged to do a lot of the hard work is typically hidden and your service exposes itself as any other with its own domain-specific APIs.
To achieve this, Kogito relies a lot on code generation.  By doing so we can take care of 80% of the work, as we can generate a domain-specific service (or services) for you, based on the process(es) and/or rule(s) you have written.  For example, a process for onboarding employees could result in remote REST API endpoints being generated that you can use to onboard new employees or get information on their status (all using domain-specific JSON data).
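As a rough sketch of what consuming such a generated service looks like (the `/onboarding` path and JSON shape below are hypothetical; Kogito derives the actual endpoints and payloads from your own process and data model), a client could build its request with plain `java.net.http`:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class OnboardingClient {

    // Build a POST against a hypothetical generated endpoint.
    // The payload is domain-specific JSON, not a generic engine API.
    static HttpRequest newOnboardingRequest(String name, String dept) {
        String json = "{\"employee\":{\"name\":\"" + name
                + "\",\"department\":\"" + dept + "\"}}";
        return HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/onboarding"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = newOnboardingRequest("Jane Doe", "Engineering");
        // A real client would now send it with HttpClient.send(...)
        System.out.println(req.method() + " " + req.uri());
    }
}
```

The point is that nothing in the client hints at a process engine: it just talks to a service in its own domain vocabulary.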

Additionally, domain-specific data can also be exposed (through events or in a data index) so it can easily be consumed and queried by other services.


When using Kogito, you're still building a cloud-native application as a set of independent domain-specific services, collaborating to achieve some business value.  The processes and/or rules you use to describe the behavior are executed as part of the services you create, highly distributed and scalable (no centralized orchestration service).  But (by using this additional compilation step) the runtime your service uses is completely optimized for what your service needs, nothing more.

If you need long-lived processes, runtime state can be persisted externally in a data grid like Infinispan.  Each service also produces events that can be consumed.  For example, using Apache Kafka these events can be aggregated and indexed in a data index service, offering advanced query capabilities (using GraphQL).
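To illustrate the idea (with a deliberately simplified, made-up event shape; the real Kogito data index has its own schema and is queried over GraphQL), an index service essentially folds a stream of process events into a queryable view:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DataIndex {
    // Hypothetical, simplified event emitted by a process service
    record ProcessEvent(String processId, String instanceId, String state) {}

    // Latest known state per process instance
    private final Map<String, ProcessEvent> latest = new HashMap<>();

    // In practice events would arrive via a broker such as Kafka
    public void consume(ProcessEvent e) {
        latest.put(e.instanceId(), e);
    }

    // Other services can query the aggregated view instead of the runtime
    public List<ProcessEvent> query(String processId, String state) {
        return latest.values().stream()
                .filter(e -> e.processId().equals(processId)
                          && e.state().equals(state))
                .toList();
    }

    public static void main(String[] args) {
        DataIndex idx = new DataIndex();
        idx.consume(new ProcessEvent("orders", "i-1", "ACTIVE"));
        idx.consume(new ProcessEvent("orders", "i-2", "COMPLETED"));
        idx.consume(new ProcessEvent("orders", "i-1", "COMPLETED"));
        System.out.println(idx.query("orders", "COMPLETED").size()); // → 2
    }
}
```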

What's coming next?

At this point, Kogito 0.3.0 is the latest release (from August 23rd), but we have much more coming on our roadmap before our 1.0.0 release which is targeted towards the end of the year. 

Get started

And now I believe you are ready to give it a try yourself, so please do and let us know! You can start by building one of the out-of-the-box examples, or by creating your first project from scratch.  Follow our getting started documentation here!  You will see you can build your own domain-specific service in minutes.

Or if you want to watch a small presentation (and demo!) from Maciej, check out his latest DevNation Live talk here.

Wednesday, April 24, 2019

bpmNEXT 2019 impressions, day 3

This is part of a 5-part blog series on bpmNEXT 2019:
Day 1
Day 1 (part 2)
Day 2
Day 2 (part 2)
Day 3

The last (half) day, where I had to present as well (third presentation of the day).

A Well-Mixed Cocktail: Blending Decision and RPA Technologies in 1st Gen Design Patterns
Lloyd Dugan

Lloyd introduced an RPA-enabled case management platform, used in a use case to determine eligibility for the Affordable Care Act. Using Sapiens for decisions and Appian for BPM, approximately 4,000 people use this as a work management application (where work is assigned to people so they can work through it).  To achieve higher throughput, however, they combined this with RPA robots that emulate the behavior of the users.  He showed (unfortunately in a prerecorded video, not a live demo) how they implemented the robots to perform some of the work (up to 50% of the total work done by the users!). The robots learned how to soft-fail if there were issues (in which case the work would go back into the queue), needed to accommodate for latency, etc.

Emergent Synthetic Process
Keith Swenson - Fujitsu

Keith presented a way to customize processes to different contexts (for example, slightly different regulations or approaches in different countries) by generating a customized process for your specific context when you start the process.  Rather than encoding processes in a procedural manner (after A, do B), he uses "service descriptions" to define the tasks and their preconditions. You can then generate a process by specifying your goal and context and working backwards to create a customized process.  This allows you to add new tasks to these processes easily (as the logic is much more declarative and therefore additive).
The demo showed a travel application with approval by different people. Service descriptions can have required tasks, required data, etc.  The process is generated by working backwards from the goal, adding required steps one by one.  Different countries can add their own steps, leading to small customizations in the generated process.
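The backward-chaining idea can be sketched in a few lines of Java (the service-description model below is invented for illustration and is not Fujitsu's actual format): each service declares what it requires and produces, and a plan is assembled by recursing from the goal.

```java
import java.util.*;

public class SyntheticProcess {
    // A declarative "service description": inputs it needs, output it produces
    record Service(String name, Set<String> requires, String produces) {}

    // Work backwards: find the service producing the goal, recursively
    // satisfy each of its requirements, and emit steps in execution order.
    static List<String> plan(String goal, Map<String, Service> byOutput,
                             List<String> steps) {
        Service s = byOutput.get(goal);
        if (s == null) return steps;          // goal is an initial input
        for (String req : s.requires())
            plan(req, byOutput, steps);
        if (!steps.contains(s.name())) steps.add(s.name());
        return steps;
    }

    public static void main(String[] args) {
        Map<String, Service> byOutput = new HashMap<>();
        for (Service s : List.of(
                new Service("fill travel request", Set.of(), "request"),
                new Service("manager approval", Set.of("request"), "approval"),
                new Service("book travel", Set.of("approval"), "booking")))
            byOutput.put(s.produces(), s);

        // Asking for the goal "booking" yields the customized, ordered process
        System.out.println(plan("booking", byOutput, new ArrayList<>()));
        // → [fill travel request, manager approval, book travel]
    }
}
```

A country-specific variant would simply register extra service descriptions (say, an additional compliance check producing "approval"-prerequisites), and the generated process would pick them up without touching the others.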

Automating Human-Centric Processes with Machine Learning
Kris Verlaenen - Red Hat

I was up next!  I presented on how to combine Process Automation and Machine Learning (ML) to create a platform that combines the benefits of encoding business logic using business processes, rules, etc., but at the same time can become more intelligent over time by observing and learning from the execution data.  The focus was on introducing "non-intrusive" ways of combining processes with ML, to assist users with performing their tasks rather than trying to replace them.
The demo was using the it-orders application (one of our out-of-the-box case management demos that employees can use to order laptops) that focused on 3 main use cases:
  • Augmenting task data:  While human actors are performing tasks in your processes or cases, we can observe the data and try to predict task outcomes based on task inputs.  Once the ML algorithm (using Random Forest algorithm, with the SMILE library as the implementation) has been trained a little, it can start augmenting the data with possible predictions, but also with a confidence it has on that prediction, the relative importance of the input parameters, etc.  In this case, the manager approving the order would be able to see this augmented data in his task form and use it to make the right decision.
  • Recommending tasks:  Case management allows users to add additional dynamic tasks to running cases (even though they weren't modeled in the case upfront) in specific situations.  Similarly, these can be monitored and ML could be used to detect patterns.  These could be turned into recommendations, where a user is presented with a recommendation to do (or assign) a task based on what the ML algorithm has learned.  This can significantly help users not forget things, or assist them by preparing most of the work (they simply have to accept the recommendation).
  • Optimizing processes based on ML: One of the advantages of the Random Forest algorithm is that you can inspect the decision trees that are being trained to see what they have learned so far.  Since ML also has disadvantages (it can be biased, or it simply learns from what is being done, which is not necessarily correct behavior), analyzing what was learned so far and integrating this back into the process (and/or rules, etc.) has significant advantages as well.  We extended the existing case with additional logic (for example, an additional decision service to determine whether some manager approvals could be automated, or additional ad-hoc tasks that would be triggered under certain circumstances), so that some of the patterns detected by ML would be encoded and enforced by the case logic itself.
These non-intrusive ways of combining processes with ML are very complementary (as they allow us to take advantage of both approaches, mitigating some of the disadvantages of ML) and let users start benefiting from ML and build up confidence in small, incremental steps.
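To make the "prediction plus confidence" idea from the first use case concrete (this is an illustrative toy, not the actual SMILE API): in a random-forest style ensemble, the majority vote over the individual trees gives the predicted task outcome, and the vote share can serve as the confidence shown on the task form.

```java
import java.util.*;

public class TaskPrediction {
    record Prediction(String outcome, double confidence) {}

    // Majority vote over the trees' individual predictions; the share of
    // agreeing trees doubles as a simple confidence score.
    static Prediction predict(List<String> treeVotes) {
        Map<String, Long> counts = new HashMap<>();
        for (String v : treeVotes)
            counts.merge(v, 1L, Long::sum);
        var best = Collections.max(counts.entrySet(),
                Map.Entry.comparingByValue());
        return new Prediction(best.getKey(),
                (double) best.getValue() / treeVotes.size());
    }

    public static void main(String[] args) {
        // Votes from 10 hypothetical decision trees on "approve this order?"
        var votes = List.of("approve", "approve", "approve", "reject",
                "approve", "approve", "reject", "approve", "approve", "approve");
        Prediction p = predict(votes);
        System.out.println(p.outcome() + " (confidence " + p.confidence() + ")");
        // → approve (confidence 0.8)
    }
}
```

A task form could then show the prediction only above some confidence threshold, leaving the decision to the human otherwise, which is exactly the non-intrusive, assistive stance described above.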

ML, Conversational UX, and Intelligence in BPM
Andre Hofeditz, Seshadri Sreeniva - SAP SE

SAP presented "live processes" that are created by combining predefined building blocks, running on their platform with support for conversational user experience, decision management, task inbox, etc.

SAP API Business Hub has been extended to also include live processes. Using an employee onboarding scenario, they showed how a running instance can be "configured" (only in specific situations, which you can define during authoring), after which you can change the template and generate a new variant.  The process visibility workbench allows you to generate a customizable UI for monitoring the progress of your processes.
Next, they show how you can extend the platform by using recipes, which can be imported in SAP web IDE and deployed into the platform, adding additional capabilities that will be available in your live processes from that point forward.
Finally, they showed an intelligent assistant that is a sort of chatbot that can respond to voice.  It can give an aggregated view of your tasks, complete the tasks through the conversational UI, etc.  They showed how the chatbot can be programmed by defining tasks with triggers, requirements and actions, which can then be deployed as a microservice on the SAP cloud.

Keith Swenson 

Keith explained the efforts that are going into the DMN TCK, a set of tests to verify the compliance of DMN engines.  Running these tests executes a large number of models and test cases (currently over a thousand, and still growing) and checks the results.  He explained some of the challenges and opportunities in this context (e.g. error handling).
While many vendors claim DMN compatibility, Red Hat is one of the few vendors that actually has the results to prove it!

That concludes bpmNEXT 2019!  As previous years, I very much enjoyed the presentations, but probably even more the discussions during the breakouts and evenings.