digital-restaurant

DDD. Event sourcing. CQRS. REST. Modular. Microservices. Kotlin. Spring. Axonframework. Apache Kafka. RabbitMQ


‘d-restaurant’ is an example of an application built using Event Sourcing and CQRS. The application is written in Kotlin and uses Spring Boot. It is built with Axon Framework, an application framework based on event sourcing and CQRS.

Customers use the website application to place food orders at local restaurants. The application coordinates a network of couriers who deliver the orders.

Table of Contents

Domain layer

This layer contains information about the domain. This is the heart of the business software. The state of the business objects is held here. Persistence of the business objects, and possibly their state, is delegated to the infrastructure layer.

Business capabilities of ‘Digital Restaurant’ include:

As you try to model a larger domain, it gets progressively harder to build a single unified model for the entire enterprise. In such a model, there would be, for example, a single definition of each business entity such as customer, order etc. The problem with this kind of modeling is that:

Domain-driven design (DDD) avoids these problems by defining a separate domain model for each subdomain/component.

Subdomains are identified using the same approach as identifying business capabilities: analyze the business and identify the different areas of expertise. The end result is very likely to be subdomains that are similar to the business capabilities. Each subdomain model belongs to exactly one bounded context.

Core subdomains

Some subdomains are more important to the business than others. These are the subdomains that you want your most experienced people working on. Those are the core subdomains:

The Order aggregate classes (RestaurantOrder, CustomerOrder, CourierOrder) in each subdomain model represent different terms for the same ‘Order’ business concept.
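As an illustration only (the class names come from the text above; the fields are assumptions), the same ‘Order’ concept can carry different attributes in each bounded context, e.g. in plain Java:

```java
import java.util.List;

// Each bounded context defines its own model of an Order, holding only the
// attributes that context cares about. Fields are invented for illustration.
public class OrderModels {
    // Restaurant context: what must be cooked.
    record RestaurantOrder(String id, List<String> lineItems) {}

    // Courier context: who carries it and where it must go.
    record CourierOrder(String id, String courierId, String deliveryAddress) {}

    // Customer context: what it costs and whether the customer may pay.
    record CustomerOrder(String id, String customerId, long totalCents) {}
}
```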

Event sourcing

We use event sourcing to persist our event-sourced aggregates as a sequence of events. Each event represents a state change of the aggregate. An application rebuilds the current state of an aggregate by replaying the events.
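The rebuild-by-replay idea can be sketched in a few lines of plain Java (no Axon here; the account, Deposited and Withdrawn names are invented for illustration):

```java
import java.util.List;

// Minimal event-sourcing sketch: the current state is never stored directly,
// it is a left-fold over the aggregate's recorded events.
public class EventSourcingSketch {

    // Events are immutable facts; here just (type, amount) pairs.
    record Event(String type, long amount) {}

    // Rebuild the current balance by replaying the event stream in order.
    static long rebuild(List<Event> history) {
        long balance = 0;
        for (Event e : history) {
            switch (e.type()) {
                case "Deposited" -> balance += e.amount();
                case "Withdrawn" -> balance -= e.amount();
            }
        }
        return balance;
    }
}
```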

Event sourcing has several important benefits:

Event sourcing also has drawbacks:

Consider using event sourcing within ‘core subdomain’ only!

Snapshotting

With the event sourcing pattern, the application rebuilds the current state of an aggregate by replaying its events. This can hurt performance for a long-lived aggregate whose state has to be rebuilt from a large number of events.

Each aggregate defines a snapshot trigger:
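The trigger idea can be sketched in plain Java (Axon ships an event-count-based snapshot trigger for this; the threshold value and the running-total aggregate below are assumptions for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Count-based snapshotting sketch: after every N events, persist the current
// state as a snapshot, so loading replays only the snapshot plus the events
// recorded after it instead of the full history.
public class SnapshotSketch {
    static final int THRESHOLD = 3;            // snapshot every 3 events (assumption)

    long state = 0;                            // aggregate state: a running total
    long snapshotState = 0;                    // last persisted snapshot
    final List<Long> tail = new ArrayList<>(); // events recorded after the snapshot

    void apply(long delta) {
        state += delta;
        tail.add(delta);
        if (tail.size() >= THRESHOLD) {        // the snapshot trigger fires
            snapshotState = state;             // persist snapshot
            tail.clear();                      // only newer events need replaying
        }
    }

    // Loading replays snapshot + tail, never the full history.
    long load() {
        long s = snapshotState;
        for (long d : tail) s += d;
        return s;
    }
}
```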

Generic subdomains

Other subdomains facilitate the business but are not core to it. In general, these pieces can be purchased from a vendor or outsourced. Those are the generic subdomains:

Event sourcing is probably not needed within your ‘generic subdomain’.

As Eric Evans puts it into numbers, the ‘core domain’ should deliver about 20% of the total value of the entire system, be about 5% of the code base, and take about 80% of the effort.

Organisation vs encapsulation

When you make all types in your application public, the packages are simply an organisation mechanism (a grouping, like folders) rather than being used for encapsulation. Since public types can be used from anywhere in a codebase, you can effectively ignore the packages.

The way Java types are placed into packages (components) can actually make a huge difference to how accessible (or inaccessible) those types are when Java’s access modifiers are applied appropriately. Bundling the types into a smaller number of packages allows for something a little more radical: since there are fewer inter-package dependencies, you can start to restrict the access modifiers. The Kotlin language doesn’t have Java’s package-private (default) visibility; instead it has the ‘internal’ modifier, which restricts accessibility of a class to the whole module (compilation unit, jar file…). This makes a difference: you have more freedom to structure your source code, and you can provide a good public API for the component.

For example, our Customer component classes are placed in the com.drestaurant.customer.domain package, with all classes marked as ‘internal’. Public classes are placed in com.drestaurant.customer.domain.api, forming the API of this component. This API consists of commands and events.

Application/s layer

This is a thin layer which coordinates the application activity. It does not contain business logic, and it does not hold the state of the business objects.

We have created several ‘web’ applications (standalone Spring Boot applications) to demonstrate different architectural styles, API designs and deployment strategies by utilizing components from the domain layer in different ways:

Monolithic

Microservices

Monolith 1 (HTTP and WebSockets API by segregating Command and Query)

Source code: https://github.com/idugalic/digital-restaurant/tree/master/drestaurant-apps/drestaurant-monolith

A recurring question with CQRS and EventSourcing is how to put a synchronous HTTP front-end on top of an asynchronous CQRS back-end.

In general there are two approaches:

This application uses the first approach (‘segregating Command and Query’) by exposing the capabilities of our ‘domain’ via HTTP/REST API components that are responsible for

There is no one-to-one relation between a Command resource and a Query Model resource. This makes it easier to implement multiple representations of the same underlying domain entity as separate resources.

The event handler is a central component. It consumes events and creates ‘query models’ (materialized views) of aggregates. This makes querying event-sourced aggregates easy.

The event handler publishes a WebSocket event on every update of a query model. This can be useful on the front-end to re-fetch the data via the HTTP/REST endpoints.

Each event handler allows ‘replay’ of events. Please note that the ‘reset handler’ will be called before the replay/reset starts, to clear out the query model tables. AdminController exposes endpoints for resetting tracking event processors/handlers.
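A minimal plain-Java stand-in for this projection plus reset/replay cycle (all names invented; in the real application Axon's event handlers, reset handlers and tracking processors do this work):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Query-side sketch: an event handler folds domain events into a materialized
// view; a reset handler clears the view so the processor can replay the whole
// event stream from the beginning of time.
public class ProjectionSketch {
    record CustomerCreated(String id, String name) {}

    final List<Object> eventStore = new ArrayList<>();        // stands in for the event log
    final Map<String, String> customerView = new HashMap<>(); // materialized view

    // Live event handling: append to the log and update the view.
    void on(Object event) {
        eventStore.add(event);
        project(event);
    }

    void project(Object event) {
        if (event instanceof CustomerCreated c) customerView.put(c.id(), c.name());
    }

    // Reset handler runs first (drop the view), then replay all past events.
    void replay() {
        customerView.clear();
        for (Object e : eventStore) project(e);
    }
}
```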

‘Command’ HTTP API

Create new Restaurant
curl -X POST --header 'Content-Type: application/json' --header 'Accept: */*' -d '{
  "menuItems": [
    {
      "id": "id1",
      "name": "name1",
      "price": 100
    }
  ],
  "name": "Fancy"
}' 'http://localhost:8080/api/command/restaurant/createcommand'
Create/Register new Customer
curl -X POST --header 'Content-Type: application/json' --header 'Accept: */*' -d '{
  "firstName": "Ivan",
  "lastName": "Dugalic",
  "orderLimit": 1000
}' 'http://localhost:8080/api/command/customer/createcommand'
Create/Hire new Courier
curl -X POST --header 'Content-Type: application/json' --header 'Accept: */*' -d '{
  "firstName": "John",
  "lastName": "Doe",
  "maxNumberOfActiveOrders": 20
}' 'http://localhost:8080/api/command/courier/createcommand'
Create/Place the Order
curl -X POST --header 'Content-Type: application/json' --header 'Accept: */*' -d '{
  "customerId": "CUSTOMER_ID",
  "orderItems": [
    {
      "id": "id1",
      "name": "name1",
      "price": 100,
      "quantity": 0
    }
  ],
  "restaurantId": "RESTAURANT_ID"
}' 'http://localhost:8080/api/command/order/createcommand'

Note: Replace CUSTOMER_ID and RESTAURANT_ID with concrete values.

Restaurant marks the Order as prepared
curl -X POST --header 'Content-Type: application/json' --header 'Accept: */*' 'http://localhost:8080/api/command/restaurant/order/RESTAURANT_ORDER_ID/markpreparedcommand'

Note: Replace RESTAURANT_ORDER_ID with concrete value.

Courier takes/claims the Order that is ready for delivery (prepared)
curl -X POST --header 'Content-Type: application/json' --header 'Accept: */*' 'http://localhost:8080/api/command/courier/COURIER_ID/order/COURIER_ORDER_ID/assigncommand'

Note: Replace COURIER_ID and COURIER_ORDER_ID with concrete values.

Courier marks the Order as delivered
curl -X POST --header 'Content-Type: application/json' --header 'Accept: */*' 'http://localhost:8080/api/command/courier/order/COURIER_ORDER_ID/markdeliveredcommand'

‘Query’ HTTP API

The application uses an event handler to subscribe to the domain events of interest. Events are materialized in a SQL database schema.

HTTP/REST API for browsing the materialized data:

curl http://localhost:8080/api/query

Administration

Read all event processors
curl http://localhost:8080/api/administration/eventprocessors
Event processors reset

In cases when you want to rebuild projections (query models), replaying past events comes in handy. The idea is to start from the beginning of time and invoke all event handlers.

curl -i -X POST 'http://localhost:8080/api/administration/eventprocessors/{EVENT PROCESSOR NAME}/reply'
Event processor status

Returns a map where the key is the segment identifier and the value is the event processing status. Based on this status we can determine whether the processor is caught up and/or replaying. This can be used for blue-green deployment: you don’t want to send queries to the ‘query model’ if the processor is not caught up or is still replaying.

curl http://localhost:8080/api/administration/eventprocessors/{EVENT PROCESSOR NAME}/status
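The ‘don’t query while replaying’ gate might look like this in plain Java (the per-segment status shape follows the description above; field names are assumptions):

```java
import java.util.Map;

// Blue-green gate sketch: only route queries to a query model whose tracking
// processor has caught up on every segment and is not replaying.
public class ProcessorGate {
    record SegmentStatus(boolean caughtUp, boolean replaying) {}

    // Ready only if every segment is caught up and none is replaying.
    static boolean readyForQueries(Map<Integer, SegmentStatus> status) {
        return status.values().stream()
                .allMatch(s -> s.caughtUp() && !s.replaying());
    }
}
```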

WebSocket (STOMP) API

WebSocket API (ws://localhost:8080/drestaurant/websocket) topics:

Frontend part of the solution is available here http://idugalic.github.io/digital-restaurant-angular

Monolith 2 (REST API by not segregating Command and Query)

Source code: https://github.com/idugalic/digital-restaurant/tree/master/drestaurant-apps/drestaurant-monolith-rest

This application uses the second approach (‘NOT segregating Command and Query’) by exposing the capabilities of our ‘domain’ via REST API components that are responsible for

We create a one-to-one relation between a Command Model resource and a Query Model (materialized view) resource. We are using the Spring Data REST project to implement the REST API, which positions us on the third level of the Richardson Maturity Model.

The event handler is a central component. It consumes events and creates Query Models (materialized views) of aggregates. Additionally, it emits ‘any change on the Query Model’ to Axon subscription queries, letting us subscribe to them within our CommandController and keeping our architecture clean.
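The essence of the pattern, blocking the synchronous response until the projection confirms the change, can be sketched with a plain CompletableFuture (Axon's subscription queries play this role in the real code; all names here are invented):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Synchronous facade over an asynchronous back-end: the controller dispatches
// a command, then completes the HTTP response only once the query model has
// been updated for that entity.
public class SyncOverAsyncSketch {
    final CompletableFuture<String> projectionUpdated = new CompletableFuture<>();

    // The query side calls this once the materialized view reflects the change.
    void onQueryModelUpdated(String entityId) {
        projectionUpdated.complete(entityId);
    }

    // Command side: dispatch, then wait (with a timeout) for the projection.
    String handleCreate(String entityId) {
        // ... a real controller would send the command to the command gateway here ...
        onQueryModelUpdated(entityId); // in this sketch the projection is immediate
        return projectionUpdated.orTimeout(1, TimeUnit.SECONDS).join();
    }
}
```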

Each event handler allows ‘replay’ of events. Please note that the ‘reset handler’ will be called before the replay/reset starts, to clear out the query model tables. AdminController exposes endpoints for resetting tracking event processors/handlers.

Although fully asynchronous designs may be preferable for a number of reasons, it is a common scenario that back-end teams are forced to provide a synchronous REST API on top of asynchronous CQRS+ES back-ends.

Restaurant management

Read all restaurants
curl http://localhost:8080/restaurants
Create new restaurant
curl -i -X POST --header 'Content-Type: application/json' --header 'Accept: */*' -d '{
"menuItems": [
 {
   "id": "id1",
   "name": "name1",
   "price": 100
 }
],
"name": "Fancy"
}' 'http://localhost:8080/restaurants'
Mark restaurant order as prepared
curl -i -X PUT --header 'Content-Type: application/json' --header 'Accept: */*' 'http://localhost:8080/restaurants/RESTAURANT_ID/orders/RESTAURANT_ORDER_ID/markprepared'

Customer management

Read all customers
curl http://localhost:8080/customers
Create/Register new Customer
curl -i -X POST --header 'Content-Type: application/json' --header 'Accept: */*' -d '{
"firstName": "Ivan",
"lastName": "Dugalic",
"orderLimit": 1000
}' 'http://localhost:8080/customers'

Courier management

Read all couriers
curl http://localhost:8080/couriers
Create/Hire new Courier
curl -i -X POST --header 'Content-Type: application/json' --header 'Accept: */*' -d '{
"firstName": "John",
"lastName": "Doe",
"maxNumberOfActiveOrders": 20
}' 'http://localhost:8080/couriers'
Courier takes/claims the Order that is ready for delivery (prepared)
curl -i -X PUT --header 'Content-Type: application/json' --header 'Accept: */*' 'http://localhost:8080/couriers/COURIER_ID/orders/COURIER_ORDER_ID/assign'
Courier marks the order as delivered
curl -i -X PUT --header 'Content-Type: application/json' --header 'Accept: */*' 'http://localhost:8080/couriers/COURIER_ID/orders/COURIER_ORDER_ID/markdelivered'

Order management

Read all orders
 curl http://localhost:8080/orders
Create/Place the Order
curl -i -X POST --header 'Content-Type: application/json' --header 'Accept: */*' -d '{
"customerId": "CUSTOMER_ID",
"orderItems": [
 {
   "id": "id1",
   "name": "name1",
   "price": 100,
   "quantity": 0
 }
],
"restaurantId": "RESTAURANT_ID"
}' 'http://localhost:8080/orders'

Note: Replace CUSTOMER_ID and RESTAURANT_ID with concrete values.

Administration

Read all event processors
curl http://localhost:8080/administration/eventprocessors
Event processors reset

In cases when you want to rebuild projections (query models), replaying past events comes in handy. The idea is to start from the beginning of time and invoke all event handlers.

curl -i -X POST 'http://localhost:8080/administration/eventprocessors/{EVENT PROCESSOR NAME}/reply'
Event processor status

Returns a map where the key is the segment identifier and the value is the event processing status. Based on this status we can determine whether the processor is caught up and/or replaying. This can be used for blue-green deployment: you don’t want to send queries to the ‘query model’ if the processor is not caught up or is still replaying.

curl http://localhost:8080/administration/eventprocessors/{EVENT PROCESSOR NAME}/status

Monolith 3 (STOMP over WebSockets API. We are async all the way)

Source code: https://github.com/idugalic/digital-restaurant/tree/master/drestaurant-apps/drestaurant-websockets

The WebSocket protocol (RFC 6455) defines an important new capability for web applications: full-duplex, two-way communication between client and server. It is an exciting new capability on the heels of a long history of techniques to make the web more interactive including Java Applets, XMLHttpRequest, Adobe Flash, ActiveXObject, various Comet techniques, server-sent events, and others.

This application utilizes the STOMP over WebSockets protocol to expose the capabilities of our ‘domain’ via components:

STOMP over WebSockets API

WebSocket SockJS endpoint: ws://localhost:8080/drestaurant/websocket

Topics:
Message endpoints:

Microservices 1 (HTTP, Websockets, Apache Kafka)

We designed and structured our domain components in a modular way, which enables us to choose a different deployment strategy and decompose Monolith 1 into microservices.

Each microservice:

Apache Kafka

Apache Kafka is a distributed streaming platform.

Order of events (kafka topics & partitions)

The order of events matters in our scenario (event sourcing). For example, we might expect that a customer is created before anything else can happen to that customer. When using Kafka, you can preserve the order of those events by putting them all in the same Kafka partition. They must then also be in the same Kafka topic, because different topics mean different partitions.
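The principle can be sketched in plain Java: a deterministic function of the record key picks the partition, so all events keyed by the same aggregate id land on one partition and keep their relative order (Kafka's default partitioner actually hashes the key with murmur2; String.hashCode stands in for it here):

```java
// Partitioning sketch: the same key always maps to the same partition, so
// events of one aggregate are totally ordered within that partition.
public class PartitionSketch {
    static int partitionFor(String aggregateId, int numPartitions) {
        // floorMod keeps the result non-negative even for negative hash codes
        return Math.floorMod(aggregateId.hashCode(), numPartitions);
    }
}
```

Events keyed by different aggregate ids may land on different partitions, which is fine: ordering is only required per aggregate.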

We configured our Kafka instance to create only one topic (axon-events) with one partition initially.

Queue vs publish-subscribe (kafka groups)

If all consumers belong to the same group, the Kafka model functions as a traditional message queue: the records and their processing are load-balanced, each message is consumed by only one consumer of the group, and each partition is connected to at most one consumer from a group.

When multiple consumer groups exist, the data consumption model aligns with the traditional publish-subscribe model: the messages are broadcast to all consumer groups.

We configured our (micro)services to use the publish-subscribe model by setting a unique consumer group id for each (micro)service.
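A toy simulation of the two modes (group and consumer names are made up): each group receives every record exactly once, and inside a group exactly one member handles it:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Consumer-group sketch: broadcasting across groups (publish-subscribe) while
// load-balancing inside each group (queue semantics).
public class ConsumerGroupSketch {
    // Deliver one record: every group gets it once, handled by one member.
    static Map<String, String> deliver(String record, Map<String, List<String>> groups) {
        Map<String, String> handledBy = new HashMap<>();
        groups.forEach((group, members) ->
                // pick one member per group (Kafka balances by partition assignment)
                handledBy.put(group, members.get(Math.floorMod(record.hashCode(), members.size()))));
        return handledBy;
    }
}
```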

‘Command’ HTTP API

Create new Restaurant
curl -X POST --header 'Content-Type: application/json' --header 'Accept: */*' -d '{
  "menuItems": [
    {
      "id": "id1",
      "name": "name1",
      "price": 100
    }
  ],
  "name": "Fancy"
}' 'http://localhost:8084/api/command/restaurant/createcommand'
Create/Register new Customer
curl -X POST --header 'Content-Type: application/json' --header 'Accept: */*' -d '{
  "firstName": "Ivan",
  "lastName": "Dugalic",
  "orderLimit": 1000
}' 'http://localhost:8082/api/command/customer/createcommand'
Create/Hire new Courier
curl -X POST --header 'Content-Type: application/json' --header 'Accept: */*' -d '{
  "firstName": "John",
  "lastName": "Doe",
  "maxNumberOfActiveOrders": 20
}' 'http://localhost:8081/api/command/courier/createcommand'
Create/Place the Order
curl -X POST --header 'Content-Type: application/json' --header 'Accept: */*' -d '{
  "customerId": "CUSTOMER_ID",
  "orderItems": [
    {
      "id": "id1",
      "name": "name1",
      "price": 100,
      "quantity": 0
    }
  ],
  "restaurantId": "RESTAURANT_ID"
}' 'http://localhost:8083/api/command/order/createcommand'

Note: Replace CUSTOMER_ID and RESTAURANT_ID with concrete values.

Restaurant marks the Order as prepared
curl -X POST --header 'Content-Type: application/json' --header 'Accept: */*' 'http://localhost:8084/api/command/restaurant/order/RESTAURANT_ORDER_ID/markpreparedcommand'

Note: Replace RESTAURANT_ORDER_ID with concrete value.

Courier takes/claims the Order that is ready for delivery (prepared)
curl -X POST --header 'Content-Type: application/json' --header 'Accept: */*' 'http://localhost:8081/api/command/courier/COURIER_ID/order/COURIER_ORDER_ID/assigncommand'

Note: Replace COURIER_ID and COURIER_ORDER_ID with concrete values.

Courier marks the Order as delivered
curl -X POST --header 'Content-Type: application/json' --header 'Accept: */*' 'http://localhost:8081/api/command/courier/order/COURIER_ORDER_ID/markdeliveredcommand'

‘Query’ HTTP API

The application uses event handlers to subscribe to the domain events of interest. Events are materialized in a SQL database schema and distributed over Apache Kafka.

HTTP/REST API for browsing the materialized data:

curl http://localhost:8085/api/query

Microservices 2 (REST, RabbitMQ)

We designed and structured our domain components in a modular way, which enables us to choose a different deployment strategy and decompose Monolith 2 into microservices.

Each microservice:

RabbitMQ

RabbitMQ is one of the most popular open source message brokers. It supports several messaging protocols, directly and through the use of plugins:

Publish-subscribe

This messaging pattern supports delivering a message to multiple consumers.

We configured our (micro)services to use the publish-subscribe model by setting up a unique queue for each (micro)service. These queues are bound to one common exchange (events.fanout.exchange).

RabbitMQ allows more sophisticated message routing than Apache Kafka can offer. Having one exchange bound to every service queue covered our scenario, but you can do more if you like.

Restaurant management

Read all restaurants
curl http://localhost:8084/restaurants
Create new restaurant
curl -i -X POST --header 'Content-Type: application/json' --header 'Accept: */*' -d '{
"menuItems": [
 {
   "id": "id1",
   "name": "name1",
   "price": 100
 }
],
"name": "Fancy"
}' 'http://localhost:8084/restaurants'
Mark restaurant order as prepared
curl -i -X PUT --header 'Content-Type: application/json' --header 'Accept: */*' 'http://localhost:8084/restaurants/RESTAURANT_ID/orders/RESTAURANT_ORDER_ID/markprepared'

Customer management

Read all customers
curl http://localhost:8082/customers
Create/Register new Customer
curl -i -X POST --header 'Content-Type: application/json' --header 'Accept: */*' -d '{
"firstName": "Ivan",
"lastName": "Dugalic",
"orderLimit": 1000
}' 'http://localhost:8082/customers'

Courier management

Read all couriers
curl http://localhost:8081/couriers
Create/Hire new Courier
curl -i -X POST --header 'Content-Type: application/json' --header 'Accept: */*' -d '{
"firstName": "John",
"lastName": "Doe",
"maxNumberOfActiveOrders": 20
}' 'http://localhost:8081/couriers'
Courier takes/claims the Order that is ready for delivery (prepared)
curl -i -X PUT --header 'Content-Type: application/json' --header 'Accept: */*' 'http://localhost:8081/couriers/COURIER_ID/orders/COURIER_ORDER_ID/assign'
Courier marks the order as delivered
curl -i -X PUT --header 'Content-Type: application/json' --header 'Accept: */*' 'http://localhost:8081/couriers/COURIER_ID/orders/COURIER_ORDER_ID/markdelivered'

Order management

Read all orders
 curl http://localhost:8083/orders
Create/Place the Order
curl -i -X POST --header 'Content-Type: application/json' --header 'Accept: */*' -d '{
"customerId": "CUSTOMER_ID",
"orderItems": [
 {
   "id": "id1",
   "name": "name1",
   "price": 100,
   "quantity": 0
 }
],
"restaurantId": "RESTAURANT_ID"
}' 'http://localhost:8083/orders'

Note: Replace CUSTOMER_ID and RESTAURANT_ID with concrete values.

Microservices 3 (Websockets, AxonDB and AxonHub)

We designed and structured our domain components in a modular way, which enables us to choose a different deployment strategy and decompose Monolith 3 into microservices.

Each microservice:

AxonHub

AxonHub is a messaging platform specifically built to support distributed Axon Framework applications. It is a drop-in replacement for the other CommandBus, EventBus and QueryBus implementations.

The key characteristics for AxonHub are:

AxonDB

AxonDB is a purpose-built database system optimized for the storage of event data of the type that is generated by applications that use the event sourcing architecture pattern. It has been primarily designed with the use case of Axon Framework-based Java applications in mind, although there is nothing in the architecture that restricts its use to these applications only.

AxonHub and AxonDB are commercial software products by AxonIQ B.V. Free ‘developer’ editions are available.

STOMP over WebSockets API

Customer (command side)

WebSocket SockJS endpoint: ws://localhost:8081/customer/websocket

Courier (command side)

WebSocket SockJS endpoint: ws://localhost:8082/courier/websocket

Restaurant (command side)

WebSocket SockJS endpoint: ws://localhost:8084/restaurant/websocket

Order (command side)

WebSocket SockJS endpoint: ws://localhost:8083/order/websocket

Query side

WebSocket SockJS endpoint: ws://localhost:8085/query/websocket

Development

This project is driven using Maven.

Clone

$ git clone https://github.com/idugalic/digital-restaurant

Build

$ cd digital-restaurant
$ mvn clean install

Run monolith 1 (HTTP and WebSockets API by segregating Command and Query)

$ cd digital-restaurant/drestaurant-apps/drestaurant-monolith
$ mvn spring-boot:run

Run monolith 2 (REST API by not segregating Command and Query)

$ cd digital-restaurant/drestaurant-apps/drestaurant-monolith-rest
$ mvn spring-boot:run

Run monolith 3 (STOMP over WebSockets API. We are async all the way)

$ cd digital-restaurant/drestaurant-apps/drestaurant-monolith-websockets
$ mvn spring-boot:run

Run microservices 1 (HTTP, Websockets, Apache Kafka)

NOTE: Docker is required. We use it to start Apache Kafka with ZooKeeper.

$ cd digital-restaurant/drestaurant-apps/drestaurant-microservices
$ docker-compose up -d
$ cd digital-restaurant/drestaurant-apps/drestaurant-microservices/drestaurant-microservices-discovery-server
$ mvn spring-boot:run
$ cd digital-restaurant/drestaurant-apps/drestaurant-microservices/drestaurant-microservices-command-courier
$ mvn spring-boot:run
$ cd digital-restaurant/drestaurant-apps/drestaurant-microservices/drestaurant-microservices-command-customer
$ mvn spring-boot:run
$ cd digital-restaurant/drestaurant-apps/drestaurant-microservices/drestaurant-microservices-command-restaurant
$ mvn spring-boot:run
$ cd digital-restaurant/drestaurant-apps/drestaurant-microservices/drestaurant-microservices-command-order
$ mvn spring-boot:run
$ cd digital-restaurant/drestaurant-apps/drestaurant-microservices/drestaurant-microservices-query
$ mvn spring-boot:run

Run microservices 2 (REST, RabbitMQ)

NOTE: Docker is required. We use it to start RabbitMQ

$ docker run -d --hostname my-rabbit --name some-rabbit -p 15672:15672 -p 5672:5672 rabbitmq:3-management
$ cd digital-restaurant/drestaurant-apps/drestaurant-microservices-rest/drestaurant-microservices-rest-courier
$ mvn spring-boot:run
$ cd digital-restaurant/drestaurant-apps/drestaurant-microservices-rest/drestaurant-microservices-rest-customer
$ mvn spring-boot:run
$ cd digital-restaurant/drestaurant-apps/drestaurant-microservices-rest/drestaurant-microservices-rest-restaurant
$ mvn spring-boot:run
$ cd digital-restaurant/drestaurant-apps/drestaurant-microservices-rest/drestaurant-microservices-rest-order
$ mvn spring-boot:run

Run microservices 3 (Websockets, AxonDB and AxonHub)

AxonHub and AxonDB are required. Developer editions are available for free, and you should have them up and running before you start the services.

$ cd digital-restaurant/drestaurant-apps/drestaurant-microservices-websockets/drestaurant-microservices-websockets-comand-courier
$ mvn spring-boot:run
$ cd digital-restaurant/drestaurant-apps/drestaurant-microservices-websockets/drestaurant-microservices-websockets-comand-customer
$ mvn spring-boot:run
$ cd digital-restaurant/drestaurant-apps/drestaurant-microservices-websockets/drestaurant-microservices-websockets-comand-restaurant
$ mvn spring-boot:run
$ cd digital-restaurant/drestaurant-apps/drestaurant-microservices-websockets/drestaurant-microservices-websockets-comand-order
$ mvn spring-boot:run
$ cd digital-restaurant/drestaurant-apps/drestaurant-microservices-websockets/drestaurant-microservices-websockets-query
$ mvn spring-boot:run

Continuous delivery

We have one deployment pipeline for all applications and libraries within this repository, and all projects in the repository share the same dependencies. Hence there are no version conflicts, because everyone has to use the same (latest SNAPSHOT) versions. And you don’t need to deal with a private NPM (JavaScript) or Maven (Java) registry when you just want to use your own libraries. This setup and project structure is usually referred to as a monorepo.

Technology

Language

Frameworks and Platforms

Continuous Integration and Delivery

Infrastructure and Platform (As A Service)

References and further reading

Inspired by the book “Microservices Patterns” - Chris Richardson


Created by Ivan Dugalic