One of the most discussed software architectures is microservices: the idea of splitting the monolithic applications of the past into small, autonomous services. However, microservices come with caveats: they are expensive, since they are built for unlimited scaling, and they only work if the underlying culture is embraced in its entirety.
This raises multiple questions. For us in the ERP world, is unlimited scaling even a requirement? Our applications are expensive enough as it is, and a complete shift of the whole application to a different architecture is often impractical. Nevertheless, SAP has a couple of offerings built on microservices, and we can learn from these examples.
Microservices are autonomous
At its core, the microservice architecture requires each service to be fully autonomous. It has a well-defined network API to get data in and out; no other access paths are allowed. Which technologies to use to implement the service is up to the responsible team. This sounds simple enough, but the decision has consequences.
A sales order service is trivial in the first cut. The service exposes an API to create, read and modify sales orders. All sales orders are persisted in a database with a sales order header table and an item table. It isn’t rocket science, right?
A sales order should reference existing customers and material master data, but these belong to other services and hence are not part of our database’s tables. And if they are not part of this database, all the features a database provides for free, including transactions, enforcing foreign key constraints, fast joins and the like, need to be re-implemented at the service level.
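To make this concrete, here is a minimal Python sketch of one such re-implemented check: a referential-integrity test that a single database would enforce for free via a foreign key, now done by hand against another service. All class and method names are invented for illustration and are not part of any real SAP API.

```python
class MaterialServiceClient:
    """Stand-in for a network call to a hypothetical material service."""
    def __init__(self, known_materials):
        self._known = set(known_materials)

    def exists(self, material_id):
        return material_id in self._known


class SalesOrderService:
    def __init__(self, material_client):
        self.material_client = material_client
        self.orders = {}          # order_id -> list of (material_id, qty)
        self._next_id = 1

    def create_order(self, items):
        # Re-implemented "foreign key" check: every item must reference an
        # existing material. Inside one database this would be a single
        # constraint; across services it is an extra round trip per
        # referenced material, with no transactional guarantee that the
        # material still exists at commit time.
        for material_id, _qty in items:
            if not self.material_client.exists(material_id):
                raise ValueError(f"Material {material_id} does not exist")
        order_id = self._next_id
        self._next_id += 1
        self.orders[order_id] = list(items)
        return order_id
```

Note what is lost compared to the database constraint: the check and the insert do not happen atomically, so the material could be deleted between the two steps.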
The obvious solution would be to make the database with its tables one service, and the business logic where sales orders are maintained another microservice. Something like a three-layer architecture with a UI layer, an application layer and a database layer. The application layer exposes function modules for the various operations and … Did we just re-invent the R/3 architecture?
When a single database stores all the data, the data model is the monolith. And since the data model is the core, a huge part of the application remains monolithic, with all the negative side effects: every change to a table structure requires analyzing all other modules for impact, and changes must be rolled out in lockstep.
The result is the opposite of autonomous. Calling that a microservice instead of a client-server architecture is purely semantic, I would argue.
Eventing and eventual consistency
The software industry’s answer to that problem is to use events as the information backbone. Every service broadcasts all changes made to the data and the interested parties can listen and persist the required data in their own database.
In the sales order example, the order service’s database would also contain tables with material master data and business partner records for reference, except that these extra tables cannot be modified directly. When the material service broadcasts an event, effectively saying, “There is a new material with the following data”, the order service updates its local reference table.
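A minimal sketch of this replication pattern: the material service owns its data and announces every change on a bus, and the order service maintains its local reference table purely by consuming those events. The bus, topic names and event shapes are illustrative assumptions, not a real messaging API.

```python
class EventBus:
    """Trivial in-process publish/subscribe bus."""
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers.get(topic, []):
            handler(payload)


class MaterialService:
    """Owns the material master data and broadcasts every change."""
    def __init__(self, bus):
        self.bus = bus
        self.materials = {}

    def create_material(self, material_id, description):
        self.materials[material_id] = description
        self.bus.publish("material.created",
                         {"id": material_id, "description": description})


class OrderService:
    """Holds a local copy of material data, updated only via events."""
    def __init__(self, bus):
        self.material_ref = {}   # local reference table, never written directly
        bus.subscribe("material.created", self._on_material_created)

    def _on_material_created(self, event):
        self.material_ref[event["id"]] = event["description"]
```

The key design point is the direction of ownership: the order service never writes `material_ref` itself; only incoming events do.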
As every event arrives with some delay, it can happen that a user creates a new material record and orders that material immediately afterwards. The order service will receive the material master event within a few milliseconds, but if the order is triggered right now, it returns the error, “Material does not exist”.
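This race can be simulated in a few lines: a bus that queues events and only delivers them on `flush()` stands in for the delivery lag. All names are illustrative.

```python
from collections import deque

class DelayedBus:
    """Events are queued on publish and only delivered when flush() runs."""
    def __init__(self):
        self.handlers = []
        self.pending = deque()

    def publish(self, event):
        self.pending.append(event)          # not delivered yet

    def flush(self):
        while self.pending:
            event = self.pending.popleft()
            for handler in self.handlers:
                handler(event)


class OrderService:
    def __init__(self, bus):
        self.material_ref = set()
        bus.handlers.append(lambda e: self.material_ref.add(e["id"]))

    def order(self, material_id):
        if material_id not in self.material_ref:
            return "Material does not exist"
        return "order created"


bus = DelayedBus()
orders = OrderService(bus)

bus.publish({"id": "M-100"})       # material service announces a new material
print(orders.order("M-100"))       # event still in flight: "Material does not exist"
bus.flush()                        # event arrives a few milliseconds later
print(orders.order("M-100"))       # now prints "order created"
```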
Another issue is scaling within the service. In an optimally designed microservices architecture, performance can be increased linearly by starting more instances of the same service. For business logic this is doable, but the database is the problem.
If all sales order service instances use the same database, they scale only as well as this weakest link. In a properly designed microservices architecture, each service instance has its own database, kept up to date by distributing the change events.
Another downside of each service holding its own reference data is the size of the microservices. With every iteration the service requires more reference data, until it is a copy of a large portion of the entire ERP system, just the opposite of SAP’s principle of ‘no aggregates, no data duplication’.
In my opinion, it was the right call for the ERP development team to resist embracing a microservices architecture in S/4HANA.
Other properties of a microservices architecture include:
- well-defined APIs;
- version tolerance of the APIs;
- and fault tolerance.
Using those as arguments in favor of microservices reverses cause and effect. Coming back to the R/3 architecture: the SAP ERP system has well-defined APIs to create sales orders, to read material master data, and so on. Many of them have additional optional parameters, and some exist in multiple versions. The application is certainly modular. And with the client-server architecture, there is at least some fault tolerance at the application and presentation layers.
Does that make the R/3 architecture a microservices architecture? Of course not; it is just a properly built solution. These properties are simply not exclusive to microservices.
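The version-tolerance property mentioned above can be sketched in a few lines: an API call that evolves only by adding optional parameters, so older callers keep working unchanged. The function name and fields are invented for illustration and are not an actual SAP BAPI signature.

```python
def create_sales_order(customer_id, items, requested_date=None,
                       priority="normal"):
    """Version-tolerant API sketch: v1 callers pass only customer_id and
    items; requested_date and priority were added later as optional
    parameters, so v1 calls remain valid without a new endpoint."""
    order = {"customer": customer_id, "items": list(items),
             "priority": priority}
    if requested_date is not None:
        order["requested_date"] = requested_date
    return order

# A v1-style call still works after the API evolved:
legacy = create_sales_order("C-42", [("M-100", 2)])
# A newer caller can use the extended signature:
current = create_sales_order("C-42", [("M-100", 2)],
                             requested_date="2024-07-01", priority="high")
```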
SAP Cloud Platform’s microservices architecture
One area where SAP does follow the microservices architecture is the SAP Cloud Platform (SCP).
Each deployed service is an isolated entity, publishes its APIs in a central registry (the API Hub), and multiple instances can be started to increase throughput.
The Cloud Application Programming Model (CAP) even supports eventing between services, but offers no out-of-the-box support for distributing change data across database instances. SCP does not need that, because the database is consumed as a service, as in a client-server model, and is not an intrinsic part of the service itself.
One way to look at it is that this combines the worst of both worlds: it scales like a client-server architecture but adds the complexities of microservices. Security is maintained outside of the database, services need various bindings and defined routes, every service must serialize and deserialize the payload for data exchange, every service must re-evaluate the user’s security, and so on. In brief, simple things are tedious and complex tasks are extremely difficult.
SAP Data Quality microservices
Another example is the SAP EIM Data Quality services, which validate and cleanse various aspects of address data.
For such an offering, a microservice architecture is a perfect fit. The required address reference data is part of each service instance, and because postal data is static there is no need to sync it across instances. The user sends an HTTP request with an in-doubt address payload to the service and gets back the corrected address, information on what was corrected, and a confidence level. It scales perfectly, and the overhead is small.
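A sketch of why this shape fits microservices so well: the service is a pure function of static reference data plus the input address, so any number of identical instances can answer requests with no coordination. The payload fields, the confidence value and the tiny reference table are all illustrative assumptions, not the actual SAP Data Quality API.

```python
POSTAL_REF = {                          # static reference data, baked into
    "1600 amphitheatre pkwy": {         # each instance at deployment time
        "street": "1600 Amphitheatre Parkway",
        "city": "Mountain View",
        "postal_code": "94043",
    },
}

def cleanse_address(payload):
    """Stateless handler: returns the corrected address, the list of
    changed fields, and a confidence level (values are illustrative)."""
    key = payload["street"].strip().lower()
    match = POSTAL_REF.get(key)
    if match is None:
        return {"status": "unmatched", "confidence": 0.0}
    changes = [field for field in ("street", "city", "postal_code")
               if payload.get(field) != match[field]]
    return {"status": "corrected", "address": dict(match),
            "changed_fields": changes, "confidence": 0.95}
```

Because the handler touches no shared mutable state, scaling is simply a matter of starting more instances behind a load balancer.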
Microservices have two sweet-spot use cases. If unlimited scaling is required because the service might grow into millions of requests per hour, there is not much choice: a microservices architecture must be chosen, regardless of cost. And if the desired service is stateless (it neither makes changes in a database nor relies on global application consistency), it will be autonomous anyhow. In that case, the result will be something microservice-like, and then it makes sense to add the few additional concepts, e.g. documenting the API, versioning the API, etc.
For database applications, especially when one instance is used for a foreseeable number of users only, a microservices architecture is questionable.