The Low Code principle aims to move as much logic as possible from application code into application configuration. Configuration settings can be managed by an analyst rather than a programmer. This approach is believed to make development faster and cheaper and to accelerate the delivery of new features. In practice, Low Code is usually implemented with cumbersome platforms and frameworks that require specially trained personnel, slow down execution, and limit functionality. Nonetheless, the idea of moving application logic from code into configuration (or even data) is promising. In this paper we discuss how to implement it using ontologies.

Most of today's applications are built using the MVC (Model-View-Controller) pattern, in which data processing logic, presentation logic, and user interaction logic are implemented in different layers of code. Applications also commonly use a three-tier architecture: a front end, a back end, and a database. With these approaches, any business logic change naturally requires amending the database structure as well as every layer of the code. Microservice architecture is not a panacea either, as each service's API rigidly reflects the data structure. Implementing new business requirements is therefore slow and costly. In practice, applications are always catching up with business requirements but never implement them all, because new requirements emerge faster. As a result, an average business application never fully satisfies business needs, despite huge IT spending.

MVC and the three-tier architecture are certainly proven and worthy development patterns, but isn't it time to switch to more modern ones? We promote ontology-centric application development. We call such applications "model-driven", as their behavior is largely controlled by an ontological model.

An ontology, or ontological model, is a machine-readable knowledge representation of a certain domain, expressed according to W3C specifications such as RDF, RDFS, and OWL. An ontology can both define the data structure and contain the data itself. It consists of two layers: the TBox (terminology box), containing the class and property definitions that shape the data structure, and the ABox (assertion box), containing assertions about individual objects, i.e. the data. Both layers are technologically represented as a graph. They are homogeneous in the sense that both can be managed with the same tools, such as SPARQL queries: SPARQL uses the same query syntax for the TBox and the ABox, unlike SQL, which has completely different statements for structure management and data management. An important advantage of ontologies is that each object can belong to several types (classes), in contrast to relational DBMSs, where each record lives in exactly one table that determines its type. Each property of each ontology object can have several values, and each value can be annotated with a language or datatype tag.
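To illustrate the homogeneity of the two layers, here is a minimal stdlib-only Python sketch. The prefixes and the `match()` helper are invented for illustration, not a real RDF API: the point is that TBox definitions and ABox assertions are plain triples in one graph, queried by one pattern-matching primitive, much as SPARQL queries both layers with the same syntax.

```python
# Minimal sketch: TBox and ABox live in one homogeneous triple graph.
# The prefixes and the match() helper are illustrative, not a real RDF API.
TYPE, SUBCLASS = "rdf:type", "rdfs:subClassOf"

graph = {
    # TBox: class and property definitions shape the data structure
    ("ex:Person", TYPE, "rdfs:Class"),
    ("ex:Entrepreneur", SUBCLASS, "ex:Person"),
    ("ex:name", TYPE, "rdf:Property"),
    # ABox: assertions about individual objects, i.e. the data
    ("ex:alice", TYPE, "ex:Entrepreneur"),
    ("ex:alice", "ex:name", '"Alice"@en'),  # language-tagged value
}

def match(s=None, p=None, o=None):
    """One pattern-matching primitive serves both layers, like SPARQL."""
    return [t for t in graph
            if s in (None, t[0]) and p in (None, t[1]) and o in (None, t[2])]

classes = match(p=TYPE, o="rdfs:Class")   # queries the TBox
alice   = match(s="ex:alice")             # queries the ABox
```

The same `match()` call retrieves class definitions and individual assertions; there is no separate "DDL" for the structure.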

But that is not all: ontologies can describe data processing rules as well. Constraints can be represented using the SHACL specification, while data augmentation rules and mathematical formulas can be expressed according to SHACL Advanced Features (SHACL Rules). Each rule is an ontology entity, just like any element of the data or its structure. Other kinds of rules, such as data transformation, user interface representation, and business logic execution, can also be represented with ontologies: common models exist for some of these domains, or we can build our own. We then write code that interprets this logic with respect to the data structure; this code must be responsive to model changes. In this way, ontologies combine the data, its structure, and the processing logic in a single graph representation, eliminating the artificial gap between these application components created by some development patterns.
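The idea of a rule as data can be sketched as follows. This toy Python example is only in the spirit of SHACL Rules; the rule encoding is invented, not the actual SHACL vocabulary. An augmentation rule (here, a derived VAT-inclusive price) is stored alongside the data it acts on, and a generic engine applies whatever rules it finds.

```python
# Toy sketch in the spirit of SHACL Rules: a rule is itself data, and a
# generic engine applies whatever rules are present. The rule encoding
# below is invented for illustration, not real SHACL.
data = [
    {"type": "ex:Order", "ex:net": 100.0, "ex:vatRate": 0.2},
]
rules = [
    # "For every ex:Order, derive ex:gross = ex:net * (1 + ex:vatRate)"
    {"targetType": "ex:Order", "derive": "ex:gross",
     "formula": lambda o: o["ex:net"] * (1 + o["ex:vatRate"])},
]

def apply_rules(objects, rules):
    """Generic engine: no order-specific logic lives in the code."""
    for obj in objects:
        for rule in rules:
            if obj["type"] == rule["targetType"]:
                obj[rule["derive"]] = rule["formula"](obj)
    return objects

apply_rules(data, rules)
print(data[0]["ex:gross"])  # the derived VAT-inclusive amount
```

Adding a new derivation means adding a rule to the model; the engine stays untouched.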

Model-driven application architecture

A model-driven application allows new business requirements to be implemented very fast. For example, when a new business object type emerges, we can simply change the model without touching the program code. The application reads the new object type definition and its processing rules from the model and acts accordingly, provided it was properly designed and implemented. Let us demonstrate the differences between the traditional and model-driven development approaches by comparing how certain development steps are executed.
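The core claim, that a new object type requires only a model change, can be sketched in a few lines of Python. The model format below is a hypothetical stand-in for an ontology; the point is that the handler looks everything up in the model, so extending the model extends the application's behavior without code changes.

```python
# Sketch of a model-driven handler: behavior is looked up in the model,
# so adding a new business object type is a model change, not a code change.
# The model format here is a hypothetical stand-in for an ontology.
model = {
    "ex:Invoice": {"required": ["ex:amount", "ex:customer"]},
}

def validate(obj):
    """Generic validation driven entirely by the model."""
    spec = model.get(obj["type"])
    if spec is None:
        raise ValueError(f"unknown type {obj['type']}")
    return [p for p in spec["required"] if p not in obj]

# A new type appears: we extend the model at runtime, the code is untouched.
model["ex:CreditNote"] = {"required": ["ex:amount", "ex:originalInvoice"]}
print(validate({"type": "ex:CreditNote", "ex:amount": 50}))
# missing properties: ['ex:originalInvoice']
```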

Below, for each development step, we first describe the traditional approach and then the model-driven one.
Data structure definition

Traditional approach: Relational DBMSs are the common choice for data storage, and the data structure is projected onto a table structure. Trade-offs are common when designing tables: for example, data about private persons and individual entrepreneurs will likely be placed in different tables, as they have different sets of properties, although in reality every individual entrepreneur is a private person. If someone must be treated both as a person and as an entrepreneur, two records are created in two tables, which causes problems when updating the data.

When using key-value or document-oriented storage, the data structure is not defined at the database level. This can be regarded as an advantage, though it complicates data integrity control and developer collaboration, as developers can choose arbitrary keys to represent object properties.

Model-driven application: The data structure is described by creating a set of classes and properties. A class is a set of objects sharing some common properties. Classes can form several superclass-subclass hierarchies, and we can define various class combinations, for example "all counterparties except private persons and individual entrepreneurs".

In ontologies, properties exist independently of classes. This means that objects of several classes, such as furniture and trucks, can have values of the same "Length" property, which allows comparing objects of various types by that property. Each object can belong to several classes, and its set of applicable properties is inherited from all of those classes and their superclasses.
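The inheritance of applicable properties from several classes at once can be sketched like this. The class and property names are invented for illustration; the sketch walks the superclass chains of all of an object's classes and collects every declared property.

```python
# Sketch: an object's applicable properties are gathered from all of its
# classes and their superclasses. Class/property names are invented.
superclasses = {
    "ex:Truck": ["ex:Vehicle"],
    "ex:Furniture": [],
    "ex:Vehicle": [],
}
domain_properties = {           # properties declared applicable to a class
    "ex:Vehicle": ["ex:length"],
    "ex:Furniture": ["ex:length", "ex:material"],
}

def applicable_properties(classes):
    """Collect properties from every class and its superclass chain."""
    props, todo = set(), list(classes)
    while todo:
        c = todo.pop()
        props.update(domain_properties.get(c, []))
        todo.extend(superclasses.get(c, []))
    return props

# An object may belong to several classes at once:
print(sorted(applicable_properties(["ex:Truck", "ex:Furniture"])))
# ['ex:length', 'ex:material']
```

Because `ex:length` is one shared property rather than two table columns, trucks and furniture remain directly comparable by length.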

When constructing an ontology, the core classes and properties are typically imported from a common top-level ontology. The model is then enriched with appropriate domain ontologies, which enables data interoperability. Finally, an ontologist creates the missing classes and properties.

Data storage

Traditional approach: Relational DBMSs store data in tables, while document-oriented DBMSs use collections. Developers usually have to take care of segmenting and sharding large datasets, use various tools to scale the DBMS, and optimize query performance.

Model-driven application: Graph databases (RDF triple stores) are the native data storage for ontologies. Such databases are not well suited for transactional data storage, but they are good for analytics. If a business application processes data in near real time and performs frequent writes, data virtualization platforms are a good solution: they project objects of certain classes into a relational DBMS, while for the data consumer the whole dataset is still represented as a single graph. A developer does not need to care about the physical storage structure, which can be changed at runtime without affecting the code.
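The virtualization idea, several physical backends behind one logical graph, can be sketched as a facade that routes lookups by subject. The store names and routing rule below are illustrative only; a real platform would route by class and translate queries, but the consumer-facing interface stays a single graph either way.

```python
# Sketch of data virtualization: "hot", write-heavy objects live in a
# relational-style store, the rest in a triple store, but one facade
# presents them as a single graph. Stores and routing are illustrative.
relational_store = {("ex:event1", "ex:status"): "processed"}   # hot data
triple_store     = {("ex:Person", "rdfs:label"): "Person"}     # analytical

hot_subjects = {"ex:event1"}   # routing rule: which subjects are "hot"

def get(subject, predicate):
    """The consumer sees one graph; the routing is invisible to it."""
    store = relational_store if subject in hot_subjects else triple_store
    return store.get((subject, predicate))

print(get("ex:event1", "ex:status"))     # served from the relational store
print(get("ex:Person", "rdfs:label"))    # served from the triple store
```

Changing the routing rule (i.e. the physical layout) does not change any call site.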

Data processing logic definition

Traditional approach: The data processing logic is implemented in program code and is rigidly bound to the data structure.

Model-driven application: The logic is defined using ontology elements that represent rules of various kinds: data transformation and augmentation, user interface rendering, business process execution.
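As one example of such a rule kind, user interface rendering can itself be driven by the model. The vocabulary below is invented for illustration: a generic renderer builds a form from property definitions, so form layout is data rather than code.

```python
# Sketch: user interface rendering driven by the model. A generic renderer
# builds a form from property definitions; the layout is data, not code.
# The model vocabulary here is invented for illustration.
ui_model = {
    "ex:Invoice": [
        {"property": "ex:amount",   "label": "Amount",   "widget": "number"},
        {"property": "ex:customer", "label": "Customer", "widget": "lookup"},
    ],
}

def render_form(type_iri):
    """Emit a textual form description from the model alone."""
    lines = [f"Form for {type_iri}:"]
    for field in ui_model.get(type_iri, []):
        lines.append(f"  [{field['widget']}] {field['label']}")
    return "\n".join(lines)

print(render_form("ex:Invoice"))
```

Adding a field to the form is an edit to `ui_model`; the renderer never changes.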

Changes management

Traditional approach: New business requirements arrive at every stage of the software lifecycle. They may be caused by regulatory changes, new business hypotheses, or business process transformation. In the classical development approach, each requirement is formalized as a task and placed in the implementation queue. Each task is executed by developers in various roles: they change the database structure, the user interface and interaction (front end), and the processing logic (back end). Even simple requirements therefore take significant time, from weeks to months, to implement, and the business has to wait a long time for new functions.

Model-driven application: For most functional requirements, it is enough to change only the model (ontology). Such a change usually does not affect the data itself; it updates the data structure and the processing logic. A correctly designed model-driven application will "understand" these changes at runtime and adapt its behavior as needed. Even if the application has to be restarted, amending its code can usually be avoided.

Model changes can be propagated between development stages and the production environment. This can significantly speed up the delivery of new features and cut development costs.

The initial development of a model-driven application can be somewhat more costly, as it requires extra effort from analysts and developers, but its operational expenses will be significantly lower. It allows faster implementation of business requirements, which is crucial for an enterprise's market efficiency.

Our practice provides an example. A large IT system was developed to process millions of events in near real time. Its data structure is not rigid, due to rapid changes in the business environment. Implementing such a system on a relational DBMS would have been irrational: there are thousands of business object and event types and hundreds of processing algorithms, and all of this can change at any moment. Supporting such a system would be a huge problem for the development team.

We decided to use an ontology to describe the data structure and processing logic, and to store the data in virtualized storages. The program code was written so that it can process new object types and algorithms as they emerge in the ontology. This feature was used many times during the system's operation. The resulting savings in developer time significantly exceeded the "excessive" initial development costs, and new features were deployed to production in days, not weeks.

If your enterprise needs complex and evolving data processing, we can help you design, develop, and support model-driven applications. The ontology usage scenario presented in this paper has all the benefits of the Low Code approach while being free of its drawbacks. Model-driven applications can be developed in any programming language and do not require exotic programming skills.