The essence of the Low Code principle is to move as much logic as possible out of the application's program code and into settings that can be changed by an analyst rather than a programmer. This approach is believed to make development less labor-intensive and to speed up the delivery of new software features. In practice, implementing it usually means building cumbersome platforms and frameworks with configurators, which require lengthy staff training, slow down program execution and limit functionality. Nevertheless, the underlying idea of moving logic out of program code and into application settings (data) is productive. In this article we will discuss how it can be implemented using ontology modeling tools.

Most applications are still built according to the MVC (Model - View - Controller) pattern, in which the logic of working with data, the presentation logic and the logic of user interaction live at different levels of the program code. Most often applications follow a three-tier scheme: a server part, a client part and a database. Clearly, with this approach any change to the logical data structure requires changing the physical structure of the tables in the database, as well as intervening at all three levels of the program code. Microservice architecture does not help either, since the API structure of each service is usually also tightly coupled to the data structure. All this makes adapting applications to new business requirements involving changes in data structure and processing logic long and expensive. In practice, applications are perpetually catching up with business requirements, and because new requirements keep arriving, they never actually catch up. As a result, the IT system never fully meets the functional requirements, even though significant resources are spent to support it.

MVC and three-tier architecture are certainly proven and worthy software design patterns, but isn't it time to move to more modern techniques? We propose to create applications based on the ontological model of the subject area. We will call such applications model-driven because their behavior is largely controlled by the model.

An ontology model is a machine-readable representation of knowledge about a subject area, expressed according to the W3C specifications RDF, RDFS and OWL. An ontology model can describe the structure of data, and the data itself can also be placed in it. Methodologically, the model is divided into two layers: TBox, which contains statements about classes and properties (roughly speaking, about data structure), and ABox, which contains the data itself. Technologically, both layers are represented as a graph. This makes them homogeneous: both the data and its description can be managed with the same tools, such as SPARQL queries, whose structure is the same whether you are working with the data or with its schema (unlike SQL, where data and data structure are managed by different types of statements). An important advantage of ontology models is the ability to assign each entity several types at once, not just one, as in relational DBMSs, where the table a record sits in defines its only type. And each property of a particular information object in the ontological world can have several values, including values annotated with a language tag or a data type.
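Here is a minimal sketch of these points in Python with the rdflib library; all names in the ex: namespace are hypothetical. TBox and ABox statements go into the same graph, one entity carries two types and two language-tagged labels, and the same query mechanism reads both layers:

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS, XSD

EX = Namespace("http://example.org/")
g = Graph()

# TBox: statements about classes and properties
g.add((EX.Person, RDF.type, RDFS.Class))
g.add((EX.SoleProprietor, RDF.type, RDFS.Class))
g.add((EX.SoleProprietor, RDFS.subClassOf, EX.Person))

# ABox: the data itself; one entity, several types, multilingual labels
g.add((EX.ivanov, RDF.type, EX.Person))
g.add((EX.ivanov, RDF.type, EX.SoleProprietor))
g.add((EX.ivanov, RDFS.label, Literal("Ivanov", lang="en")))
g.add((EX.ivanov, RDFS.label, Literal("Иванов", lang="ru")))
g.add((EX.ivanov, EX.revenue, Literal(100000, datatype=XSD.integer)))

# The same query language serves both layers
for row in g.query("SELECT ?c WHERE { ?c a rdfs:Class }"):
    print("class:", row.c)                    # reads the schema
for row in g.query("SELECT ?s WHERE { ?s a <http://example.org/SoleProprietor> }"):
    print("sole proprietor:", row.s)          # reads the data
```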

But that is not all. An ontology model can also describe rules for processing information, and there are several ways to do so. Logical integrity constraints can be described according to the SHACL specification. Rules for enriching and supplementing information on the basis of logical inference are described according to the SHACL Advanced Features specification (SHACL Rules). The same specification defines ways to represent mathematical formulas in the model. Each rule is a set of ontology objects, just like any element of structure or data.
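A sketch of both kinds of rules, assuming the pySHACL library as the processor; the shapes, classes and the pricing formula are invented for illustration. The advanced=True flag enables SHACL Advanced Features so that sh:rule is executed, and inplace=True lets the inferred triples land in the data graph:

```python
from rdflib import Graph, Namespace
from pyshacl import validate

EX = Namespace("http://example.org/")

shapes = Graph().parse(format="turtle", data="""
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix ex:  <http://example.org/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

# Integrity constraint: every person must have exactly one birth date
ex:PersonShape a sh:NodeShape ;
    sh:targetClass ex:Person ;
    sh:property [ sh:path ex:birthDate ; sh:datatype xsd:date ;
                  sh:minCount 1 ; sh:maxCount 1 ] .

# Enrichment rule: derive a net price from the gross price (made-up formula)
ex:OrderShape a sh:NodeShape ;
    sh:targetClass ex:Order ;
    sh:rule [ a sh:SPARQLRule ;
              sh:construct "CONSTRUCT { $this <http://example.org/netPrice> ?net } WHERE { $this <http://example.org/grossPrice> ?gross . BIND (?gross * 0.8 AS ?net) }" ] .
""")

data = Graph().parse(format="turtle", data="""
@prefix ex: <http://example.org/> .
ex:order1 a ex:Order ; ex:grossPrice 100.0 .
ex:p1 a ex:Person .        # no birth date, so the constraint fires
""")

conforms, _, report_text = validate(data, shacl_graph=shapes,
                                    advanced=True, inplace=True)
print(conforms)                            # False: ex:p1 lacks a birth date
print(report_text)
print(data.value(EX.order1, EX.netPrice))  # 80.0, inferred by the rule
```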

The model can describe data transformation rules, user interface definitions and the business logic of application execution alike; for this purpose you can use one of the existing ontologies or create your own. Then you need to write program code that interprets the structure and rules of the model and rearranges its behavior in accordance with any changes made to them. Thus, ontological modeling makes it possible to combine in a single technological representation (a graph) the data itself, the description of its structure and the processing logic. This eliminates the gap between these elements of an application that some methodologies artificially create.
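Here is a sketch of the interpreter idea: the layout of a form lives in the model, and the code below merely walks the graph. The ui: vocabulary is invented for this example; a real project would adopt an existing UI ontology or define its own:

```python
from rdflib import Graph, Namespace, RDF

UI = Namespace("http://example.org/ui#")
EX = Namespace("http://example.org/")

model = Graph().parse(format="turtle", data="""
@prefix ui: <http://example.org/ui#> .
@prefix ex: <http://example.org/> .

ex:PersonForm a ui:Form ;
    ui:targetClass ex:Person ;
    ui:field [ ui:label "Full name"  ; ui:property ex:name      ; ui:order 1 ] ;
    ui:field [ ui:label "Birth date" ; ui:property ex:birthDate ; ui:order 2 ] .
""")

def render_form(graph, form):
    """Build a form description purely from the model's statements."""
    fields = []
    for f in graph.objects(form, UI.field):
        fields.append((int(graph.value(f, UI.order)),
                       str(graph.value(f, UI.label)),
                       graph.value(f, UI.property)))
    for _, label, prop in sorted(fields):
        print(f"<input name='{prop}' placeholder='{label}'>")

# Adding a ui:field triple to the model changes the form; the code stays put.
for form in model.subjects(RDF.type, UI.Form):
    render_form(model, form)
```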

A model-driven application makes it possible to implement certain types of functional requirements very quickly. For example, if a new type of data object appears that describes business objects the application must process according to some logic, it is enough to change the model without touching the program code, and the application will work as it should. Provided, of course, that it was designed accordingly.

Let's illustrate the differences between the traditional and model-driven approaches to developing and supporting application software by comparing how certain stages of application development are performed in each.

For each stage, the traditional approach is described first, followed by the model-driven one.
Data structure description

Traditional approach: Most often relational DBMSs are used, in which the data structure is embodied in the structure of tables. When mapping information about business objects onto an SQL database, compromises have to be made: for example, information about natural persons and sole proprietors is likely to end up in different tables because they have different sets of properties, although in reality every sole proprietor is a natural person. If someone matters to the system both as a natural person and as a sole proprietor, two rows in two different tables will most likely be created for them, which leads to problems with keeping the data up to date.

If key-value or document-oriented stores are used, the data structure is not explicitly described at the database level. This can be considered an advantage, but it complicates data integrity control and developers' collaboration on the program, since each of them can potentially choose arbitrary keys to represent particular attributes of data objects.

Model-driven application: The data structure is described by creating classes and properties. A class is a set of objects that share some common characteristics. Classes can form multiple superclass-subclass hierarchies. You can define arbitrarily elaborate class configurations, for example, "all counterparties except sole proprietors and natural persons".

Property definitions in the ontology model exist independently of classes. This means that a property such as "length" can belong to objects of quite different classes, say, furniture items and truck bodies, which makes it possible to compare objects of different types by the value of the same property.

Each object can belong to any number of classes at the same time. The set of properties applicable to an object is inherited from all of its classes and their superclasses (both points are illustrated in the sketch below).

To create classes and properties, a top-level ontology developed by conceptual modeling professionals is usually used as a basis. It is then extended with the necessary domain (industry) ontologies to ensure data interoperability. Finally, the missing classes and properties are inserted into the resulting set by the ontology analyst.
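A sketch of the last two points, again with hypothetical ex: names: one object carries two types at once, and one property (ex:length) is shared by otherwise unrelated classes, so their instances can be compared directly:

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()

# One record, two roles: no duplication across tables
g.add((EX.smith, RDF.type, EX.NaturalPerson))
g.add((EX.smith, RDF.type, EX.SoleProprietor))

# The same property applies to objects of unrelated classes
g.add((EX.sofa,  RDF.type, EX.FurnitureItem))
g.add((EX.body7, RDF.type, EX.TruckBody))
g.add((EX.sofa,  EX.length, Literal(2.4)))
g.add((EX.body7, EX.length, Literal(6.2)))

# Will the sofa fit into the truck body? One property, one query.
q = """SELECT ?thing ?len
       WHERE { ?thing <http://example.org/length> ?len }
       ORDER BY ?len"""
for thing, length in g.query(q):
    print(thing, length)
```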

Storage of actual data

Traditional approach: In relational DBMSs data is placed in tables, in document-oriented DBMSs in collections. As a rule, application developers must take care of partitioning and sharding large data sets, use the various scaling mechanisms of the DBMS, and optimize query performance.

Model-driven application: The native storage method in the ontological world is the triple store, a graph DBMS of the RDF triple store class. Such DBMSs are poorly suited to transactional workloads but good for analytical ones. If a business application operates in near real time and constantly performs data change operations, data virtualization platforms are usually used for storage: they project sets of objects of certain classes (but not the model structure and rules) into relational DBMSs. For the data consumer, the information still appears as a single graph. The developer does not need to think about the physical structure of information storage, and this structure itself can be changed while the system is running without any changes to the code.
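A sketch of the consumer's view under such virtualization, using the SPARQLWrapper library; the endpoint URL and the ex: vocabulary are made up. The client sees only a SPARQL endpoint, whatever stands behind it:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://example.org/sparql")   # hypothetical endpoint
endpoint.setQuery("""
    PREFIX ex: <http://example.org/>
    SELECT ?event ?ts WHERE {
        ?event a ex:ShipmentEvent ;
               ex:timestamp ?ts .
    } ORDER BY DESC(?ts) LIMIT 10
""")
endpoint.setReturnFormat(JSON)

# If ex:ShipmentEvent objects move from a triple store to a relational
# projection tomorrow, this code does not change.
for b in endpoint.query().convert()["results"]["bindings"]:
    print(b["event"]["value"], b["ts"]["value"])
```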

Description of data processing logic

Traditional approach: All the logic of working with data is described in program code, and the description of the logic is usually tightly coupled to the data structure.

Model-driven application: Logic is described in the form of ontology objects representing different types of rules: data enrichment and transformation, user interface construction, and business process execution.

Change management

Traditional approach: During application operation there is usually a constant flow of new business requirements, driven by changes in regulations and normative acts, new business hypotheses or business process transformation. In the classical approach, the implementation of each requirement is formalized as a task and queued for execution. Different specialists may take part: some modify the DBMS data structure, others the user interface (front end), still others the processing logic (back end). Fulfilling even a simple requirement usually takes a long time, from several weeks to months. All this time the business has to solve its urgent tasks outside the automated system.

Model-driven application: To implement many functional requirements, an analyst only needs to change the data model (which does not affect the existing data unless the analyst explicitly intends it to) and the processing logic described in the model in the form of rules. A properly written model-driven application picks up these changes on the fly and rebuilds its behavior as required. Even if the application needs to be restarted, you will most likely avoid touching its program code.

Changes to the model can be promoted as packages between the development, testing and production environments without touching the application itself. This significantly speeds up the delivery of new features and reduces the cost of adapting the application.
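To make the "on the fly" behavior concrete, here is a sketch of one possible reload strategy: the application keeps no hard-coded list of processable classes; it periodically re-reads the model and rebuilds its dispatch table. The file name, the ex:ProcessableClass marker and the polling interval are all illustrative:

```python
import time
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/")

def load_handlers(model: Graph) -> dict:
    """Map every class marked as processable to its rule resource."""
    return {cls: model.value(cls, EX.processingRule)
            for cls in model.subjects(RDF.type, EX.ProcessableClass)}

handlers = {}
while True:
    model = Graph().parse("model.ttl")       # or re-fetch from the store
    new_handlers = load_handlers(model)
    if new_handlers != handlers:
        handlers = new_handlers              # new classes are live now,
        print("model changed:", len(handlers), "classes")  # without redeploy
    time.sleep(30)
```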

Building the initial version of a model-driven application may require somewhat more effort from analysts and programmers than traditional development. However, operating the application as requirements change will be much more cost-effective. Most importantly, it allows the functional requirements of the business to be met quickly, which matters for building the organization's competitive advantage.

Here is an example from our specialists' practice. A large information system must process information about millions of events related to hundreds of thousands of different objects in near real time. The structure of information about objects and events changes constantly, because the world around us changes. Building such a system on a relational DBMS would be an extremely irrational choice: the number of object and event types is measured in the thousands, and the number of processing algorithms in the hundreds, and all of this is subject to rapid change. Supporting changes in the data structure and processing logic would place a huge burden on the system's developers.

It was decided to describe the data structure and processing logic with an ontology model, to store the data in virtualized storage, and to write the system's program code so that the appearance of new types of objects and events, or changes in their processing algorithms, would not require restarting the application. This capability came in handy more than once, and the total saving of developer time exceeded the "redundant" initial design and development costs many times over. Most importantly, the interval from the appearance of a functional requirement to the release of the finished functionality into production was reduced to a few days or weeks.

If your organization needs to develop business applications that work with complex and volatile data structures, we would be happy to be involved in the design, development and maintenance of such applications. The ontology modeling scenario described in this article demonstrates the most visible and business-critical advantages of this technology over traditional development practices. It embodies all the advantages of the Low Code approach while remaining free of its disadvantages, since development can be done in any programming language and requires no specific knowledge from programmers.