Model-based methods are becoming increasingly popular in development. Why is that? And what exactly are they?
At IMT, we always strive to make our development process efficient so that we can optimally support our customers in the implementation of their projects. There is no way around model-based methods, as they can improve or simplify the most diverse aspects of system development.
There is a very wide range of model-based methods, intended for a wide variety of purposes, and whether a given method adds any value depends very much on the project. It is therefore important to choose the right set of model-based methods for each project.
In this article, we try to clarify all the terms surrounding model-based methods and demonstrate how they can be used.
“A model is a simplified image of reality. Images can take the form of concrete objects or be represented in a purely abstract way.” Wikipedia
A good model shows all relevant properties of the real object, but simplifies them as far as possible.
However, it also follows that a model is always optimized for a specific purpose.
As we can see in Figure 1, models can take a very different form to the real object. There is an almost unlimited number of possible models—the more complex an object, the more possibilities naturally arise. But even for very small things like atoms or light, different models are used in physics.
This may be one of the reasons why the term “model” in “model-based methods” is open to so many different interpretations.
However, the core is always the same and corresponds more or less exactly to the above definition. A model is a simplified representation that depicts all relevant properties of the real object. A model of this kind is less complex and thus simplifies work in relation to the specific purpose.
It is therefore only logical that there is a whole range of “model-based methods” for different areas of application. In the following, we will try to clarify the somewhat confusing terminology that arises.
Given everything we now know about models, the purpose of use, as the central property of a model, is a suitable criterion for classification.
This also makes it clear from the outset that we only want to consider approaches for the development of technical systems here. So we can safely disregard the model car from Figure 1. However, the other two models demonstrate two important purposes. These purposes could be subdivided as follows:
Even though there is some overlap, the model-based methods can be divided into these two groups based on their basic intention.
Starting from a software application, it is obvious that a behavior-describing model can be used to implement this application automatically.
If this approach is extended by a second behavior-describing model, which represents the environment, i.e. the behavior of all interfaces of the first model, it is possible to simulate the behavior of the entire system.
The most common term for such an approach is:
This also makes it possible to carry out simulations or tests for different development stages. The possible approaches are referred to as:
A description of these approaches can be found in the glossary below.
Since the behavior is described by the model, it is not only possible to have the software application generated automatically, but also to verify the implementation in different development phases.
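To make the idea tangible, here is a minimal, purely illustrative Python sketch of model-in-the-loop simulation (in practice this is done with dedicated tools, not hand-written code): a behavior-describing model of the software (an on/off heater controller) is simulated together with a behavior-describing model of the environment (a room that loses heat to its surroundings).

```python
# Purely illustrative model-in-the-loop (MIL) sketch: a behavioral
# model of the software is closed in a loop with a behavioral model
# of its environment, so the whole system can be simulated.

def controller(temperature, setpoint=21.0):
    """Software model: switch the heater on below the setpoint."""
    return temperature < setpoint  # True means "heater on"

def plant(temperature, heater_on, dt=1.0):
    """Environment model: simple thermal behavior of the room."""
    heating = 0.8 if heater_on else 0.0
    cooling = 0.1 * (temperature - 15.0)  # heat loss toward 15 degC ambient
    return temperature + (heating - cooling) * dt

def simulate(steps=200, start=15.0):
    """Close the loop: software model and environment model interact."""
    temperature, trace = start, []
    for _ in range(steps):
        heater_on = controller(temperature)          # software model
        temperature = plant(temperature, heater_on)  # environment model
        trace.append(temperature)
    return trace

trace = simulate()
print(f"temperature after simulation: {trace[-1]:.2f}")
```

Replacing the environment model with real hardware interfaces step by step corresponds to moving from MIL toward SIL, PIL and finally HIL.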
The third model in Figure 1 shows a highly abstracted structure of a car. A car consists of two front wheels, two rear wheels and an engine, with two of the wheels driven by the engine via an axle. This type of model is very well suited to representing structures and relationships.
Model-based methods that use similar types of models to manage information include:
The first two methods are used in the specific disciplines of mechanics and electronics and are integrated in the corresponding development tools. MBSE, on the other hand, is a method for describing architectures of complete systems and thus in principle starts one level higher.
After the above explanations, it is clear that this is the wrong question. Each of the above methods has its specific purpose. The question should therefore be:
Model-based methods offer many advantages, but often require appropriate tool support. Where that support is in place, all of the above methods can readily be used.
If all of these methods are used, the following overall picture emerges:
|Model-based design||MBD||A model-based software development method in which two behavioral models are used to describe the behavior of the software and the behavior of the environment. This is also the basis for the "in-the-loop" procedures (MIL, SIL, PIL, HIL).|
|Model-based enterprise||MBE||A model-based mechanics development method in which the 3D model serves as the central source of information.|
|Model-based systems engineering||MBSE||The systems engineering methodology in which information about a system and its structure is stored in a central model.|
|Model in the loop||MIL||The simulation of an embedded system in an early development phase, in which the model of the software is simulated together with the model of the environment. Assumes model-based design.|
|Software in the loop||SIL||The simulation of an embedded system in a very early development phase, where the generated code of the software model is simulated along with the model of the environment. The generated code runs on development hardware that is not identical to the target hardware. Assumes model-based design.|
|Processor in the loop||PIL||The simulation of an embedded system in a very early development phase, where the generated code of the software model is simulated along with the model of the environment. The generated code runs on the target hardware (processor). Assumes model-based design.|
|Hardware in the loop||HIL||The testing of an embedded system at a late stage of development, in which the generated code of the software model is tested on the target hardware in interaction with a real environment recreated by a HIL simulator. Assumes model-based design.|
|Electronic computer-aided design||ECAD||Software for the design of electronics. Also referred to as Electronic Design Automation (EDA).|
|Printed circuit board||PCB||A printed circuit board or circuit board.|
The terms model and model-based can be interpreted very broadly. This has not least to do with the fact that a model is always created for a specific purpose and the set of legitimate models is at least very large, if not infinite.
What they all have in common, however, is that they are simplified, or reduced to what is essential. The aim is always to facilitate the activities for which the model was created.
General advantages and disadvantages of model-based methods:
The advantages lie in the improvement or support of work processes.
The necessary tool support can be considered a disadvantage. This can be further subdivided into tool costs, familiarization with new tools and the danger of tool lock-ins (strong ties to proprietary data formats, etc.).
However, the disadvantages mentioned can be greatly reduced with a considered choice of tools/development tools. Of course, even then it is still necessary to weigh up the advantages and disadvantages. At IMT, we are convinced that every project, regardless of size, can benefit from the use of model-based methods.
Examples for application of model-based methods at IMT:
For the development of specific electronic and mechanical components, we use CAD systems such as “Altium Designer” or “SOLIDWORKS 3D CAD”. The majority of such systems are already established on the market.
When developing larger systems, we mostly use MBD. In addition to the benefits of rapid prototyping, this also enables early fault detection through the simulation/testing approaches offered by the “in-the-loop” processes. Among other things, IMT relies on “MATLAB/Simulink” for this. For example, this is used in the development of measuring instruments.
If the product also has to meet strict standard requirements, it makes sense to combine this with an MBSE approach, which supports proof of conformity with the standard. For example, this is the case in the development of medical technology products.
Model-based methods are also often used in smaller projects such as test automation tools, where they allow "boilerplate code" to be generated. Among other things, IMT uses its own "DATAFLOW Designer" software tool, which follows an MBSE approach.
EN ISO 13485 in its 2016 edition, as well as the FDA guidances and the European Medical Device Regulation (MDR), require software used in the quality management systems of medical device manufacturers to be validated. While ISO 13485 and the EU MDR do not go into detail on what validation should look like, the US FDA published guidance on January 11, 2002: "General Principles of Software Validation; Final Guidance for Industry and FDA Staff". The ISO and IEC working groups have also considered this topic and published ISO/TR 80002-2 in 2017: "Validation of software for medical device quality systems".
The aim of this article is to demonstrate a simple way to set up tool validation.
Both ISO 13485 (in chapter 4.1.6) and the FDA Guidance (in chapter 4.8) require software validation to be commensurate with the risk of the software, regardless of the size of the company or the resources available. Accordingly, a quality management system requires an overview of the software applications used and their intended use. Preferably, this overview will also document whether the software is
For standard software such as Microsoft Word, which is installed in the thousands, the probability that an error will be found through the swarm intelligence of its thousands of users is probably much higher than through tool validation. This circumstance should be taken into account in tool validation. It is possible that the process for standard software, which is only used for non-critical purposes, has already been completed at this stage.
The software validation plan should include the following elements:
These elements can either all be documented in one document with different sub-chapters or they can be outsourced to different documents. The latter is particularly suitable for more extensive requirements documents.
The chapter or document describing the software should describe the intended use, the intended users and their environment. Use case diagrams can be beneficial to give a simple, visual overview:
The user requirements should be in a testable format and given a unique ID. At IMT, we mostly use the following syntax:
|UR-[ID]||[Role] would like [Function] for [Purpose]|
for example, such a requirement could be
|UR-001||The server administrator would like to be able to reset passwords, so that users who have forgotten their password can be granted access again.|
|UR-002||The user would like to save a partially completed form, in order to be able to continue working after an interruption.|
Both requirements have a unique ID and can be tested. Requirements that cannot be tested are to be avoided. For example, “fast” should be replaced by a time such as “2 seconds”, or “bright” by “under an inspection lamp with 1500 lx”.
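As an illustration only (the class and field names are our own, not a prescribed format), the syntax above can be captured in a small data structure so that every requirement automatically carries a unique ID:

```python
# Hypothetical sketch of the user-requirement syntax
# |UR-[ID]||[Role] would like [Function] for [Purpose]|
# captured as a small immutable data structure.
from dataclasses import dataclass

@dataclass(frozen=True)
class UserRequirement:
    uid: str       # unique ID, e.g. "UR-001"
    role: str      # [Role]
    function: str  # [Function]
    purpose: str   # [Purpose]

    def text(self):
        """Render the requirement in the standard sentence form."""
        return f"{self.uid}: The {self.role} would like {self.function} for {self.purpose}."

ur_001 = UserRequirement(
    "UR-001",
    "server administrator",
    "to be able to reset passwords",
    "granting access again to users who have forgotten their password",
)
print(ur_001.text())
```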
Each user requirement requires at least one test. In addition to a unique ID, the test specification includes the test instruction and the acceptance criterion or expected result. A test for the UR-001 above could look like this:
|ID||Test steps||Expected result|
|1||Start the admin tool and log in with a username and password that has administrator rights||Admin tool starts and an admin is logged in|
|2||Select the user “Vergesslich, Hans” and click on “Reset password”||Password reset window pops up|
|3||Enter a new password and save||New password has been set.|
|4||Log into the app as “Vergesslich, Hans” and enter the new password||Login successful.|
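Such a test specification can also be held as structured data, with one instruction and one expected result per step. The following sketch is purely illustrative; the IDs and field names are our own assumptions:

```python
# Hypothetical sketch: the test specification for UR-001 as a list of
# steps, each pairing a test instruction with an expected result.
test_ur_001 = {
    "test_id": "TC-001",
    "verifies": "UR-001",
    "steps": [
        ("Start the admin tool and log in with administrator rights",
         "Admin tool starts and an admin is logged in"),
        ('Select the user "Vergesslich, Hans" and click "Reset password"',
         "Password reset window pops up"),
        ("Enter a new password and save",
         "New password has been set"),
        ('Log into the app as "Vergesslich, Hans" with the new password',
         "Login successful"),
    ],
}

# Print the specification as a numbered step list.
for number, (instruction, expected) in enumerate(test_ur_001["steps"], start=1):
    print(f"{number}. {instruction} -> {expected}")
```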
Since the effects of a software error in the quality management system are different from those of a medical device, the risk management plan must also be adapted accordingly. In particular, the definitions of the impact categories must be considered differently. Depending on the area of application, the same impact categories take on completely different meanings. The impact category “critical” potentially means the death of a patient in the case of a medical device, but the loss of data in the case of medical device tracking software.
Before the validation report can be produced, the software must be tested and the risks assessed.
The software must be tested according to the test specification. In the process, not only a "pass/fail" must be logged for each test step, but also the actual behavior observed, so that the test can be reproduced. The software and its environmental conditions must therefore also be logged. This includes:
Date, tester and four-eyes-principle review
Using the above example, the report could look as follows:
The overall test is:
Date of assessment: January 01, 2000
Auditor: Max Muster
Evaluation: Maximilia Meier
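A hypothetical sketch of such a test-run record, capturing the points above: verdict and observed behavior per step, plus the software identification, environment, date, tester and four-eyes reviewer (all names and field names are illustrative):

```python
# Hypothetical sketch of a reproducible test-run record: it logs not
# only pass/fail per step, but the observed behavior and the exact
# software and environment under test.
from datetime import date

test_run = {
    "software": "Admin tool V1.0.1, Build 1.0.1.00123",
    "environment": "Windows 10, 64 bit",
    "date": date(2000, 1, 1).isoformat(),
    "tester": "Max Muster",
    "reviewer": "Maximilia Meier",  # four-eyes principle
    "results": [
        # (step number, verdict, observed behavior)
        (1, "pass", "Tool started, admin account logged in"),
        (2, "pass", "Reset window appeared"),
        (3, "pass", "New password accepted and saved"),
        (4, "pass", "Login with the new password succeeded"),
    ],
}

# The overall verdict is "pass" only if every single step passed.
overall = "pass" if all(v == "pass" for _, v, _ in test_run["results"]) else "fail"
print(f"overall result: {overall}")
```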
The risk analysis should assess all known problems. In the case of software that is distributed for use in a regulated environment, a corresponding list of known anomalies is often also published. It is worthwhile to check the manufacturer’s support website or to contact support. In addition to problems known to the manufacturer, shortcomings such as missing features must also be assessed. If a risk is not acceptable, control measures must be defined to reduce the risk. As medical device manufacturers who are used to risk management according to ISO 14971, we recommend following the same process.
The software validation report should include the following elements:
NB: For “smaller” applications, it is possible to produce the test report, risk analysis and validation report in one document.
Identifying the software and environment, test report
Often there is a matrix of validated versions and environments for the software. If this is the case, it should be mapped accordingly in the validation report:
|Software Version||Windows 10, 32 bit||Windows 10, 64 bit|
|V1.0.1, Build 1.0.1.00123||n/a||Report 1.pdf|
|V1.0.1, Build 1.0.2.00254||Report 2.pdf||Report 3.pdf|
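Such a matrix can also be kept as structured data. The following sketch (the report file names and version strings are taken from the example above; the helper function is our own) derives the validation status of a version/environment pair:

```python
# Hypothetical sketch of the validation matrix: which report covers
# which software version on which environment (None = not validated).
matrix = {
    ("V1.0.1, Build 1.0.1.00123", "Windows 10, 32 bit"): None,
    ("V1.0.1, Build 1.0.1.00123", "Windows 10, 64 bit"): "Report 1.pdf",
    ("V1.0.1, Build 1.0.2.00254", "Windows 10, 32 bit"): "Report 2.pdf",
    ("V1.0.1, Build 1.0.2.00254", "Windows 10, 64 bit"): "Report 3.pdf",
}

def is_validated(version, environment):
    """A pair is validated only if a report exists for it."""
    return matrix.get((version, environment)) is not None

print(is_validated("V1.0.1, Build 1.0.1.00123", "Windows 10, 32 bit"))  # False
```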
Risk management report on the known anomalies of the software
The validation report shall further assess the residual risk arising from the use of this software. The focus is on the acceptability of the known anomalies and/or missing functions.
Another aspect to cover is to prove that all requirements for the software have been tested. Depending on the scope of the requirements and the documentation of the tests, two ways of documenting this have emerged:
Forward linking of the requirements to the tests
A column is added to the requirement table where the requirement is checked:
|UR-001||The server administrator would like to be able to reset passwords, so that users who have forgotten their password can be granted access again.||TC 1-4|
|UR-002||The user would like to save a partially completed form in order to be able to continue working after an interruption.||TC 5-8|
This variant is particularly suitable for a small scope of requirements and if the entire validation is created in one document.
In the case of extensive requirements documents or if the test implementation has its own IDs, it is advisable to create a traceability matrix that contrasts the requirements with the test specification and possibly the test report:
If several reports need to be mapped for different runtime environments and software versions, this can be mapped very easily in a traceability matrix by adding additional columns.
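A traceability matrix of this kind lends itself to being generated and checked automatically. The following small sketch (requirement and test IDs are illustrative) builds the matrix and flags requirements without test coverage:

```python
# Hypothetical sketch of an automated traceability check: every
# requirement must be covered by at least one test case, and the
# matrix makes the mapping explicit.
requirements = ["UR-001", "UR-002"]
tests = {
    "TC-1": ["UR-001"], "TC-2": ["UR-001"], "TC-3": ["UR-001"],
    "TC-4": ["UR-001"], "TC-5": ["UR-002"], "TC-6": ["UR-002"],
}

def traceability(requirements, tests):
    """Map each requirement to the test cases that verify it."""
    matrix = {ur: [tc for tc, urs in tests.items() if ur in urs]
              for ur in requirements}
    uncovered = [ur for ur, tcs in matrix.items() if not tcs]
    return matrix, uncovered

matrix, uncovered = traceability(requirements, tests)
for ur, tcs in matrix.items():
    print(ur, "->", ", ".join(tcs))
assert not uncovered, f"untested requirements: {uncovered}"
```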
Formal evaluation and approval
Last but not least, the entire package of documentation should be evaluated and documented with an approval decision.
As shown in the images, the validation of the software in the quality management system is nothing more than the systematic, documented execution of the upper left and right corners of the V-model widely used in software development:
Embedded "system design", "system architecture", "software architecture": what exactly is behind these terms? What do they mean? How are they defined? And why is it important to differentiate between them?
This article deals with the answers to these questions.
Here, we are dealing with technical systems, or more specifically, with embedded systems. An embedded system is defined as follows:
“An embedded system is an electronic computer that is integrated (embedded) in a technical context. In this context, the computer either performs monitoring, control or regulating functions or is responsible for some form of data or signal processing, for example in locking or unlocking, encoding or decoding or filtering.”
A very similar definition can be found at https://www.embedded-software-engineering.de:
“An embedded system is a binary-valued digital system (also called a computer system) that is embedded in and interacts with a surrounding technical system. Here, the computer usually takes over monitoring, control or regulation functions, but is often also responsible for some form of data or signal processing.”
These definitions can be clarified with the following diagram:
For the sake of simplicity, throughout the remainder of this article we will often simply refer to the "system", meaning the corresponding technical or embedded system.
In order to explain the terms, we must demonstrate how such a system is built. Figure 1 presents the embedded system as a black box. The structure of the system has not yet been defined. The components from which the system is to be built must be defined. This activity is known as “design”. A “system design” is therefore about defining and determining the system structure. Based on the diagram (Figure 2), the result of this design activity could look as follows:
Design activities tend to be iterative: the components, referred to here as architectural elements, are decomposed until they are sufficiently defined. Only then can the creation (development) of the system begin.
The result of this design activity is called “architecture”. Figure 2 is therefore a first “system architecture” (strictly speaking, Figure 2 is a reference architecture of a general embedded system), although the term architecture still needs to be defined more precisely in this context. There is no single established definition of the term architecture. You can find a list of definitions here: Definition Software Architecture. While these are definitions of software architecture, they also apply to the term system architecture. The distinction lies in the scope—more on this below.
We use the following definition of architecture (from "Software Architecture in Practice – Third Edition"):
Definition of (software) architecture:
“The (software) architecture of a system is the set of structures needed to reason about the system, which comprise (software) elements, relations among them, and properties of both.”
It contains the following three key statements:
So, what does that mean?
The structures consist of the elements, their relationships and their properties. Architecture comprises the structures that are needed to ensure that the system behaves as required; in addition to the fulfillment of functional requirements, this covers the whole area of non-functional requirements. For example, requirements regarding further development usually influence the structure of the system. However, not all structures in a system belong to the architecture: those that are not architecture-relevant are referred to not as architecture but as design.
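To make the definition concrete, here is a toy sketch of a structure in this sense, using the car model from Figure 1: a set of elements, the relations among them, and properties of both (all names are illustrative):

```python
# Toy sketch of an architectural structure: elements with properties,
# and named relations among them, following the definition
# "elements, relations among them, and properties of both".
elements = {
    "engine":     {"kind": "drive"},
    "front_axle": {"kind": "transmission"},
    "wheel_fl":   {"kind": "wheel"},
    "wheel_fr":   {"kind": "wheel"},
}
relations = [
    # (source element, relation, target element)
    ("engine", "drives", "front_axle"),
    ("front_axle", "turns", "wheel_fl"),
    ("front_axle", "turns", "wheel_fr"),
]

def related(source, structure=relations):
    """All outgoing relations of an element, for reasoning about the system."""
    return [(verb, target) for s, verb, target in structure if s == source]

print(related("front_axle"))
```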
“Design” is on the one hand the activity that leads to an architecture (see above), and on the other hand the term for all non-architectural results of the design activity.
Many definitions simplify things by stating that the rough structure or the structures defined early in the design process are architecture. All later or more detailed structures are design. However, these definitions are insufficient, as more detailed structures may well be necessary to meet the required demands. Also, in agile development, not everything relevant is defined early in the design process.
Finally, the distinction between “system architecture” and “software architecture”: As already mentioned above, they differ in scope. The system architecture includes the structures of various system elements, such as hardware, software and mechanics. The software architecture, on the other hand, comprises the structures that define the structure of a software element (an element from the system architecture). This can be seen in the diagrams below:
In the further decomposition of a software element, we eventually end up with the software architecture:
The most important terms and their definitions are summarized again in the following table:
|Embedded system||An embedded system is an electronic computer which carries out a defined function, is embedded in a physical environment, is optionally surrounded by other subsystems and has an optional user interface.|
|Design (activity)||The activity that defines how a system is built from several components (architecture elements).|
|System design (activity)||See “Design”— with the specification that it is the design of a system, i.e. various elements such as hardware, software, mechanics.|
|Software design (activity)||See “Design”— with the specification that it is the design of a software (software elements).|
|Architecture||The architecture of a system is the set of structures needed to reason about the system and to ensure that the system satisfies the required properties. The structures comprise the elements, their relationships and their properties.|
|The design||The structures which, in addition to the architecture-relevant structures (see "Architecture"), also define the structure of the system, but which are not relevant for the required system properties.|
|The system architecture||See “Architecture”— with the specification that it is the design of a system, i.e. various elements such as hardware, software, mechanics.|
|The software architecture||See “Architecture”— with the specification that it is the design of a software (software elements).|
The documentation of system and software architectures is explicitly required for medical systems with safety-critical requirements, such as patient monitoring systems or ventilators. In addition, proof must be provided that an architecture has also been implemented according to its definition. But even outside the regulated area, an accurate description of an architecture is useful for most applications: it is not only proof that requirements have been fulfilled, but also helps to detect errors in the specification at an early stage. This article shows which (auxiliary) means can be used to describe an architecture and what should be observed in the process.
As explained in the article on system and software architecture, the architecture of a system is "the set of structures needed to reason about the system and to ensure that the system satisfies the required properties."
Admittedly, this definition is rather abstract and shows no concrete reference to hardware, electronics or software. However, a system consists of different structures such as software, electronics or mechanics. In addition, a system can also be characterized by logical structures or temporally sequential processes. In this sense, the term “the architecture” can be misleading, as the singularity of the term architecture might suggest that an architecture consists of a single image. However, practical experience shows that a system must be viewed from many different angles (views) and thus must also be described. For example, a system may be divided into logical units and functions that do not necessarily correspond to the physical division. This can be shown by the following, highly simplified, example:
But which is the right or best view, and most importantly, the one recognized by regulations?
An architecture description usually consists of showing different views, making connections between those views, and making the fulfillment of different requirements clear. In this respect, the description of an architecture consists of several views, which highlight different aspects. That is, there is no one true view; rather, the description of an architecture takes different views into account equally.
The views are an essential part of the international ISO/IEC/IEEE 42010 standard (https://www.iso.org/standard/50508.html), which provides guidance on how to describe the architecture of a system. This standard was published in 2011 and is the result of a joint ISO and IEEE revision of the earlier (software-heavy) IEEE 1471 standard (https://standards.ieee.org/standard/1471-2000.html). The original IEEE 1471 standard specified how to describe the architecture of a system. Further requirements, such as the structure of an architecture description as well as requirements for the description language, were added in 42010. However, the standard does not prescribe a specific architecture description or description language; instead, it standardizes the practice of describing an architecture.
Views are intended to represent a system by considering stakeholder requirements (concerns) based on defined viewpoints. Figure 2 shows how views, viewpoints, requirements and stakeholders are related.
Due to IMT’s many years of experience in the field of system and software design, the following process has been established to create an architecture description, which is strongly based on the ISO 42010 standard:
The individual steps are described in more detail in the following chapters.
In order to define the viewpoints in a targeted and, above all, systematic manner, the first step is to identify the stakeholders who work with and for the system, or make decisions about the system. These may include developers, product managers, risk managers, end users, production staff, project managers, etc.
The requirements for the architecture description should be recorded and documented based on the stakeholders identified. In addition to the context interview held with the stakeholders, information on requirements for architecture documentation can also be found in the user or system requirements. Table 1 shows how this might look using Figure 1 as an example.
|Stakeholder||Expectations of the architecture (concerns)|
|Software Developer||Definition of the target system(s), definition of interfaces|
|Product Manager||Expected scalability, proof of fail-safe operation|
|Data Protection Officer||Evidence of data protection|
The viewpoint is characterized by a set of requirements from the corresponding stakeholders, as well as different conventions regarding model types, notations and techniques for the views based on them. Table 2 gives an example of how the viewpoints can be defined. The corresponding notations must be defined in addition to the tabular list. This can be done either in each view separately, or, in terms of the 42010 standard, in a separate viewpoint-specific section.
|Viewpoint||Concerns addressed||Model types||Analysis techniques|
|Interfaces||Definition of interfaces||Hierarchical decomposition diagram including a description of all interfaces.||Reviews|
|Fail-safe operation||Proof of fail-safe operation||Hierarchical decomposition diagram including a description of all interfaces.||Reviews, fault tree analysis|
|Data processing procedures||Data protection||Tabular list of data processing operations, including sensitivity classification and intended use.||Checklists|
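The mapping between stakeholder concerns and viewpoints can also be checked mechanically. The following sketch (concern and viewpoint names are illustrative, loosely based on the tables above) flags concerns that no viewpoint addresses:

```python
# Hypothetical sketch linking stakeholders, concerns and viewpoints:
# every stakeholder concern should be addressed by at least one
# viewpoint of the architecture description.
concerns = {
    "Software Developer": ["Definition of interfaces"],
    "Product Manager": ["Expected scalability", "Proof of fail-safe operation"],
    "Data Protection Officer": ["Evidence of data protection"],
}
viewpoints = {
    "Interfaces": ["Definition of interfaces"],
    "Fail-safe operation": ["Proof of fail-safe operation"],
    "Data processing procedures": ["Evidence of data protection"],
}

all_concerns = {c for cs in concerns.values() for c in cs}
addressed = {c for cs in viewpoints.values() for c in cs}
unaddressed = sorted(all_concerns - addressed)
print("concerns without a viewpoint:", unaddressed)
```

Here the check would reveal that "Expected scalability" is not yet covered by any viewpoint, so either a view must be added or the gap justified.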
A viewpoint is a type of template that can be (re)used across multiple systems and/or exchanged among architects. This template is then used to create system-specific views. Well-known collections of viewpoints are the "4+1 view model" by Kruchten (https://www.cs.ubc.ca/~gregor/teaching/papers/4+1view-architecture.pdf) and the arc42 template, which are used especially for software systems. However, the use of viewpoints is not limited to software architectures; they can be applied to any system.
The definition and choice of viewpoints is primarily determined by the concerns of the identified stakeholders. In the following chapters, we present three possible views, although this set is not applicable to every system.
In general, it is recommended to keep the number of views used to a reasonable minimum, as redundancies increase with each additional view. In addition, each view must be maintained, tracked and, most importantly, kept consistent throughout the course of the project. The resulting effort increases with the number of views. In this case, less is more—provided that the views allow the requirements to be proven.
The logical view is often used as the (primary) viewpoint for devices consisting of hardware and software. Here, the system is broken down into its individual logical parts, which are then arranged in relation to one another. In this view, functional requirements can be assigned to components and possible functional redundancies within the system can be identified. Explicit assignment of system requirements makes sense, as this ensures traceability. Proof of traceability is required especially for safety-critical systems. This applies not only to system architectures, but also to subsystem and software architectures.
Communication paths and timing sequences/dependencies can also be defined within the logical view, depending on the architecture description language. A combined representation is often appropriate, as the logical division and the associated processes are closely interdependent. Alternatively, the processes can be represented in a dedicated view, in which case the design of the logical view and the process view is usually iterative.
As with the logical view, different physical viewpoints can expand the description and understanding of the system. Physical views can be, for example, electronic, pneumatic or electromechanical views, which are useful depending on the requirements of the system. A physical view can also make sense for distributed software systems, in which implementing non-functional requirements such as scalability, availability or performance can be demonstrated.
The development view is often only implicit, but it is also very important for a smooth project workflow. This shows how the system to be developed can be divided into small work packages in order to assign them to individual developers or development teams. Not only can internal requirements such as reusability or the selection of development tools be taken into account, but this view also enables development costs and deadlines to be monitored throughout the course of the project. The development view is often not part of the formal system architecture, but is created as part of project planning.
Running scenarios is a popular tool for architecture verification. The most important key scenarios are defined depending on the requirements. A scenario is not a view in its own right, but builds on existing views by linking them. Therefore, a complete architecture description is the basis for verification, or rather, the scenarios can only be run if the architecture has been fully described. Analogous to the description of the logical or physical view, the sequence of the scenario is presented in diagram form. Object interaction diagrams are often used to represent the interaction between logical and/or physical elements (see also Figure 1). Such scenarios are not only used to verify an architecture, but also to help understand the structure of a system.
Often, different stakeholders are involved who place different demands on a system. In order to address these concerns specifically and to verify that they have been fulfilled, several views are needed to describe an architecture. These views can be derived from generally defined viewpoints. The viewpoints consist of universally applicable conventions and are thus interchangeable and reusable across multiple systems. Finally, an architecture description can be verified with the help of scenarios to demonstrate fulfillment of the most important requirements.
 Basic principle (II) of ISO 42010
References to non-existent chapters or documents, outdated figures, inconsistent terms!
Unmaintained system architecture documentation often raises more questions than it is able to answer. Accordingly, care should be taken to ensure that all information is kept up to date.
However, even with the best intentions, precisely defined workflows, and strictly adhered to development processes, mistakes will creep in every now and then. As a result, the information is not always completely trustworthy and must be checked more often during implementation.
The following section highlights the issue of information management in the system development process. The weaknesses of the “classic” document-centric approach are addressed and alternative approaches are presented.
A system architecture has a hierarchical structure, which is usually described in one or more documents by means of figures and text.
Figure 1 shows an example of the structure of a system architecture. On the one hand, the arrows represent the hierarchical dependencies; on the other, the flow of information (terms, etc.) over the course of the design process.
Thus, the information is spread over several documents and images, and if one piece of information is used in several places, it is always duplicated. Each copy of the information loses its connection to the original.
The issue here is evident. To make a change correctly, the information must be adjusted in all places where it is used. This is not only very time-consuming, but also prone to error, as it is not known where else the corresponding information is being used.
The continuous copying and passing on of information is strongly reminiscent of the well-known children’s game “Chinese Whispers”, the appeal of which lies in the fact that mistakes are inevitable.
In Figure 2, this simple game is used as an example to show what could happen with a parameter designation—in this case even without introducing a content error. However, this is also very detrimental to the comprehensibility of a system architecture.
If we look at the “Chinese Whispers” example, the solution is probably obvious: instead of whispering the message from person to person, everyone hears it directly from the original source. And just like that, all the fun of the game is ruined. However, for all other cases, where divergent information is no fun at all, this is very advantageous.
In software development, there is a basic principle that aims to do just that: “Don’t repeat yourself”. The main goal is to avoid redundancy in order to increase consistency. This principle can also be applied to a system architecture, making it possible to keep it up to date easily and with little risk of error, benefiting not only the design process, but also development and the product lifecycle.
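To make the “Don’t repeat yourself” idea concrete, here is a minimal sketch in Python. All names (`PARAMETERS`, `max_flow_rate`, `describe`) are illustrative assumptions, not taken from any real architecture: the point is that each parameter is defined exactly once, and every document, label or diagram renders it from that single definition.

```python
# Hypothetical sketch: a single point of definition for shared parameters.
# Every artifact (document, diagram, code) references PARAMETERS instead of
# keeping its own copy, so a rename or value change happens in one place only.

PARAMETERS = {
    "max_flow_rate": {"value": 250, "unit": "ml/min"},
}

def describe(name: str) -> str:
    """Render a parameter reference for use in a document or label."""
    p = PARAMETERS[name]
    return f"{name} = {p['value']} {p['unit']}"

print(describe("max_flow_rate"))  # max_flow_rate = 250 ml/min
```

If the parameter is renamed or its value changes, every generated reference is automatically correct the next time it is rendered.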
Another way to achieve this is by continuously synchronizing information. In this case, any change made to a piece of information is automatically propagated to all instances where it appears. This is a decentralized approach; the closest software-development analogy is a Git repository, with the ability to merge multiple copies of the information.
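The Git analogy can be sketched as a three-way merge: two diverged copies of the same parameter set are reconciled against their common ancestor. The data and the `merge` helper below are hypothetical and simplified (real merge tools handle deletions and conflict resolution far more carefully); they only illustrate the principle.

```python
# Hypothetical sketch of the decentralized approach: two copies of the same
# parameter set diverge and are reconciled with a three-way merge against a
# common ancestor, the way a Git repository merges two branches.

def merge(base: dict, ours: dict, theirs: dict) -> dict:
    merged = dict(base)
    conflicts = []
    for key in set(base) | set(ours) | set(theirs):
        o, t, b = ours.get(key), theirs.get(key), base.get(key)
        if o == t:
            merged[key] = o          # both sides agree
        elif o == b:
            merged[key] = t          # only "theirs" changed
        elif t == b:
            merged[key] = o          # only "ours" changed
        else:
            conflicts.append(key)    # both changed: needs a human decision
    if conflicts:
        raise ValueError(f"merge conflicts: {conflicts}")
    return merged

base   = {"pump_speed": 100, "label": "Pump A"}
ours   = {"pump_speed": 120, "label": "Pump A"}     # we tuned the speed
theirs = {"pump_speed": 100, "label": "Main pump"}  # they renamed the label
print(merge(base, ours, theirs))  # {'pump_speed': 120, 'label': 'Main pump'}
```

As soon as both sides change the same piece of information, the merge cannot be resolved automatically, which hints at why this decentralized route needs tool support and discipline.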
Single Point of Definition
“The code is the design” is another approach from software development that can be seen as either pragmatic or radical. Perhaps this approach is also a result of the “Law of the Instrument” (“If your only tool is a hammer, every problem looks like a nail”).
And while the simplicity of this idea has a certain appeal, it comes with too many limitations.
A somewhat more general approach is to use a central information manager, which manages information centrally and makes it available for the various areas of application. This idea is nothing new and is used in software development—for example in the “Model View Controller” concept. In systems development, this is often referred to as “model-based…” or “model-driven…”. The term “model” refers to the place where the central data is stored or managed.
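A minimal sketch of such a central information manager, loosely following the “Model View Controller” idea mentioned above: a central model holds the data, and registered views are notified of every change so all representations stay consistent. Class and field names (`SystemModel`, `LoggingView`, `sensor_type`) are invented for illustration.

```python
# Hypothetical sketch of a central information manager: views register with a
# central model and are notified on every change, so all representations of a
# piece of information stay consistent ("Model View Controller" in miniature).

class SystemModel:
    def __init__(self):
        self._data = {}
        self._views = []

    def attach(self, view):
        self._views.append(view)

    def set(self, key, value):
        self._data[key] = value
        for view in self._views:
            view.update(key, value)   # push the change to every consumer

class LoggingView:
    def __init__(self):
        self.lines = []

    def update(self, key, value):
        self.lines.append(f"{key} -> {value}")

model = SystemModel()
view = LoggingView()
model.attach(view)
model.set("sensor_type", "NTC 10k")
print(view.lines)  # ['sensor_type -> NTC 10k']
```

Additional views (a document renderer, a diagram exporter, a code generator) could be attached in the same way without the model knowing anything about them.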
In Figure 4, the example of the structure of a system architecture is shown above, with a central information management instance added.
However, this approach needs stronger support from specific software solutions, since information must be obtained from the “system model”. As a result, the architecture documents are no longer the direct output of the design process, but must be generated from the “system model” in a separate step.
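The generation step can be sketched as follows: a small, invented “system model” is rendered to a Markdown fragment, so the architecture document is an output of the model rather than a hand-maintained copy of it. The model structure and component names are illustrative assumptions only.

```python
# Hypothetical sketch: architecture documents are no longer written by hand
# but generated from the "system model" in a separate step. Element names
# exist exactly once, in the model, and every generated document stays in sync.

system_model = {
    "name": "Infusion Pump",
    "components": [
        {"name": "Motor Controller", "responsibility": "drives the pump motor"},
        {"name": "Flow Sensor", "responsibility": "measures the actual flow"},
    ],
}

def render_markdown(model: dict) -> str:
    lines = [f"# {model['name']} – Logical View", ""]
    for comp in model["components"]:
        lines.append(f"- **{comp['name']}**: {comp['responsibility']}")
    return "\n".join(lines)

print(render_markdown(system_model))
```

Renaming a component in the model regenerates every document correctly; there is no second copy that could drift out of date.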
Although this involves changing the usual working tools to a certain degree, the advantages of such a solution outweigh the disadvantages. This can be seen not least from the fact that similar concepts have already been established in other areas. Examples include CAD software for mechanical development or ECAD software for electronics development. In IMT, for example, “SOLIDWORKS 3D CAD” and “Altium Designer” are used for this purpose.
A common name for this approach is “roundtrip engineering”, in which information in different documents, software artifacts, etc. is kept in sync. It describes something like the ideal for a software architecture, or even for the implementation of a whole product: regardless of where the information is edited, it is automatically updated in all places where it is used. It promises even more: regardless of whether a new component is added to the architecture or to the implementation, you end up with the same complete set of documents, software artifacts, etc. As nice as this would be, it’s not all plain sailing from here.
The fundamental problem is that this assumes a one-to-one relationship between all hierarchical levels. This would not only lead to very strict limitations in terms of implementation, but in some cases it is not even achievable, as abstraction at higher levels means that not all the information is available.
That being said, if we look at this more pragmatically, it becomes apparent that full roundtrip capability is not even necessary for the use case of system development. If it is clearly defined at which level each piece of information is defined, the result is actually much easier to understand. Apart from this limitation, the centralized data storage introduced earlier can provide the same functionality.
Of course, it would be consistent to use this central data in as many aspects of system development as possible. However, this would require all the necessary functions to be integrated into a software solution, or at least a defined interface to share the information. However, since the different areas of system development each have very specific requirements and there are already very good stand-alone software solutions for this (e.g. “SOLIDWORKS 3D CAD” and “Altium Designer”), it would probably involve an enormous amount of effort to also use the information directly in the specific CAD applications.
However, for the most agile part of system development—software development—it is quite possible to use code generation to create the structure and so-called “boilerplate code” for implementation. This allows the benefits of centralized information management to be leveraged right through to implementation. For this purpose, IMT relies on its own in-house developed software tool “DATAFLOW Designer”, which supports not only the provision of boilerplate code but also verifies conformity to standards.
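As an illustration of the boilerplate idea (not of the actual output of “DATAFLOW Designer”, which is not shown in this article), the sketch below generates Python class stubs from a component list, so the code structure matches the architecture by construction. The template and component names are assumptions for the example.

```python
# Hypothetical sketch of boilerplate generation: class stubs are generated
# from the architecture's component list, so the implementation skeleton
# always matches the system model by construction.

COMPONENTS = ["MotorController", "FlowSensor"]

STUB = '''class {name}:
    """Auto-generated stub – implement the component logic here."""

    def run(self) -> None:
        raise NotImplementedError
'''

def generate_stubs(components):
    return "\n".join(STUB.format(name=name) for name in components)

source = generate_stubs(COMPONENTS)
print(source)
exec(source)  # the generated stubs are valid Python
```

A real generator would of course also emit interfaces, wiring and conformity checks, but the principle is the same: the developer fills in behavior, while the structure comes from the central model.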
The goal is an up-to-date, consistent and comprehensible system architecture that provides every developer with the required information quickly and easily. To achieve this, the developers should be supported in the best possible way during creation and customization. Central information management is a fundamental building block for this, which makes it possible to support developers in a variety of ways—from design to implementation. The achievable increases in efficiency and quality benefit both the customer and the developer.
Take advantage of our 30-day trial version
We hope you enjoy the DATAFLOW software