
Modernising suitability and portfolio monitoring checks


Migration of an advisory services application from stored procedures to web services: Learn more about our recent application modernisation case.

  • Recent application modernisation case for our clients

  • Performance, complexity and quality in focus

  • Migration of the operational production system from the initial to the target scenario

The initial situation was an existing advisory services web application combining different aspects in a single backend, supplemented with additional business logic implemented as stored procedures in a relational database. We supported the customer in rebuilding the existing functionality as modularised web frontends and backend web services, using the database for persistence only.

A major part of the project was rebuilding the component that executes suitability and monitoring checks for security portfolios. Together with the client, we addressed the topics of performance, complexity and quality. The goal was a solution that allows new checks to be implemented with less effort and higher reliability, so that the application can be rolled out to additional countries.

Thanks to the client’s trust, close cooperation and interdisciplinary teams, our experts were able to bring their full experience in application modernisation and agile methods to bear on the highly complex challenges presented by the project. These were the success factors for achieving the project goal: replacing the database-centred solution consisting of stored procedures and database views with a backend-based solution providing a web service as the entry point.

The following sections describe how we migrated the operational production system from the initial to the target scenario. You will also see how we used behaviour-driven development, domain-driven design and agile testing to proceed iteratively with fall-back strategies, and what the resulting improvements were in terms of performance, quality and maintainability.

Initial scenario

The existing solution was implemented mostly in the database with stored procedures and database views. Some of the checks were provided by a rule engine exposing a web service. The backend first called the rule engine and stored its results in the database. In a second step, an initial stored procedure was called to execute a sequence of checks. The implementation combined check configuration, data collection, value calculation, check rules and state handling, sometimes in a single SELECT statement.

Figure 1: Initial scenario with database-centred solution

Two use cases exist:

  • Single portfolio processing to check security transactions for one portfolio, initiated by user actions in one of the consuming applications/UIs.
  • Batch portfolio processing job to check multiple security portfolios, initiated after data updates (such as updates of portfolio, finance and master data).

Due to the scope and complexity of the logic implemented in the database, the following challenges emerged:

  • High testing effort – for manual and automated tests – due to many preconditions and possible results, related to a combination of check configuration, data collection, value calculation, check logic and state handling.
  • Recurring bugs – despite additional effort to increase test coverage – because the implementation was difficult to maintain and extend due to complexity, low visibility and redundant code locations.
  • Performance problems when checking individual transactions due to strongly fluctuating execution times, with runtimes ranging from several seconds up to database timeout.

As a result of these issues and the planned rollout of the application to additional countries, it was decided to look for a solution to address these challenges and be able to implement new checks with lower effort and higher reliability.

Target scenario

The target architecture is based on the following requirements:

  • Separation of the individual steps of check configuration, data collection, value calculation, check logic and state handling to simplify implementation and testing.
  • Execution of checks via a web service to decouple consumers from the database and enable future use outside the application as checks-as-a-service.
  • Reliable execution times for single and batch portfolio processing.

Figure 2: Target scenario with backend-based solution
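
As an illustration of the web service entry point in the target scenario, the following is a minimal sketch of what such a service contract could look like. All names (PortfolioCheckController, CheckRequest, CheckResultDto, ICheckService), routes and payloads are assumptions for illustration, not the project’s actual interface.

```csharp
// Minimal ASP.NET Core sketch of a possible check web service contract.
// All type names, routes and payloads are illustrative assumptions.
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public record CheckRequest(string PortfolioId);
public record BatchCheckRequest(IReadOnlyList<string> PortfolioIds);
public record CheckResultDto(string PortfolioId, string CheckId, string Status, string? Details);

public interface ICheckService
{
    Task<IReadOnlyList<CheckResultDto>> CheckPortfolioAsync(string portfolioId);
    Task<IReadOnlyList<CheckResultDto>> CheckPortfoliosAsync(IReadOnlyList<string> portfolioIds);
}

[ApiController]
[Route("api/checks")]
public class PortfolioCheckController : ControllerBase
{
    private readonly ICheckService _checks;

    public PortfolioCheckController(ICheckService checks) => _checks = checks;

    // Single portfolio processing, e.g. triggered by a user action in a consuming application.
    [HttpPost("portfolio")]
    public async Task<IReadOnlyList<CheckResultDto>> CheckPortfolio(CheckRequest request) =>
        await _checks.CheckPortfolioAsync(request.PortfolioId);

    // Batch portfolio processing, e.g. triggered after portfolio, finance or master data updates.
    [HttpPost("portfolios")]
    public async Task<IReadOnlyList<CheckResultDto>> CheckPortfolios(BatchCheckRequest request) =>
        await _checks.CheckPortfoliosAsync(request.PortfolioIds);
}
```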

Iterative approach

With the target scenario defined, we began solution design to gain an overview of the major building blocks and identify blind spots and uncertainties. It quickly became clear that analysing the existing logic check by check would take a lot of time. Migrating consumers from a tight database integration to a well-defined web service interface would also be challenging. Instead of conducting further analysis to try to answer open questions, we started with the base implementation of a single, initial check. This gave us early feedback to test our plan and validate the estimated complexity for each user story.

To go live with the newly implemented check, we needed a mechanism to integrate existing and new check implementations. This was achieved by transforming the result of the existing implementation into the result structure of the new implementation. Due to the separation into data collection, (stateless) check results and state handling, existing consumers had to interpret check results differently. To adapt consumers iteratively as well, the new data structure was initially converted back to the old result structure on the consumer side. This part proved more difficult than expected due to a huge number of result attributes and special cases, but was the key to avoiding a big-bang approach.
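
To illustrate the consumer-side conversion, here is a rough sketch of the adapter idea: the new, stateless result structure is mapped back to the legacy result structure so that existing consumers can stay unchanged for the time being. All type and property names, and the status-code mapping, are assumptions for illustration; the real structures contained far more attributes and special cases.

```csharp
// Illustrative consumer-side adapter: map the new, stateless check results back to the
// legacy result structure. Names and the status-code mapping are assumptions.
public enum CheckOutcome { Passed, Failed, NotApplicable }

public record NewCheckResult(string CheckId, string PortfolioId, CheckOutcome Outcome, string? Reason);
public record LegacyCheckResult(string CheckCode, string PortfolioId, int StatusCode, string Remark);

public static class LegacyResultAdapter
{
    public static LegacyCheckResult ToLegacy(NewCheckResult result) =>
        new(
            CheckCode: result.CheckId,
            PortfolioId: result.PortfolioId,
            // The legacy structure combined check outcome and state in a single status code.
            StatusCode: result.Outcome switch
            {
                CheckOutcome.Passed => 0,
                CheckOutcome.Failed => 1,
                _ => 2
            },
            Remark: result.Reason ?? string.Empty);
}
```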

Behaviour-driven development (BDD)

We decided to define the requirements of individual checks and value calculations as executable acceptance criteria in order to be able to test them automatically. To do this, we relied on the Gherkin syntax GIVEN, WHEN, THEN and used the SpecFlow library to process the acceptance criteria in unit tests. These Gherkin specs then became the central element of the iterative procedure, which we followed for every suitability and portfolio monitoring check.

Thanks to the executable specification, we were able to start the implementation in a test-driven way, and living documentation of the current behaviour of the system was always available. This gave us further confidence for deployments and refactorings.
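
To make this more concrete, the following is a minimal sketch of how such an executable specification could be bound with SpecFlow. The scenario wording, the check itself and the small CheckFixture helper are illustrative assumptions, not the project’s actual specifications.

```csharp
// Gherkin scenario, e.g. in EquityQuotaCheck.feature:
//   Scenario: Portfolio exceeds the configured maximum equity quota
//     Given a portfolio with an equity quota of 80 percent
//     And a configured maximum equity quota of 60 percent
//     When the equity quota check is executed
//     Then the check result is "Failed"

using TechTalk.SpecFlow;
using Xunit;

[Binding]
public class EquityQuotaCheckSteps
{
    private readonly CheckFixture _fixture = new();

    [Given(@"a portfolio with an equity quota of (\d+) percent")]
    public void GivenAPortfolioWithAnEquityQuota(int quota) => _fixture.PortfolioEquityQuota = quota;

    [Given(@"a configured maximum equity quota of (\d+) percent")]
    public void GivenAConfiguredMaximumEquityQuota(int maximum) => _fixture.ConfiguredMaximum = maximum;

    [When(@"the equity quota check is executed")]
    public void WhenTheEquityQuotaCheckIsExecuted() => _fixture.ExecuteCheck();

    [Then(@"the check result is ""(.*)""")]
    public void ThenTheCheckResultIs(string expected) => Assert.Equal(expected, _fixture.Result);
}

// Minimal illustrative fixture holding the scenario state and a simplified check.
public class CheckFixture
{
    public int PortfolioEquityQuota { get; set; }
    public int ConfiguredMaximum { get; set; }
    public string Result { get; private set; } = "";

    public void ExecuteCheck() =>
        Result = PortfolioEquityQuota <= ConfiguredMaximum ? "Passed" : "Failed";
}
```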

Domain-driven design (DDD)

To write acceptance criteria in Gherkin syntax and define the web service interfaces, a widely understood domain model is key. We therefore focused on the data model from a business viewpoint at the beginning of every iteration. Based on domain-driven design (DDD) theory, the domain model diagram in UML class notation – defining aggregates, aggregate roots, entities and value objects – was another of the major living documents and was extended with every iteration.

To keep the business domain functionality well separated from infrastructure and interface aspects, the implementation was based on DDD tactical design and clean architecture principles:

  • A domain part, containing aggregates, entities, value objects, domain services, check implementations and repository interfaces.
  • An infrastructure part, implementing the repository interfaces to wire domain and infrastructure, such as database access for persistence and web services to call other systems.
  • A web part, defining and implementing the web services provided for consumers.
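
A minimal sketch of how these tactical DDD building blocks could look in C# is shown below. The aggregate, value object and interfaces are illustrative assumptions; the real domain model contained considerably more aggregates and attributes.

```csharp
using System.Threading.Tasks;

// Illustrative tactical DDD building blocks (all names are assumptions).

// Value object: immutable and compared by value.
public record Quota(decimal Percentage);

// Aggregate root: the portfolio under check, carrying the data needed by the checks.
public class Portfolio
{
    public string Id { get; }
    public Quota EquityQuota { get; }

    public Portfolio(string id, Quota equityQuota)
    {
        Id = id;
        EquityQuota = equityQuota;
    }
}

// Repository interface: declared in the domain part, implemented in the infrastructure part.
public interface IPortfolioRepository
{
    Task<Portfolio> GetByIdAsync(string portfolioId);
}

// Check abstraction: the check implementations work purely on domain objects.
public record CheckResult(string CheckId, bool Passed, string? Reason);

public interface ICheck
{
    string CheckId { get; }
    CheckResult Execute(Portfolio portfolio);
}
```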

Fall-back strategy

To integrate changes rapidly and new check implementations iteratively in the operational production system, different feature toggles were implemented on the provider and consumer sides. On the provider side, single-check implementations could be deactivated, resulting in a fall-back to the existing implementation in the database. On the consumer side, the new web service-based implementation could be toggled on and off, again with a fall-back to the existing implementation in the database. This allowed us to activate new implementations iteratively and in test environments first, to gain the confidence to ship changes into production.

To benefit from performance improvements in the new implementation and avoid interference between old and new check results, the existing database implementation was extended with a bit mask to toggle single checks on and off in the main stored procedure. This slight adjustment to skip single checks didn’t affect the existing behaviour and made runtime improvements visible in performance monitoring with every re-implemented check.

Figure 3: Data flows between consumers and the check component, including feature flags
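
On the provider side, the toggle logic could look roughly like the sketch below: if the new implementation of a check is disabled (or not yet available), the dispatcher falls back to the existing database implementation. The toggle source, the gateway to the stored procedures and all names are assumptions for illustration.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Illustrative provider-side fall-back: per-check feature toggles decide whether the new
// backend implementation or the existing stored-procedure implementation is executed.
public record CheckResult(string CheckId, bool Passed, string? Reason);

public interface ICheckToggles
{
    bool IsNewImplementationEnabled(string checkId);
}

public interface ILegacyCheckGateway
{
    // Calls the existing stored-procedure based implementation in the database.
    Task<CheckResult> ExecuteAsync(string checkId, string portfolioId);
}

public interface INewCheck
{
    string CheckId { get; }
    Task<CheckResult> ExecuteAsync(string portfolioId);
}

public class CheckDispatcher
{
    private readonly ICheckToggles _toggles;
    private readonly ILegacyCheckGateway _legacy;
    private readonly IReadOnlyDictionary<string, INewCheck> _newChecks;

    public CheckDispatcher(ICheckToggles toggles, ILegacyCheckGateway legacy, IEnumerable<INewCheck> newChecks)
    {
        _toggles = toggles;
        _legacy = legacy;
        _newChecks = newChecks.ToDictionary(c => c.CheckId);
    }

    public Task<CheckResult> ExecuteAsync(string checkId, string portfolioId) =>
        _toggles.IsNewImplementationEnabled(checkId) && _newChecks.TryGetValue(checkId, out var check)
            ? check.ExecuteAsync(portfolioId)             // new backend implementation
            : _legacy.ExecuteAsync(checkId, portfolioId); // fall back to the database implementation
}
```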

Agile testing/quality

As mentioned, behaviour-driven development with executable specifications, combined with domain-driven design based on clean architecture principles, gave us the opportunity to verify the functionality automatically at the unit test level. These unit tests cover the most complex part of the implementation: the domain behaviour.

Verifying the main parts of a system with fast-running unit tests is fine, but we also had to ensure that these units were working together as expected. The web service provided to consumers also had to be tested to ensure service availability and expected functionality. To cover this, integration and system tests were also used in addition to unit tests based on the classical test pyramid.

Integration tests were mostly used for automatic testing of infrastructure implementations such as database access. System tests were used to call the running check component to verify continuous functionality from the web service down to the infrastructure.
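
As an example, a system test against the running check component could look roughly like this. The endpoint, payload, environment variable and result DTO are assumptions for illustration.

```csharp
using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Xunit;

// Illustrative result DTO matching the assumed web service response.
public sealed record CheckResultDto(string CheckId, string Status);

// Illustrative system test: calls the deployed check component over HTTP to verify that the
// web service, domain logic and infrastructure are wired together correctly.
public class CheckServiceSystemTests
{
    private static readonly HttpClient Client = new()
    {
        BaseAddress = new Uri(Environment.GetEnvironmentVariable("CHECK_SERVICE_URL") ?? "https://localhost:5001/")
    };

    [Fact]
    public async Task CheckPortfolio_ReturnsResultsForKnownTestPortfolio()
    {
        var response = await Client.PostAsJsonAsync("api/checks/portfolio", new { portfolioId = "TEST-0001" });

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
        var results = await response.Content.ReadFromJsonAsync<List<CheckResultDto>>();
        Assert.NotNull(results);
        Assert.NotEmpty(results);
    }
}
```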

Overall, the reliability of the new implementation improved significantly compared to the previous database-centred solution due to better testability. This was reflected in less maintenance work, which gave us more time to work on implementing new business functionality.

Runtime performance

The runtime performance of the new implementation was critical, not least because of the time-out issues in the existing implementation. Performance aspects were therefore addressed in the system architecture at multiple levels:

  • Only one call to external components for each check component web service call.
  • Calculate and collect only the data really needed for checks to be executed.
  • Keep data fetching under the control of the check component (not service consumer).
  • Parallelise execution where possible, e.g. execution of multiple checks.
  • Keep options for further improvements if necessary.
  • Provide a batch request for consumers to check multiple portfolios in one web service call.

Testing and monitoring also addressed system performance accordingly:

  • Use of automated performance tests to detect issues.
  • Performance measurement under real conditions in test and production environments.
  • Performance visualisation including resource consumption and system health.

Some of the points above may sound obvious and quite simple, but the complexity lies in the details. For example, it’s not possible to parallelise all the work or calls to other systems, since a dependency exists between them and the result is needed to proceed with the next steps. Calculating and collecting only the required data also depends on check configurations. These define the checks to be executed, which themselves define the data and calculations needed, and the calculations also define the required data to be fetched. The workflow of data collection, value calculations and check execution is therefore reversed to identify the aggregates to be fetched first. Finally, an automated performance test with realistic scenarios also takes time to implement and test.
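
A simplified sketch of these two ideas – reversing the workflow to determine the data to fetch, and running independent checks in parallel – is shown below. The interfaces and names are assumptions for illustration; real dependencies between calls prevent parallelising everything.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Simplified sketch: the configured checks define which aggregates and calculations are
// needed, the data is collected once, and the independent checks then run in parallel.
public record CheckResult(string CheckId, bool Passed);
public record PortfolioData(string PortfolioId, IReadOnlyDictionary<string, object> Aggregates);

public interface ICheck
{
    string CheckId { get; }
    IReadOnlyCollection<string> RequiredAggregates { get; } // e.g. "Positions", "MasterData"
    Task<CheckResult> ExecuteAsync(PortfolioData data);
}

public interface IDataCollector
{
    Task<PortfolioData> CollectAsync(string portfolioId, IReadOnlyCollection<string> aggregates);
}

public class CheckRunner
{
    private readonly IDataCollector _dataCollector;

    public CheckRunner(IDataCollector dataCollector) => _dataCollector = dataCollector;

    public async Task<IReadOnlyList<CheckResult>> RunAsync(string portfolioId, IReadOnlyList<ICheck> configuredChecks)
    {
        // Reverse the workflow: derive the required aggregates from the configured checks first.
        var requiredAggregates = configuredChecks
            .SelectMany(check => check.RequiredAggregates)
            .Distinct()
            .ToList();

        // Collect the data once, under the control of the check component.
        var data = await _dataCollector.CollectAsync(portfolioId, requiredAggregates);

        // Execute the independent checks in parallel on the collected data.
        return await Task.WhenAll(configuredChecks.Select(check => check.ExecuteAsync(data)));
    }
}
```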

With these actions, the requirements related to single-portfolio check execution times could be met without implementing any of the options for further improvements. This means that the main performance-related goal was successfully met. The execution time for batch portfolio processing also met expectations. Batch processing is usually a major concern for backend-based data processing compared to database-centred processing. In this case, these concerns were unfounded, since the check execution time was comparable to that of the previous implementation in the database. To a certain degree, this was surprising, since fetching the data in the backend, executing the checks and writing the results back to the database sounds slower than processing the data in the database directly. However, querying the database with complex joins multiple times for every check in a sequential execution as part of a stored procedure seems to negate the expected performance benefit of a database-centred solution.


Implementation performance

The new solution resulted in much better predictability of effort and complexity in implementing new checks or adapting existing checks. Thus, it was possible to complete the implementation of a new check in one sprint, mainly due to:

  • The new system architecture, including a higher re-use of existing functionality.
  • Better testability due to separation of different aspects.
  • Preparatory work, including early business involvement to define detailed acceptance criteria.

Overall, the goal of providing new checks with lower effort and higher reliability was also met.

Clean-up

Feature toggles and fall-back solutions involve extra effort and result in some additional and redundant code. The previous implementation, with lots of views and stored procedures in the database, also needs to be cleaned up. It’s important to do this clean-up as quickly as possible and invest the required effort. Having two implementations in parallel increases complexity and maintenance costs and should be avoided.

There’s sometimes a fear of deleting old code, whether because of losing the fall-back option or because some old, unmigrated functionality might still be needed. We addressed this point with a rule based on production releases: if a new check had been in production for one release without needing to be disabled via its feature toggle, the old functionality and the feature toggle were removed in the next release. This gave us clear rules for removing old functionality and made the clean-up iterative as well. Also keep in mind that the old code remains available in the version control system, so nothing is lost.

Rule engine

At the start of the journey, we discussed where the checks should be implemented. Should we use a dedicated rule engine, implement the checks directly in the C# backend or use a rule engine library for C# as a compromise?

Based on the defined steps of check configuration, data collection, value calculation and check execution, the check execution step is usually the smallest one. For simple cases, the check itself is just an IF statement to check if a calculated value is below or above a given value. Most of the effort lies in data fetching, when other systems must provide additional data and interfaces must be built or extended. We also had the requirement for batch processing functionality, which should be possible with high performance.
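
For illustration, such a simple check could look roughly like this (the names and the concrete quota are assumptions; the real effort lies in the configuration, data collection and calculation around it):

```csharp
// Illustrative example of a simple threshold check: the check logic itself is essentially
// a single comparison of a calculated value against a configured target value.
public record CheckResult(string CheckId, bool Passed, string? Reason);

public class MaxEquityQuotaCheck
{
    public string CheckId => "MAX_EQUITY_QUOTA"; // illustrative identifier

    public CheckResult Execute(decimal calculatedEquityQuota, decimal configuredMaximum)
    {
        if (calculatedEquityQuota <= configuredMaximum)
        {
            return new CheckResult(CheckId, Passed: true, Reason: null);
        }

        return new CheckResult(CheckId, Passed: false,
            Reason: $"Equity quota {calculatedEquityQuota} exceeds the configured maximum of {configuredMaximum}.");
    }
}
```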

After a proof of concept with a C# rule engine, we decided on the easiest, most flexible and best-performing option: implementing the checks directly in the C# backend. Since a dedicated rule engine containing complex country-based regulatory rules was already in place, we kept these rules and called them as an external check implementation. As these rules change from time to time and need to be reviewed, the rule engine with its graphical representation of the rule tree is a good fit for them. For the other checks, which are straightforward and more stable thanks to configured or pre-defined target values, the chosen solution fits perfectly.

Summary

This article has provided an overview of the approach chosen for modernising an existing software component used in production. We looked at the initial situation and the target architecture as well as the process for achieving the goal. Important success factors were an understanding of the business domain, close cooperation between interdisciplinary teams and experience in modernising applications. Other success factors were an iterative approach for fast feedback cycles, domain-driven design for a ubiquitous language, and behaviour-driven development to provide executable living documentation. Quality was ensured with fall-back strategies, agile testing and a focus on runtime performance. Finally, the topics of implementation performance, clean-up and the rule engine were also addressed.

Marcel Stalder
Contact for Switzerland

Principal Consultant

Marcel Stalder is a Lead Software Architect with extensive experience in software engineering and solution architecture for enterprise applications in .NET and Java. His focus areas are domain-driven design, legacy transformation and application integration in demanding customer projects.
 
