Continuous Architecture: Turning the Database Inside-Out to Improve Transformation Decision-Making

Jul 18, 2018
Written by
Nick Reed


In today’s business landscape, effective transformation is critical for enterprises to stay competitive.  By definition, transformation happens over time.

Enterprises (or some subsection thereof) have a current state which needs to change for the better.  That change – however small or large – results in a different future state. With the widespread adoption of Agile working practices and DevOps-based continuous delivery, these changes can be very small and very frequent.

With ever-accelerating changes in the market and the imperative for enterprises to become more adaptive, the desired future state changes more frequently too. In other words, as soon as you start travelling from current state A to desired future state B, B has already changed to a new desired future state C.

On top of that, A, B, and C are probably not enterprise-wide either. This means there are multiple future states at different levels of detail and scope, which are all constantly evolving in parallel. There are also interdependencies between these future states.

To add to the complexity, digitization of the enterprise makes real-time operational data and KPIs available to inform architectural decision-making. For example, the number of incidents affecting a service, the capacity of an API, and the vulnerability of a technology component may all contribute to architectural decisions.

Creating and managing stateful architecture

Putting all this together, it’s not hard to see why traditional architecture methods no longer suffice. The days of “boiling the ocean” (modeling the entire enterprise in the hope of answering any question that arises) are long gone. Modeling all current and future states, as well as roadmaps, and manually keeping them up to date with every small, frequent change is simply too much overhead. This is especially true if you also need to keep a disparate community of stakeholders informed, in a timely manner, of the parts they care about.

The challenge for modern architecture, therefore, is how to manage a “stateful” architecture, where you can reliably keep track of the current and future states of the architecture with all this change going on in multiple parallel streams.

A lean approach to architecture – often described as “Just Enough, Just in Time” – is essential to minimize time-to-value and to deliver frequent, iterative return on investment in EA.

Architecture frameworks have traditionally focused on sets of stakeholders and the viewpoints they need at different levels of detail, covering all of the (traditional) architectural domains. With more modern, lean approaches to architecture, we don’t want to cover all the domains across the entire organization (“boiling the ocean”), but rather take an adaptive approach that enables specific stakeholders to get the specific views they need in real time. Organizations also need an architecture solution that is flexible enough to accommodate new stakeholders and new views as they are needed.

Fortunately, the world of social media has been grappling with similar challenges for some time and has come up with some pretty cool technology to deal with it.

Unbundling the architecture

LinkedIn needed to deal with a huge number of incoming user updates (“writes”) and a huge number of users consuming updates about connections and topics they follow (“reads”). The problem of maintaining a current state of all the data that was valid for everyone forced LinkedIn to think differently about how to keep things in sync.

They came up with two key innovations that solved their problem and enabled a whole new approach to system architecture:

  1. Record all changes in the data as a stream of ‘facts’ that do not change over time. In other words, all changes are logged as an “immutable event stream.” The big advantage of this approach is that it preserves the history of all data. There is a distinct notion of time (an entity has multiple values over time), as opposed to a traditional database, where you typically have only the latest value. When you need to query the data (or, more likely, a particular subset of it), you derive a ‘view’ from the data set by processing all the changes that have occurred up to the current moment (or you can derive a view for a historical snapshot by processing changes only up to a certain moment in time). We call these ‘materialized views’ of the data. These materialized views are usually persistent and are easily kept in sync by incrementally applying new facts as they are written to the stream.
  2. Use the event stream log as a buffer between systems. A producer can write data without knowing who consumes it: one consumer, no consumers, or many. The producer does not know those systems; it knows only the log. This decouples systems in both time and space: if a consumer system crashes or runs slowly, it does not affect the producer.
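To make the two ideas above concrete, here is a minimal sketch in Python. It is illustrative only: the classes and example entities are made up for this post, not BiZZdesign’s or LinkedIn’s implementation. Events are immutable facts appended to a log, a materialized view is derived by replaying those facts, and a historical snapshot is simply a replay that stops at an earlier moment in time. The producer only ever writes to the log; it never needs to know which views exist.

```python
# Minimal sketch of the two ideas above (illustrative only; the classes and
# entity names are made up, not BiZZdesign's or LinkedIn's implementation).

from dataclasses import dataclass
from typing import Any, Optional


@dataclass(frozen=True)  # frozen: an event is an immutable fact
class Event:
    timestamp: int
    entity: str
    attribute: str
    value: Any


class EventLog:
    """Append-only log: producers write here without knowing the consumers."""

    def __init__(self) -> None:
        self._events: list[Event] = []

    def append(self, event: Event) -> None:
        self._events.append(event)

    def read(self, up_to: Optional[int] = None):
        """Yield all facts, optionally only up to a point in time."""
        for event in self._events:
            if up_to is None or event.timestamp <= up_to:
                yield event


class LatestValueView:
    """A materialized view: the latest value of every attribute per entity."""

    def __init__(self) -> None:
        self.state: dict[tuple[str, str], Any] = {}

    def apply(self, event: Event) -> None:
        self.state[(event.entity, event.attribute)] = event.value


# The producer only knows the log; it never knows which views will consume it.
log = EventLog()
log.append(Event(1, "CRM Service", "status", "live"))
log.append(Event(2, "CRM Service", "owner", "Sales IT"))
log.append(Event(3, "CRM Service", "status", "deprecated"))

# A consumer derives the current view by replaying the log; a historical
# snapshot is simply a replay that stops at an earlier moment in time.
current = LatestValueView()
for event in log.read():
    current.apply(event)

snapshot = LatestValueView()
for event in log.read(up_to=2):
    snapshot.apply(event)

print(current.state[("CRM Service", "status")])   # deprecated
print(snapshot.state[("CRM Service", "status")])  # live
```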

By shifting the focus from the current state of the data set to the immutable stream of changes made to it, the very concept of change becomes central to the system.

You can build as many views as you like for as many purposes as you need, using any technology that best fits the purpose, such as a document-oriented database for search, a graph database for network analysis, or a good old relational database for reporting and pivot table analysis. Think of it as unbundling the database or, in EA terms, unbundling the architecture.
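As a rough illustration of this idea (again with made-up entities), the same stream of facts can feed two purpose-built views: an adjacency view of the kind a graph database would serve for network analysis, and a count table of the kind a relational report would use. In the sketch below, plain Python dictionaries stand in for those technologies.

```python
# Rough illustration: one stream of (source, relation, target) facts feeds two
# purpose-built views. In practice each view could live in a different store
# (graph, document, relational); here plain Python dicts stand in for them.
# The example entities are made up.

from collections import defaultdict

events = [
    ("Web Shop", "uses", "Payment API"),
    ("Web Shop", "uses", "Order Service"),
    ("Order Service", "uses", "Payment API"),
]

# View 1: an adjacency list, the shape a graph database would serve for
# network and impact analysis.
graph_view = defaultdict(set)
for source, _, target in events:
    graph_view[source].add(target)

# View 2: incoming-dependency counts, the shape a relational report or pivot
# table would use.
report_view = defaultdict(int)
for _, _, target in events:
    report_view[target] += 1

print(dict(graph_view))   # {'Web Shop': {'Payment API', 'Order Service'}, ...}
print(dict(report_view))  # {'Payment API': 2, 'Order Service': 1}
```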

This is why we are taking a radical new approach to the architecture of architecture management.

What this means for the BiZZdesign platform

At BiZZdesign, we have built our new-generation product around the event stream in order to deliver a flexible, powerful platform that can adapt faster than traditional systems.

This approach, based on cutting-edge technology, allows us to turn the database inside-out and increase our rate of innovation in new functionality by orders of magnitude. The concept of event streams has wide applicability, from integration with external data sources to personalization of news feeds. The paradigm shift is from request-response to subscribe-notify. This mind-shift has radically changed our application development approach.
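The toy sketch below illustrates that shift with a simple publish/subscribe interface (hypothetical names, not the platform’s actual API): consumers subscribe once and are notified of every new fact as it arrives, instead of repeatedly requesting the current state.

```python
# Toy subscribe-notify sketch (hypothetical names, not the platform's actual
# API): consumers subscribe once and are notified of every new fact, instead
# of repeatedly requesting the current state.

from typing import Callable


class Stream:
    def __init__(self) -> None:
        self._subscribers: list[Callable[[dict], None]] = []

    def subscribe(self, callback: Callable[[dict], None]) -> None:
        self._subscribers.append(callback)

    def publish(self, event: dict) -> None:
        # Every subscriber is notified; the producer never waits for a request.
        for notify in self._subscribers:
            notify(event)


stream = Stream()
stream.subscribe(lambda e: print("dashboard updated:", e))
stream.subscribe(lambda e: print("search index updated:", e))

stream.publish({"entity": "Payment API", "attribute": "capacity", "value": "85%"})
```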

This offers a number of benefits to our customers, including:

  • ‘Forward compatibility’ of our platform: we can add new materialized views in the future, using any subset of the event stream. Materialized views that we add in the future will take all the historical data into account.
  • Full audit trail data that can be used for multiple purposes.
  • Flexibility to use new technologies for new use cases as they emerge: we are already using a range of database technologies to support different use cases, such as search, business intelligence reporting, and our API, with more to come.
  • A modern and flexible approach to integration: BiZZdesign consumes event streams from external data sources and provides event streams for consumption by external systems, allowing flexibility in how data is produced and consumed across the IT ecosystem.

This is why you will continue to see new functionality added to the BiZZdesign platform at an ever-increasing rate.  Stay tuned…

References:

https://www.confluent.io/blog/leveraging-power-database-unbundled/  

https://www.confluent.io/blog/turning-the-database-inside-out-with-apache-samza/  

https://www.confluent.io/blog/stream-data-platform-1/