For the last five years, developers from Thales, GX Software, Luminis and various other organizations have been working on an open source project called Amdatu. The vision behind Amdatu is that software should be built out of re-usable modules. From an architectural point of view, modules exist at many levels of granularity, depending on the overall size and complexity of an application. If designed well, these building blocks have many advantages. They can easily be removed or replaced by a different implementation, which makes evolving an application straightforward, even if it consists of many modules. They also allow you to make an application more robust and scalable by deploying them redundantly and distributing load over them. Finally, they help in securing your applications, responding to different threats and containing security breaches.
Another crucial point is the management of the life cycle of applications. For simple applications, such as mobile applications that a user starts, uses for a short while and then stops again, this is not that complicated. Once you get to applications that dynamically scale in a cloud and can be updated without downtime, every component you design needs to be aware of both its own life cycle and that of the other components and services it uses in the system.
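To make that concrete, here is a minimal plain-Java sketch of a lifecycle-aware component. The `LifeCycle` interface and class names are hypothetical, chosen for illustration; they are not an actual Amdatu or OSGi API, although OSGi components follow a comparable start/stop contract.

```java
// Hypothetical sketch of a lifecycle-aware component; the LifeCycle
// interface is illustrative, not an actual Amdatu or OSGi API.
interface LifeCycle {
    void start() throws Exception;
    void stop();
}

class CacheComponent implements LifeCycle {
    private volatile boolean running;

    @Override
    public void start() {
        // Acquire resources only once dependencies are available.
        running = true;
    }

    @Override
    public void stop() {
        // Release resources so the component can be replaced or
        // redeployed without restarting the whole application.
        running = false;
    }

    public boolean isRunning() {
        return running;
    }
}
```

The point is that a managing framework, not the component itself, decides when `start` and `stop` are called, which is what makes zero-downtime updates possible.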
All Amdatu code is open source under the Apache License. The Amdatu Foundation (founded in The Netherlands) maintains the Amdatu.org website, which links to all the infrastructure that is in use for this project (mailing lists, code repositories, …). Within this community we have a “labs” department where new and experimental projects start their life, and from which they may or may not graduate to become official projects. People are free to join and contribute!
In the past we also entertained the idea of having commercial components as part of Amdatu, but we recently decided against that because it complicates our open source message. So everything that is Amdatu is open source. We still have commercial components, but those carry a different name.
Over the course of the project we have created quite a few components. Most of them were designed for web-based cloud applications, but many are generic enough to be used in a wide range of applications. A full list can be found on the website, but let’s explore a prototypical application, starting with an architectural blueprint.
A popular way of architecting such applications is with vertical slices that each implement a single feature. Some even call those “microservices” nowadays. Within each slice there are layers for data access, services that capture business logic, and REST endpoints that are typically used by the user interface.
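As a rough illustration of such a slice, here is a plain-Java sketch of the data-access and business-logic layers. All names are hypothetical, and the REST endpoint layer is left out for brevity; the point is that each layer depends only on the interface below it.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of one vertical "slice" (all names hypothetical).
class OrderSlice {
    // Data-access layer: hides the storage technology behind an interface.
    interface OrderStore {
        void save(String id, double amount);
        Double find(String id);
    }

    static class InMemoryOrderStore implements OrderStore {
        private final Map<String, Double> orders = new HashMap<>();
        public void save(String id, double amount) { orders.put(id, amount); }
        public Double find(String id) { return orders.get(id); }
    }

    // Business-logic layer: depends only on the OrderStore interface,
    // so the storage implementation can be swapped per slice.
    static class OrderService {
        private final OrderStore store;
        OrderService(OrderStore store) { this.store = store; }

        public void placeOrder(String id, double amount) {
            if (amount <= 0) {
                throw new IllegalArgumentException("amount must be positive");
            }
            store.save(id, amount);
        }

        public Double lookup(String id) { return store.find(id); }
    }
}
```

Because the slice owns its own store interface, replacing the in-memory implementation with a relational or document-backed one does not touch the business logic.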
For data access there are several options, and for each slice you should consider which one fits best. For more traditional information models, we support JPA to interface with all kinds of relational databases. Of course we offer this in a service-oriented way and support annotation-based managed transactions, JTA and schema evolution. If you need a more document-oriented store, MongoDB is a good choice. We also have connectors for the different cloud-based BLOB stores, such as Amazon’s S3. If your application needs to capture more complex data that evolves over time, then the “Information Grid” (see below) is something to consider.
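To show what "relational versus document-oriented" means for a slice's data model, here is a plain-Java illustration with no real database driver involved; the field names and values are made up.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Plain-Java illustration (no real database driver): the same order
// modelled as a flat relational-style row versus a nested document.
class StorageStyles {
    // Relational style: a fixed schema, one row per table; related data
    // such as line items would live in a separate table.
    static Map<String, Object> relationalRow() {
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("order_id", 42);
        row.put("customer_id", 7);
        return row;
    }

    // Document style: the whole aggregate is stored as one nested
    // document, which makes it easier to add fields as the model evolves.
    static Map<String, Object> document() {
        Map<String, Object> doc = new LinkedHashMap<>();
        doc.put("orderId", 42);
        doc.put("customer", Map.of("id", 7, "name", "Alice"));
        doc.put("lines", List.of(
                Map.of("sku", "A-1", "qty", 2),
                Map.of("sku", "B-9", "qty", 1)));
        return doc;
    }
}
```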
For the business logic you will probably write most of the code yourself. After all, since we provide everything else, this is about the only part of the application that you still need to design. Even so, at this layer we provide many useful support components that handle configuration, task scheduling, remote services and multitenancy. We also have components for sending e-mail notifications, templating and generic validation of objects.
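As an example of the kind of task scheduling such support components cover, here is a plain-JDK sketch using `ScheduledExecutorService`. Amdatu’s own scheduling component offers this in a service-oriented way; the class and method names below are hypothetical, but the underlying idea is similar.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Plain-JDK sketch of a recurring task; a scheduling service component
// would let you declare this instead of managing the executor yourself.
class NotificationJob {
    // Runs a task at a fixed rate until it has executed 'times' times,
    // then shuts the scheduler down and reports how often it ran.
    static int runScheduled(long periodMillis, int times) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch done = new CountDownLatch(times);
        scheduler.scheduleAtFixedRate(done::countDown, 0, periodMillis, TimeUnit.MILLISECONDS);
        try {
            done.await();  // block until the task has run 'times' times
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        scheduler.shutdownNow();
        return times - (int) done.getCount();  // number of completed runs
    }
}
```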
Finally, we have a full-featured web server that can host content and provide REST endpoints based on JAX-RS annotations. Documentation is built in and can be browsed interactively, which helps front-end developers since they can pretty much discover the REST API themselves (not that I’m advocating that the API should not be designed in close cooperation with them). In this layer we also provide various components that help implement security.
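Since the JAX-RS API itself is not part of the JDK, here is a plain-JDK stand-in (using `com.sun.net.httpserver`) for the kind of REST endpoint described above. The `/status` path and JSON body are made up for illustration; with Amdatu you would instead annotate a class with JAX-RS `@Path` and `@GET` and register it as a service.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Plain-JDK stand-in for a REST endpoint (path and payload are made up).
class StatusEndpoint {
    // Starts a server on an ephemeral port and serves a JSON status document.
    static HttpServer start() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/status", exchange -> {
            byte[] body = "{\"status\":\"ok\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        return server;
    }
}
```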
All of this can be hosted using the “Cloud RTI”, a commercially licensed component which we currently offer as a hosted service. It integrates with Bamboo and allows you to set up a full cloud deployment with development, testing, acceptance and release environments that can be deployed and promoted automatically. It offers dynamic scaling, monitoring and health checks of all aspects of your application, as well as centralized logging that aggregates all logs from your cluster and even from front-end applications (web-based or native) if you want. Cloud RTI leverages technologies like Kubernetes and Docker to create an environment with managed containers. This also means we can embed code written in other languages, as long as it can be deployed in a Docker container and hooks up to our logging, monitoring and health-check APIs.
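For a sense of what a container health check looks like at the Kubernetes level, here is a standard liveness probe fragment. The `/health` path and port are illustrative assumptions, not Cloud RTI specifics; the `livenessProbe` syntax itself is standard Kubernetes.

```yaml
# Hypothetical Kubernetes liveness probe for one application container;
# the /health path and port are illustrative, not Cloud RTI specifics.
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 10
```

Kubernetes restarts a container whose probe fails repeatedly, which is the mechanism behind the automated health checks mentioned above.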
If you have an application that needs to capture more complex data that evolves over time, and you want a fully managed environment where you can model and version that data, then a commercially licensed component is available called the “Information Grid”. This really gives you a head start and supports things like integrated search based on the semantics of your data model, schema transformations, and connectors to many different types of storage, including Apache HBase, OrientDB, ZooKeeper, MongoDB and many more. It supports CQRS and Lambda architectures and allows you to easily create different “views” of your historical data.
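To illustrate the CQRS idea mentioned above, here is a minimal plain-Java sketch; all names are hypothetical and the event log is just an in-memory list. The write side only appends immutable events, and the read side derives a “view” by replaying them, which is also what makes multiple views of historical data possible.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal CQRS sketch (all names hypothetical): the write side appends
// immutable events; read-side "views" are derived by replaying them.
class AccountEventStore {
    record Event(String accountId, long delta) {}

    private final List<Event> log = new ArrayList<>();

    // Command side: validate, then append an event. Never mutate in place.
    public void credit(String accountId, long amount) {
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        log.add(new Event(accountId, amount));
    }

    public void debit(String accountId, long amount) {
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        log.add(new Event(accountId, -amount));
    }

    // Query side: a balance view computed from the full event history.
    public long balanceView(String accountId) {
        return log.stream()
                .filter(e -> e.accountId().equals(accountId))
                .mapToLong(Event::delta)
                .sum();
    }
}
```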
Apart from the components that are available as part of Amdatu, we also have a full-featured implementation of Attribute-Based Access Control (ABAC) that can be integrated into an application. This too is a commercial component that we can license to customers or use in our own applications. On top of that, as part of our INAETICS involvement, we are currently developing more extensive and dynamic security components that deal with short-lived certificates and with applications that must respond to different (cyber)threat levels.
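To show what distinguishes ABAC from role-based access control, here is a hedged plain-Java sketch; the class and attribute names are hypothetical and unrelated to our commercial implementation. A policy is a predicate over subject, resource and action attributes rather than a fixed role check.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Illustrative ABAC check (names hypothetical): rules evaluate attributes
// of the subject, the resource and the requested action.
class AbacPolicy {
    record Request(Map<String, String> subject,
                   Map<String, String> resource,
                   String action) {}

    private final List<Predicate<Request>> rules = new ArrayList<>();

    public AbacPolicy permit(Predicate<Request> rule) {
        rules.add(rule);
        return this;
    }

    // Deny by default: access is granted only if some rule permits it.
    public boolean isAllowed(Request request) {
        return rules.stream().anyMatch(r -> r.test(request));
    }
}
```

A rule might, for example, permit "read" only when the subject's department attribute matches the resource's department attribute, something a plain role check cannot express.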
The first big project we donated to the Apache community was Apache ACE. It has been a top-level project for a couple of years now, and we are currently looking at investing more time to further extend its capabilities for device provisioning.
So why are we doing all of this? The short answer is: because we want to do more with less. That means we want to build better applications, with higher quality, in less time. To do that we need to avoid some common pitfalls in our industry and build applications based on our opinionated architecture, using proven components that we understand well and can re-use. That way we can focus our energy on the domain-specific aspects of each application, without having to worry about a lot of the “plumbing” that is also crucial to the end result but is not unique to each application.