
Protected variations in software engineering explained and extended beyond the common usages

While digging through some standard programming principles like low coupling and high cohesion, I stumbled upon the fact that they are part of a larger set of principles called the "GRASP" principles. After reading a bit about them, they seem just as important, if not more important, than the "SOLID" principles.

And one particular principle from that series stuck with me: protected variations. According to this principle, variations and changes in parts of the application should be contained only in those parts and should not trigger further changes in the rest of the application. In general terms, points of inflection should be established between the parts that change and the rest of the application; these points act like boundaries that stop changes from propagating further.

For example, one of the most common parts that might change in an application is the data access and storage method. Instead of using direct SQL to read and write to a database, an ORM framework might be adopted. If the application is badly coded and the database is accessed directly in every place in the code where it is needed, then changing the database or the data access method triggers additional changes in every place in the code that does database operations. There might be tens of places that need to be changed, so there is a very good chance that some of them will be overlooked and will still use the old data access method. In some circumstances this might even break existing functionality: while replacing the data access logic inside some functionality, a part of the code responsible for the actual functionality might accidentally get deleted, causing it to malfunction. Like in the example below.

Below is the code before the changes:
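(A minimal sketch of such code; the order-processing scenario, the Order class and the connectionString field are hypothetical stand-ins, assuming plain ADO.NET with SqlConnection/SqlCommand.)

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
    public string Status { get; set; }
}

public class OrderService
{
    private readonly string connectionString = "...";

    public void ProcessOrders()
    {
        // data access code mixed directly into the functionality
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            var orders = new List<Order>();
            using (var command = new SqlCommand(
                "SELECT Id, Total FROM Orders WHERE Status = 'Pending'", connection))
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    orders.Add(new Order { Id = reader.GetInt32(0), Total = reader.GetDecimal(1) });
            }

            // functionality code block1: apply a discount to large orders
            foreach (var order in orders)
                if (order.Total > 100m)
                    order.Total *= 0.9m;

            // functionality code block2: persist the processed orders
            foreach (var order in orders)
            {
                using (var update = new SqlCommand(
                    "UPDATE Orders SET Status = 'Processed', Total = @total WHERE Id = @id",
                    connection))
                {
                    update.Parameters.AddWithValue("@total", order.Total);
                    update.Parameters.AddWithValue("@id", order.Id);
                    update.ExecuteNonQuery();
                }
            }
        }
    }
}
```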


And below is the code after the changes:
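(Again a sketch, assuming the team adopted an Entity Framework style ORM; OrdersContext is a hypothetical DbContext.)

```csharp
using System.Linq;

public class OrderService
{
    public void ProcessOrders()
    {
        using (var context = new OrdersContext()) // hypothetical EF-style DbContext
        {
            var orders = context.Orders
                .Where(o => o.Status == "Pending")
                .ToList();

            // "functionality code block1" (the discount logic) is gone:
            // it was accidentally deleted while the surrounding data access
            // code was being replaced with the ORM calls

            // functionality code block2: persist the processed orders
            foreach (var order in orders)
                order.Status = "Processed";

            context.SaveChanges();
        }
    }
}
```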



Notice how the "functionality code block1" got accidentally removed when changing the data access method. Depending on how important it was, it might cause the functionality to stop working properly altogether. So this is a case where additional changes in the code are triggered by an initial change. Of course, these changes can be reflected in the behavior of the functionality too.

So what should have been done in that case? The data access logic should have been separated from the rest of the functionality. And how can that be achieved? Using the standard approach: dependency injection and interfaces wrapped over the data access code. So in a way, this principle is more basic and more fundamental than dependency injection and inversion of control.

A better approach is shown below:
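(A sketch reusing the hypothetical Order class from above; the IOrderRepository interface and its injection form the point of inflection.)

```csharp
using System.Collections.Generic;

public interface IOrderRepository
{
    List<Order> GetPendingOrders();
    void SaveProcessedOrders(List<Order> orders);
}

public class OrderService
{
    private readonly IOrderRepository repository;

    // the data access implementation is injected; switching from raw SQL
    // to an ORM now only means writing a new IOrderRepository implementation,
    // and the functionality code below is never touched
    public OrderService(IOrderRepository repository)
    {
        this.repository = repository;
    }

    public void ProcessOrders()
    {
        var orders = repository.GetPendingOrders();

        // functionality code block1: apply a discount to large orders
        foreach (var order in orders)
            if (order.Total > 100m)
                order.Total *= 0.9m;

        // functionality code block2: persist the processed orders
        repository.SaveProcessedOrders(orders);
    }
}
```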


Another case is when two functionalities are adjacent in the code, in the same scope. When one of them is changed, those changes might trigger additional unexpected changes in the other, adjacent functionality. This is a different type of change, one that happens at a later stage, after development, and not during development like in the example above.

For example, below we have two adjacent functionalities that use the same data:
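(A sketch with hypothetical names: a Case class and a handler where both functionalities read the same shared activeCases variable.)

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Case
{
    public int Id { get; set; }
    public bool IsActive { get; set; }
    public bool IsMerged { get; set; }
}

public class CaseHandler
{
    public void HandleCases(List<Case> cases)
    {
        // shared variable used by both functionalities below
        var activeCases = cases.Where(c => c.IsActive).ToList();

        // functionality 1: generate reports for the active cases
        foreach (var c in activeCases)
            Console.WriteLine($"Report for case {c.Id}");

        // functionality 2: send notifications for the active cases
        foreach (var c in activeCases)
            Console.WriteLine($"Notification for case {c.Id}");
    }
}
```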



Now suppose it is required that the first functionality be changed to work only on merged cases and not on individual cases. A hurried developer might simply change the shared variable used by the two functionalities:
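(Continuing the sketch above; only the shared activeCases filter is touched.)

```csharp
public void HandleCases(List<Case> cases)
{
    // the shared variable was changed for the sake of functionality 1...
    var activeCases = cases.Where(c => c.IsActive && c.IsMerged).ToList();

    // functionality 1 now works only on merged cases, as requested
    foreach (var c in activeCases)
        Console.WriteLine($"Report for case {c.Id}");

    // ...but functionality 2 silently changed behavior too: notifications
    // are no longer sent for active cases that were not merged
    foreach (var c in activeCases)
        Console.WriteLine($"Notification for case {c.Id}");
}
```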


Now the second functionality has different behavior too. This can be called a cascading functional change.

An example of how the above code should have looked initially:
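(A sketch reusing the hypothetical Case class; each functionality hides behind its own interface and selects its own data, so there is no shared variable.)

```csharp
public interface IFunctionality1
{
    void GenerateReports(List<Case> cases);
}

public interface IFunctionality2
{
    void SendNotifications(List<Case> cases);
}

public class ReportingFunctionality : IFunctionality1
{
    public void GenerateReports(List<Case> cases)
    {
        // selects its own data instead of relying on a shared variable
        foreach (var c in cases.Where(c => c.IsActive))
            Console.WriteLine($"Report for case {c.Id}");
    }
}

public class NotificationFunctionality : IFunctionality2
{
    public void SendNotifications(List<Case> cases)
    {
        foreach (var c in cases.Where(c => c.IsActive))
            Console.WriteLine($"Notification for case {c.Id}");
    }
}

public class CaseHandler
{
    private readonly IFunctionality1 functionality1;
    private readonly IFunctionality2 functionality2;

    public CaseHandler(IFunctionality1 f1, IFunctionality2 f2)
    {
        functionality1 = f1;
        functionality2 = f2;
    }

    public void HandleCases(List<Case> cases)
    {
        functionality1.GenerateReports(cases);
        functionality2.SendNotifications(cases);
    }
}
```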



Notice the three parts involved above. First, there is a constant part exposed to the outside: the interfaces, which never change. Second, there is an unstable part that tends to vary and change more often: the code implementing the methods of the interfaces. And there is a third part which is not so obvious, the part that links the two parts mentioned earlier together: the CLR/.NET dynamic method dispatch mechanism. This mechanism selects which method to invoke at runtime, from a list of possible methods, based on the object's actual type.

And here is how that code should have looked after the changes:
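(Only the IFunctionality1 implementation changes; everything else from the sketch above stays exactly the same.)

```csharp
public class ReportingFunctionality : IFunctionality1
{
    public void GenerateReports(List<Case> cases)
    {
        // the only change: reports now cover merged cases only
        foreach (var c in cases.Where(c => c.IsActive && c.IsMerged))
            Console.WriteLine($"Report for case {c.Id}");
    }
}

// NotificationFunctionality and CaseHandler are untouched, so the
// second functionality keeps its original behavior.
```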


As you can see, the second functionality is now protected from changes to the first functionality. The two interfaces "IFunctionality1" and "IFunctionality2" mark the so-called points of inflection which limit the changes.

Furthermore, lately I have been dabbling more with cloud, microservices, distributed systems, containers and container orchestration with Kubernetes. At first I was a bit skeptical that this principle could be applied there, but after looking a bit more in depth I realized I couldn't have been more wrong. The first example that comes to mind is a load balancer with a couple of machines behind it, hosting multiple instances of the same service exposed through the load balancer. When a machine goes down or an instance of the service crashes, the load balancer will just route traffic to the remaining machines and instances. From the outside, applications that depend on that service won't see any change and won't have to do anything. The load balancer acts as a point of inflection between the services behind it and the outside world. And if the load increases, the load balancer or the system that manages it might allocate more instances and more machines, automatically scaling the service horizontally. This will be transparent to everyone; nothing will change and the code will stay the same.

Again, notice the three important parts above. There is a constant part: the IP address or URL of the load balancer, coupled with the exposed API of the services behind it. There is a part that varies: the available set of service instances and the machines they run on. And there is an indirect part between the two: the load balancer itself.

Another case is with microservices. If a system is split into multiple smaller services and one of them goes down, the other microservices might still be up and the system might still be partially usable. It won't be affected as much as if it were one monolithic thing. For example, let's take a monolithic ecommerce website that lists items and suggestions in one single page, in one request. If the suggestions functionality goes down with an error for some reason, the entire items page becomes unavailable until the error is resolved. If the application instead used smaller services, then when the user requested the items page it would just send the page with only the items, and the suggestions could be loaded from a separate service after the page has loaded in the browser. If the suggestions functionality crashes, the user at least gets to see the available items.

Another, not so obvious, case is a cloud-hosted database: it can be scaled dynamically while it is still running, to allocate a faster processor or more storage to it. For the outside users the database is the same: the connection string doesn't change at all and the address of the server stays the same. This is because the database is exposed through an endpoint with a URL address, and what happens behind that endpoint stays there. The database might even be moved to a different address; the DNS record for it just needs to be updated with the new IP address, which is transparent to the applications that use it.

Now imagine a global-scale system like the one Facebook uses, with tens of thousands of servers. At any point in time at least some of them are down or coming back up again, and they need to protect everything in their system from this sort of change. If the website went down for even a couple of seconds every time a server failed, those seconds would add up to a couple of minutes, or who knows, even hours each day. I can't even begin to imagine what kinds of techniques and technologies they use to limit the propagation of these changes.

One notable example for this sort of system, which I kind of had to learn, is consistent hashing. It's a way to extend or shrink a set of servers without triggering additional changes elsewhere, while still distributing requests across them evenly. That, too, is actually a point of inflection in the system.
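Here is a minimal, assumed sketch of the idea (a hash ring with virtual nodes; an illustration, not any particular library's implementation):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

public class ConsistentHashRing
{
    // sorted ring positions -> server, each server appearing at many
    // virtual positions so that keys spread evenly across servers
    private readonly SortedDictionary<uint, string> ring = new SortedDictionary<uint, string>();
    private readonly int virtualNodes;

    public ConsistentHashRing(int virtualNodes = 100)
    {
        this.virtualNodes = virtualNodes;
    }

    public void AddServer(string server)
    {
        for (int i = 0; i < virtualNodes; i++)
            ring[Hash(server + "#" + i)] = server;
    }

    public void RemoveServer(string server)
    {
        for (int i = 0; i < virtualNodes; i++)
            ring.Remove(Hash(server + "#" + i));
    }

    // a request key maps to the first server clockwise from its hash;
    // adding or removing a server only remaps the keys next to its
    // ring positions, and the rest of the system sees no change
    public string GetServer(string key)
    {
        uint hash = Hash(key);
        foreach (var entry in ring)
            if (entry.Key >= hash)
                return entry.Value;
        return ring.First().Value; // wrap around the ring
    }

    private static uint Hash(string input)
    {
        using (var md5 = MD5.Create())
        {
            byte[] bytes = md5.ComputeHash(Encoding.UTF8.GetBytes(input));
            return BitConverter.ToUInt32(bytes, 0);
        }
    }
}
```

Routing a request then becomes something like ring.GetServer(requestId); when a machine is added or removed, only the keys on the ring segments next to its positions move to a different server.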

So, points of inflection should be established between the parts of an application or system: they stop the propagation of changes by separating those parts behind clear, controlled boundaries.

Now there are three important parts to a point of inflection:
- a constant part, exposed to the outside
- an unstable part that changes a lot
- an indirect part, a mediator, that links the constant and unstable parts together

Related to changes/variations, these can be grouped into three main categories:
- code changes: when changing some code, additional changes to adjacent code might be triggered
- functional changes: after a functionality is changed or updated, other changes to adjacent functionalities might be triggered, some of them unintentional and with bad consequences
- runtime changes: while services or applications are running, they might change their state, or the load on them might change, making them react differently to requests

And each of those categories has different ways to limit changes and different kinds of "points of inflection". I only enumerated some of them, but there are probably many others, since I just started learning about this. One of the reasons for this post is to become more aware of them, so that when a new one is found it can be recognized and reused in other places when needed.

In the end, I think this really is a powerful concept and principle that developers should be more aware of. Honestly, I am not really done with this; I have just started to explore it.
