
Some things developers often blindly apply and follow that are not always good

This is probably one of the most controversial things I have written so far, but I am just tired of hearing about and discussing these topics. Other developers I know share part of my feelings. I would rather hear more about how people built things, overcame challenges, or what new and interesting ideas and concepts they implemented. Those things are really interesting and innovative, unlike hearing the same theoretical things over and over again. I can read and learn those things from a hundred sources on the internet.

Firstly, one of the most discussed and promoted things is agile/scrum development. I think I have been through 5-8 workshops about the agile development methodology, and each time some things differed. There is no 100% standard approach to this. Everyone uses their own version of the methodology and seems to argue a lot that their approach is right and everyone else is doing it wrong. If you go to an interview, this will be one of the first 10 things asked. I am so sick of hearing about agile that I can't even pay attention to these things anymore.

There are just too many rules and things to follow when doing agile. From what I have seen, things often change during a sprint and you end up doing something completely different from what was initially planned and refined. All that time spent splitting stories into nice tasks, or splitting an epic into stories, goes to waste, including the time spent estimating them. You have no control over this; maybe the end users of the application you are developing don't have a good idea of what they need and change their minds a lot.

More complex features or requirements are a real pain to estimate, or to split into smaller development parts. You don't really know what the end implementation will look like until you finish implementing it. But a lot of things take more than a sprint to develop, so you have an epic with stories inside it. When everything in the epic is done, the actual work does not match the stories in that epic at all. This seems to work well only with user interface development, where you already have a pretty good idea of how to split the user interface into smaller parts and components, each assigned to a specific story. But complex back end features often have no obvious way to be split into components from the very beginning. Some things are better developed as one big thing.

And some things you can't really estimate properly or split into stories until you have a technical meeting in which you design the structure and overall implementation of that feature. You can't just come up with random superficial stories and then assign story points based on some initial impression of them.

Furthermore, sprints are often pretty short, just 2 weeks. It's really hard to properly fit stories into 2-week sprints, and too much time is spent on meetings like refinements, planning and retrospectives. Sometimes you just don't have any way to fit in the stories without leaving gaps or exceeding the available capacity for that sprint. Then you spend more time discussing how to fit everything in. All of this organization every 2 weeks takes a lot of effort and time.

For some things, scrum and agile work really well. But for other things, they don't work well at all. Still, everyone blindly praises agile and scrum like they are the holy grail of programming that will solve all of your problems. And you are almost forced to use them to seem "cool"/"modern" and fit in with the rest of the community. The talk "One Hacker Way: Rational alternative of Agile" by Erik Meijer on YouTube explains why agile is not so good. Meijer worked for Microsoft designing async programming, and he later worked for Facebook and contributed greatly to the idea of reactive programming.

Secondly, another discussed and overemphasized topic is design patterns, especially the Gang of Four patterns. Twenty or more years ago, when they were first described, they might have been really useful, but now a lot of them are outdated, or you come up with similar ideas with better names and more specific functionality, better suited to the actual business logic. In the end they are just naming standardizations for a series of commonly used approaches to specific problems. But a lot of the time they are too generic and don't carry enough information about the actual context and business logic. So even if you use them, you will have other names for them, and often, with good object oriented knowledge, they can easily be reinvented without even knowing about them.

Take for example the "chain of responsibility" or "visitor" patterns. If you have ever worked with modern web frameworks, when a request is received it is processed by a series of middleware. Each of them might be considered something like a handler or a visitor, but no one really uses those names because they are too generic. In this context, middleware is a much better name because it carries deeper meaning: middleware is just the logic executed between when a request is received and when a response is prepared for that request. The logic stands in the middle; that's why it's called middleware.
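To make the comparison concrete, here is a minimal sketch of a middleware chain. The names (`auth_middleware`, `build_pipeline`, etc.) are hypothetical and not from any real framework; the point is just that each middleware receives the request plus a "next" callable, which is exactly the chain-of-responsibility idea under a more domain-specific name.

```python
# Minimal middleware chain sketch (hypothetical names, no real framework's API).
# Each middleware gets the request and the next handler in the chain.

def logging_middleware(request, next_handler):
    request["log"] = request.get("log", []) + ["received"]
    return next_handler(request)

def auth_middleware(request, next_handler):
    # Short-circuit the chain if there is no authenticated user.
    if not request.get("user"):
        return {"status": 401}
    return next_handler(request)

def handler(request):
    return {"status": 200, "log": request["log"]}

def build_pipeline(middlewares, final_handler):
    # Fold the list right-to-left so the first middleware runs first.
    pipeline = final_handler
    for mw in reversed(middlewares):
        pipeline = (lambda m, nxt: lambda req: m(req, nxt))(mw, pipeline)
    return pipeline

app = build_pipeline([logging_middleware, auth_middleware], handler)
print(app({"user": "alice"}))  # {'status': 200, 'log': ['received']}
print(app({}))                 # {'status': 401}
```

Nobody writing this would reach for the name "ConcreteHandler"; the framework vocabulary already says what each piece does.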

Or take the series of design patterns the Gang of Four calls creational patterns. With the advent of dependency injection, these patterns are mostly absorbed into the dependency injection mechanism or replaced by it. The dependency injection mechanism is now responsible for providing/creating instances of classes. Most of the time, we don't need factories anymore to create objects. And if you need to pass specific arguments when creating an object, you can defer that and pass the arguments later in an "init" method if you really need them, or pass the arguments directly to the methods exposed by that class. Also, with dependency injection, the singleton pattern is pretty much obsolete: a singleton is just an instance of a class whose lifetime scope is the entire duration of the application.
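A toy container makes the singleton point visible. This is an illustrative sketch, not any real DI library's API: "singleton" is nothing more than a registration flag that caches the first instance for the container's lifetime.

```python
# Toy dependency-injection container (illustrative only). A "singleton" is
# just a registration whose instance is cached for the container's lifetime.

class Container:
    def __init__(self):
        self._factories = {}
        self._singletons = {}

    def register(self, key, factory, singleton=False):
        self._factories[key] = (factory, singleton)

    def resolve(self, key):
        factory, singleton = self._factories[key]
        if singleton:
            if key not in self._singletons:
                self._singletons[key] = factory(self)
            return self._singletons[key]
        return factory(self)  # transient: new instance every time

class Database:
    pass

class UserService:
    def __init__(self, db):
        self.db = db

container = Container()
container.register("db", lambda c: Database(), singleton=True)
container.register("users", lambda c: UserService(c.resolve("db")))

a, b = container.resolve("users"), container.resolve("users")
print(a is b)       # False: the service is transient
print(a.db is b.db) # True: the database behaves like a singleton
```

No `getInstance()` boilerplate, no private constructors; the lifetime is configuration, not a pattern you implement by hand.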

Sometimes patterns are almost syntactic sugar for very simple things. Take for example the "strategy" pattern, which dictates what algorithm to use at runtime. This is just a fancy way of writing a switch statement or an if statement: for each execution branch of that if, a class is created, and then another class is created which selects an execution branch class based on the context or some arguments. Additionally, I have a project at work in which I actually use this pattern a lot, but the "strategy" name is taken and reserved by other components which are part of the business logic. So I can't really use this pattern and name the branches "strategies", because it would become really easy to confuse strategies from the design pattern with strategies from the business logic.
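Here is the strategy pattern reduced to its essence, using made-up shipping rules as the example. In a language with first-class functions, a dict of callables often does the same job as a class-per-branch strategy hierarchy:

```python
# "Strategy" as a dict of functions (illustrative shipping rules).
# Selecting the algorithm at runtime is just a dictionary lookup.

def flat_shipping(order_total):
    return 5.0

def free_over_50(order_total):
    return 0.0 if order_total >= 50 else 5.0

shipping_strategies = {
    "flat": flat_shipping,
    "promo": free_over_50,
}

def shipping_cost(strategy_name, order_total):
    return shipping_strategies[strategy_name](order_total)

print(shipping_cost("flat", 80))   # 5.0
print(shipping_cost("promo", 80))  # 0.0
print(shipping_cost("promo", 30))  # 5.0
```

The full pattern (an interface plus one class per branch plus a selector class) buys you something only when the branches carry state or need to be swapped through dependency injection.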

For some patterns we use newer, more modern and widely understood names. For example, the decorator and adapter patterns are nowadays just called "wrappers". In my opinion, with good knowledge of object oriented programming and some practice, it's really easy to invent patterns similar to the ones in the "Gang of Four" book. The only value of these patterns to me is to show students and new developers how you can use object oriented programming to creatively design functionality, so that later they can come up with their own ideas.
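A quick sketch of what everyone just calls a wrapper (the class names here are hypothetical): it exposes the same interface as the wrapped object while adding behavior, which is exactly the decorator pattern without the ceremony of the name.

```python
# A "wrapper" in the everyday sense: same interface, delegated calls,
# extra behavior (retries) layered on top. Hypothetical classes.

class HttpClient:
    def get(self, url):
        return f"response from {url}"

class RetryingHttpClient:
    """Wrapper: delegates to the inner client, retrying on connection errors."""
    def __init__(self, inner, retries=3):
        self.inner = inner
        self.retries = retries

    def get(self, url):
        last_error = None
        for _ in range(self.retries):
            try:
                return self.inner.get(url)
            except ConnectionError as e:
                last_error = e
        raise last_error

client = RetryingHttpClient(HttpClient())
print(client.get("https://example.com"))  # response from https://example.com
```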

Thirdly, another much discussed series of concepts is the SOLID principles. In theory they seem nice and make sense, until you get to the practical part. Honestly, most of the time only the single responsibility principle is used, the "S" from SOLID.

There is a bit of debate here, because what if you need to do a complex operation composed of a series of smaller, simpler operations? You still need to write classes for the smaller operations and then somehow use them together to perform the bigger, complex operation. So you need to write code that uses the classes for the smaller operations together, and this code will live in a bigger class. This big class does all of the smaller operations, because they are required, but it does not do them directly; it works through other intermediate, lower level components.

So in the end you will have a hierarchy of components, with lower level components and higher level components which use the lower level components.
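The hierarchy described above can be sketched like this (the invoice domain and all class names are made up for illustration): two small single-purpose classes, plus a higher-level class whose "single responsibility" is really orchestrating both of them.

```python
# Component hierarchy sketch (hypothetical invoice domain). The low-level
# classes each do one thing; the high-level class performs the complex
# operation purely by delegating to them.

class PriceCalculator:
    def total(self, items):
        return sum(price for _, price in items)

class InvoiceFormatter:
    def format(self, customer, total):
        return f"Invoice for {customer}: {total:.2f}"

class InvoiceService:
    """Higher-level component: 'does everything', but only via delegation."""
    def __init__(self, calculator, formatter):
        self.calculator = calculator
        self.formatter = formatter

    def create_invoice(self, customer, items):
        return self.formatter.format(customer, self.calculator.total(items))

service = InvoiceService(PriceCalculator(), InvoiceFormatter())
print(service.create_invoice("alice", [("book", 12.5), ("pen", 2.5)]))
# Invoice for alice: 15.00
```

Whether `InvoiceService` has "one responsibility" depends entirely on the level of abstraction you choose to look at, which is the ambiguity the text above is pointing at.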

Other people say that the single responsibility principle means a class should have a single reason to change. Except that you can't really predict how and why a class will be changed. Maybe a class implements a functionality used by customers, and the customers are unhappy with that functionality for 10 different reasons; then you need to change that class for all 10 reasons. You can never guess what the end user thinks and really needs all the time.

Another SOLID principle is the open/closed principle: open for extension, closed for modification. It tells us that existing code should not be changed; instead, new code should just be added to the codebase without touching other code at all. Except that you can never really do this. Sometimes customers are unhappy with an existing functionality and want changes to it. What do you do then? Keep the old functionality and create a new, correct one from scratch? Won't this cause code duplication, since the old and new functionalities might be similar? Why should you even keep the old functionality around? It might be so bad that it is a better idea to remove the code or rewrite it directly, just to make sure there is no chance the old, bad behavior is ever executed again.
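For contrast, here is the happy case the principle is usually sold with (a made-up discount example): new behavior arrives as a new class, and the existing `checkout` function never changes. The argument above is that this only helps when the change is genuinely additive; it says nothing about the common case where the *existing* behavior itself is wrong.

```python
# Open/closed in its favorable case (illustrative discount example):
# extension happens by adding a subclass, not by editing checkout().

class Discount:
    def apply(self, total):
        return total  # default: no discount

class PercentDiscount(Discount):
    def __init__(self, percent):
        self.percent = percent

    def apply(self, total):
        return total * (1 - self.percent / 100)

def checkout(total, discount):
    # This function never needs modification when new discounts appear.
    return discount.apply(total)

print(checkout(100, Discount()))           # 100
print(checkout(100, PercentDiscount(10)))  # 90.0
```

The moment a customer says "percent discounts should never drop the total below a minimum fee", you are back to modifying `PercentDiscount`, closed-for-modification or not.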

And there is yet another mostly useless principle, called interface segregation, which tells us that instead of one big interface for a class, we should write smaller interfaces with more specific names and contracts. But this is pretty much the single responsibility principle applied to interfaces.
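The overlap is easy to see in code. In this sketch (hypothetical `Readable`/`Writable` protocols), splitting one wide interface into two narrow ones is exactly a "single responsibility" decision, just made at the interface level:

```python
# Interface segregation sketch using typing.Protocol (structural typing).
# Two narrow protocols instead of one wide storage interface.

from typing import Protocol

class Readable(Protocol):
    def read(self, key: str) -> str: ...

class Writable(Protocol):
    def write(self, key: str, value: str) -> None: ...

class InMemoryStore:
    """Satisfies both protocols structurally, without inheriting from them."""
    def __init__(self):
        self._data = {}

    def read(self, key):
        return self._data.get(key, "")

    def write(self, key, value):
        self._data[key] = value

def render_report(source: Readable) -> str:
    # Depends only on the narrow Readable contract, never on write().
    return f"report: {source.read('title')}"

store = InMemoryStore()
store.write("title", "Q3 results")
print(render_report(store))  # report: Q3 results
```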

There are still other issues with SOLID, but I have already written enough about them. The bottom line is that SOLID is only useful in ideal, theoretical situations, and the real world is far from that. In my opinion, the GRASP principles are better and make much more sense.

Fourthly, if you have worked with web services you will have run into REST/RESTful/whatever services. REST is mostly a convention about how to expose resources as URLs on the internet. The thing is, it's pretty much designed for CRUD operations, but a lot of the time CRUD is not enough. Take for example the case where you need to expose a URL that begins executing a long lived process. That's not CRUD anymore: you don't insert/delete/update any data, you execute something. Or even better, what about remote procedure calls, RPCs, as web services? How do they fit in with the REST/RESTful concepts? My current project has plenty of RPC calls built into it over the web, because that is what the customers and end users wanted. It's the only way to structure and code the requirements that we got from them.
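The mismatch is easiest to show with a route table. All URLs below are invented examples: the first two map cleanly onto REST resources, while the action-style and RPC-style routes do not correspond to any resource you create, read, update or delete.

```python
# Illustrative route table (hypothetical URLs): CRUD-shaped routes next to
# action/RPC-style routes that don't map cleanly onto REST resources.

routes = {
    ("GET",  "/orders"):           "list orders",                    # fits REST
    ("POST", "/orders"):           "create order",                   # fits REST
    ("POST", "/orders/42/cancel"): "cancel order 42",                # action, not a resource
    ("POST", "/reports/rebuild"):  "start long-lived rebuild job",   # RPC-style
}

def dispatch(method, path):
    return routes.get((method, path), "404")

print(dispatch("GET", "/orders"))            # list orders
print(dispatch("POST", "/reports/rebuild"))  # start long-lived rebuild job
print(dispatch("DELETE", "/nowhere"))        # 404
```

You can contort the last two into "creating a cancellation resource" or "creating a job resource", but at that point the convention is serving itself rather than the reader of the API.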

With Swagger and other API description methods, it's pretty easy to document web services and make them easy to use by other developers or applications. The REST part of the services is automatically embedded in the Swagger documentation, and if a service uses another set of conventions, it is just as easy to document with Swagger. For example, I have used web services that expose a hierarchy of resources, so you can find a resource under the URL/address of another resource, and so on; I have relied on this a lot in my various jobs.

And there is a lot of debate about what exactly REST means. There is really no single source of truth, and everyone thinks they are right, pointlessly debating REST to no end. I don't care about sticking to a rigid convention; I just want to get work done that is useful and easy to use by other people. I don't need to debate pointless things. Give REST a rest already and forget about it. Just make sane URLs for web services that aren't too complicated, follow the same patterns, and are as predictable as possible.

In the end, there is sometimes too much focus in the software development field on blindly following rules without thinking and using common sense. Each developer should develop their own critical thinking, to see when something is really useful or not, and to see things with both their pros and cons.

