
Speeding up request processing on a server

So lately my colleagues and I had to speed up a server at work so it could process requests faster. In our case there is a lot of logic that needs to be executed, which takes a fair amount of processing power; it's not a typical web server that just renders a simple page. Tens of thousands of lines of code run per request, not the couple of hundred or few thousand you would see in a normal web server.

And to successfully process a request, the server often needs to put it on hold until all the information about that request has been received, freeing the thread to process another request. This is the first place where performance issues can arise, because of context switching: once a request comes off hold and is reprocessed, all of its context must be restored, which can involve loading heavy data from the database. Ideally, you should load the context from the database only once a request is actually ready to be processed, after all the required information has arrived. If you load the full context for a request only to discover that you still don't have everything needed to process it, you have wasted a lot of database calls.

As a general rule, you should do as much work as possible each time you load data from your database. In our case, even after optimization, restoring a request's context from the database takes around 70 milliseconds, which is pretty considerable, and sometimes a request is put on hold three times in a row before it is finally processed. Only load as much data as you need, and then make maximum use of it.
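A minimal sketch of the idea, with illustrative names (the in-memory PENDING map and these functions are not our real API): pay the expensive context load only after a cheap readiness check says the request can actually be completed.

```python
# PENDING maps each request id to the set of information pieces still missing.
PENDING = {}

def load_context(request_id):
    # Stand-in for the ~70 ms database restore; expensive, so call it last.
    return {"id": request_id, "data": "heavy context"}

def try_process(request_id):
    """Return the restored context, or None if the request stays on hold."""
    if PENDING.get(request_id):          # cheap in-memory check, no DB call
        return None                      # still waiting: skip the heavy load
    return load_context(request_id)      # expensive restore happens only now

PENDING["r1"] = {"payment-confirmation"}  # r1 is still missing a piece
PENDING["r2"] = set()                     # r2 has everything it needs
```

The point is simply the ordering: the cheap check runs first, and the heavy load runs only on the path where the work will not be thrown away.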

Secondly, exceptions are bad, as they slow down the code by a considerable amount. In the CLR, more than ten thousand lines of code are executed when an exception is thrown, and each thrown exception takes at least 30 milliseconds to process in general. If you throw one or a few of them while processing a request, your server won't be able to handle more than 10 or 20 requests per second.
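One common way to avoid throw-based control flow, sketched here in Python in the spirit of .NET's int.TryParse (the function name is illustrative): validate up front and return a success flag, so the expected-failure path never pays the cost of a thrown exception.

```python
def try_parse_amount(text):
    """Return (ok, value); never raises for ordinary bad input."""
    digits = text[1:] if text.startswith("-") else text
    if not digits.isdigit():
        return False, 0          # expected failure: no exception thrown
    return True, int(text)
```

Exceptions then stay reserved for genuinely exceptional situations, not for input that is wrong in entirely predictable ways.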

Thirdly, insert or delete operations in the database can be quite costly, especially for complicated table structures with restrictive isolation levels, so you should minimize them. Doing a lot of insert/delete operations while processing a request has a big negative impact on performance; in our case, insert/delete operations take around 50 milliseconds on average. Try to figure out whether you really need that many insert operations, and if possible do them asynchronously, after the request has been processed. Honestly, having bigger tables here with more rows helps. Also keep in mind that adding indexes or materialized views can increase insert and delete times, which can directly slow down the server without you changing a line of code in your application.
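A sketch of deferring non-critical writes (the queue, table name, and execute_batch callback are all illustrative): collect inserts during request processing and flush them in one batch afterwards, instead of paying ~50 ms per statement on the request path.

```python
import queue

write_queue = queue.Queue()  # non-critical writes accumulated per request

def process_request(request):
    result = request["value"] * 2                            # business logic
    write_queue.put(("insert", "audit_log", request["id"]))  # deferred write
    return result

def flush_writes(execute_batch):
    """Drain the queue and hand everything to one batched statement."""
    batch = []
    while not write_queue.empty():
        batch.append(write_queue.get())
    if batch:
        execute_batch(batch)   # one round-trip instead of one per write
    return len(batch)

results = [process_request({"id": "r1", "value": 2}),
           process_request({"id": "r2", "value": 3})]
flushed = []
flushed_count = flush_writes(flushed.extend)
```

This only works for writes the client does not need to see confirmed in the response; anything transactional still belongs on the request path.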

When you need to get some data or wait for more information to finish processing a request, do as much as possible in parallel, and start processing once everything has arrived. That way the request is delayed by the longest retrieval operation instead of the sum of all the retrieval operations. Likewise, if you need to wait for more information, wait for all the individual pieces at once instead of waiting for each piece one at a time.
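A sketch of this with Python's asyncio (the fetch_* coroutines stand in for real network or database I/O): fetch the pieces concurrently so the request waits roughly as long as the slowest call, not the sum of all calls.

```python
import asyncio

async def fetch_user():
    await asyncio.sleep(0.05)   # simulated 50 ms call
    return {"name": "u1"}

async def fetch_orders():
    await asyncio.sleep(0.05)   # simulated 50 ms call
    return [1, 2, 3]

async def build_context():
    # Total wait here is about max(50 ms, 50 ms), not 50 ms + 50 ms.
    user, orders = await asyncio.gather(fetch_user(), fetch_orders())
    return {"user": user, "orders": orders}

context = asyncio.run(build_context())
```

The same shape applies with threads or .NET's Task.WhenAll: launch everything, then await the whole set once.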

If you have a tree-like structure in the database between some tables, it is a very bad idea to do more than 3 or 4 joins between those tables, especially if the root node has a lot of children.
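One alternative, sketched here with illustrative data: instead of joining a hierarchy to itself once per level, pull the relevant rows with a single query and rebuild the tree in memory. The flat (id, parent_id) tuples below stand in for that single query's result set.

```python
rows = [(1, None), (2, 1), (3, 1), (4, 2)]  # (id, parent_id) from one query

def build_children_index(rows):
    """Map each parent id to the list of its child ids."""
    children = {}
    for node_id, parent_id in rows:
        children.setdefault(parent_id, []).append(node_id)
    return children

children = build_children_index(rows)
```

Walking the in-memory index is cheap, and the database does one scan instead of a join whose intermediate result grows with every level of the tree.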

Finally, too much logging in your code can have a big impact too, especially in big loops or frequently repeated actions. Here is the tricky part: with too little logging you won't be able to fix bugs that appear at clients, while too much logging slows down your server. Too many log entries also make debugging harder, because you have much more irrelevant information to wade through. I ran a test: without logging I could process in around 6.5 minutes the number of requests that took around 7 minutes with logging enabled, so a 30-second difference. But the test wasn't perfect, and there may have been other factors.
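A small sketch of keeping log calls in hot paths cheap, using Python's standard logging module: pass lazy %-style arguments instead of pre-formatted strings, so the formatting work is skipped entirely when the level is disabled.

```python
import logging

logger = logging.getLogger("server")
logger.setLevel(logging.WARNING)   # DEBUG entries are filtered out

def handle_item(item):
    # Lazy formatting: the message is never built while DEBUG is disabled.
    logger.debug("processing item %s", item)
    return item * 2
```

Most logging frameworks have an equivalent (deferred formatting or an is-enabled check), which lets you keep detailed log statements in the code while paying almost nothing for them in production.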

Those are pretty much all the lessons that I learned.
