You strive for good design patterns because they are useful for building reliable, scalable, and secure applications. Good design encompasses factors such as consistency and coherence in component design and deployment. Decisions made during the design and implementation phase have a huge impact on the quality and total cost of ownership of an application. I often like to think a good design pattern sets us up for success. It is like creating a pit of success: succeeding should be as simple as falling into a pit, but that is only possible with strategic intentionality and discernment.
If you do not know where you’re going, all roads lead you there. If you know where you are going, only a few roads bring you there.
If you wish to achieve efficiency, scalability, reliability, and overall success in software engineering, you have to set yourself up for it. One such way is to gain clarity of the bigger picture through design patterns.
1. Competing Consumers pattern
You want to consider this design pattern when you wish to enable multiple concurrent consumers to process messages received on the same messaging channel, in scenarios where the workload varies widely. This allows messages from the same channel to be processed concurrently to optimise overall throughput, improve scalability and availability, and balance the workload. Note that this works only for tasks that can be run asynchronously, independently, and in parallel. Consider this design if the desired solution must provide high availability and remain resilient if the processing of a message fails. One good use case is an independent service, such as a bank transfer transaction.
The amount of workload may vary, and this pattern provides a load-levelled system to handle this wide variation in volume. The pattern serves as a buffer between the applications and the consumer services. See it like a funnel, in which all messages reach consumers through one single channel. Once a message is placed on the channel, any one message consumer will take it up asynchronously; a single long-running process does not prevent other messages from being handled concurrently by other consumer service instances.
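The funnel described above can be sketched with Python's standard library, using a `queue.Queue` as the shared channel and a few threads as the competing consumer instances. This is a minimal illustration, not a production message broker; all names here are hypothetical.

```python
import queue
import threading
import time

# One shared channel (the "funnel") that all competing consumers drain.
task_queue = queue.Queue()
results = []
results_lock = threading.Lock()

def consumer(worker_id: int) -> None:
    """A competing consumer instance: takes whatever message is next."""
    while True:
        message = task_queue.get()
        if message is None:          # sentinel: shut this worker down
            task_queue.task_done()
            break
        # A slow message here does not block the channel; the other
        # workers keep consuming concurrently.
        time.sleep(0.01)
        with results_lock:
            results.append((worker_id, message))
        task_queue.task_done()

workers = [threading.Thread(target=consumer, args=(i,)) for i in range(3)]
for w in workers:
    w.start()

for msg in range(10):                # producer: ten independent messages
    task_queue.put(msg)
for _ in workers:                    # one shutdown sentinel per worker
    task_queue.put(None)

task_queue.join()
for w in workers:
    w.join()

print(len(results))  # 10: every message consumed exactly once
```

Each message is delivered to exactly one consumer, and which worker handles which message is nondeterministic, which is precisely why this pattern cannot guarantee ordering.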
Going through a queue, instead of communicating directly between applications and consumer services, improves reliability. With direct communication, there is a high probability of messages being lost or failing to be processed if the consumer fails. In this pattern, however, a failed service instance does not cause a message to be lost; the message remains on the queue and is picked up by a working service instance. A queue also improves resiliency: if a request, such as a transaction, is being processed by a service instance that then fails, the message can be returned to the queue to be picked up and processed by another working consumer service instance.
There are definitely some limitations with this design pattern.
a) Message ordering is not guaranteed. The pattern assumes independent service requests that are consumed and executed independently of other messages. Thus, use cases such as live chat messaging services may not fit this particular design pattern.
b) You have to ensure the messaging system is designed to deliver each message to only one consumer service instance. This approach can be tied in with other design patterns. I don't have any one in particular in mind right now, but it is definitely open for discussion.
c) Design a poison message detector. A poison message, such as one requesting resources that are not available, can cause a service instance to fail and should not be returned to the message queue. Instead, the system should store the details of these messages elsewhere so they can be analysed if necessary.
d) The competing consumers design pattern fully decouples the application logic that generates the request, called the producer, from the consumer service instances. Thus, if the application requires a result to be passed back, it has to be done through another location that both instances can access. In Microsoft Azure, a worker process can pass results back through a dedicated message reply queue, and the application must be able to correlate the results with the original message.
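Point (c) can be made concrete with a small sketch. Here, a message that keeps failing is retried a bounded number of times and then parked in a dead-letter store for later analysis instead of being requeued forever; the `MAX_ATTEMPTS` threshold and the in-memory `dead_letters` list are assumptions for illustration only.

```python
import queue

# Hypothetical poison-message handling: after MAX_ATTEMPTS failed
# deliveries, a message is moved to a dead-letter store for analysis
# instead of being returned to the main queue.
MAX_ATTEMPTS = 3
main_queue = queue.Queue()
dead_letters = []  # stand-in for a persistent dead-letter store

def process(body):
    if body == "poison":                      # simulated unprocessable message
        raise ValueError("cannot process message")
    return body.upper()

def consume_one():
    body, attempts = main_queue.get()
    try:
        return process(body)
    except ValueError:
        if attempts + 1 >= MAX_ATTEMPTS:
            dead_letters.append(body)          # park it, don't requeue
        else:
            main_queue.put((body, attempts + 1))  # retry later
        return None
    finally:
        main_queue.task_done()

main_queue.put(("hello", 0))
main_queue.put(("poison", 0))
while not main_queue.empty():
    consume_one()

print(dead_letters)  # ['poison'] - the bad message never loops forever
```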
Do note: do not consider this pattern if the tasks must be performed in a specific sequence.
Reference: Competing Consumers pattern
2. Cache-Aside pattern
This pattern helps to improve repeated access to information. It performs a read-through strategy: first check whether the data exists in the cache; if not, access the database to retrieve the data, then store it in the cache for future reference. Note that the frequency of invalidating the cache and fetching from the database should not be too high, else it can cause applications to continually retrieve from the data store. Consider this design pattern only if your cache does not provide native read-through and write-through operations.
Write into the cache only data that will be frequently re-read; otherwise, the latency of writing to the cache may not be worth it, and the data could simply be written to the data store directly.
There are three different storage methods:
- Write-through: high write latency with data reliability and low read latency. Good if data is written infrequently but read frequently
- Write-around: low write latency with data reliability but high read latency. Good if data is written frequently but read infrequently
- Write-back: low write latency and low read latency, with a risk to data reliability. Good if data is written frequently and is non-crucial data that can afford to be lost within an allotted flush interval
Understanding the use case for each of these storage methods will help to optimise the application's workflow at each of its components, achieving an overall optimisation and performance boost.
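The trade-off between the first and last methods can be sketched as follows, assuming simple dicts for the cache and the backing store; the `dirty` set marks write-back data that is at risk until the next flush.

```python
# Hypothetical sketch contrasting write-through and write-back.
cache, store = {}, {}
dirty = set()  # keys written to the cache but not yet flushed to the store

def write_through(key, value):
    cache[key] = value
    store[key] = value       # store updated synchronously: reliable, slower write

def write_back(key, value):
    cache[key] = value       # fast write: only the cache is touched
    dirty.add(key)           # data is at risk until the next flush

def flush():
    for key in dirty:
        store[key] = cache[key]  # deferred persistence
    dirty.clear()

write_through("a", 1)
write_back("b", 2)
print("b" in store)  # False: not yet persisted; lost if the cache dies now
flush()
print("b" in store)  # True
```

A write-around sketch would skip the cache entirely in the write function, writing only to `store`, so the next read of that key is a cache miss.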
3. Gatekeeper pattern
This pattern protects applications and services by using a dedicated host instance that serves as a broker between client and server to sanitize, validate, and pass requests and data between them. You can consider this design pattern if you wish to add an additional layer of security to your publicly exposed services.
The gatekeeper should always run in a limited-privilege mode and be hosted separately from the trusted services. Gatekeepers should also not have access to the credentials or keys used by the trusted host to access storage or protected services. That way, even if the gatekeeper is compromised, it does not expose any secrets or keys to sensitive information.
The gatekeeper serves as a firewall: it sanitizes and verifies the validity of requests before passing them through to trusted services. You can also run multiple gatekeeper instances to prevent a single point of failure. Often, the gatekeeper sits in the demilitarized zone (DMZ).
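A minimal sketch of the broker role might look like this. The `gatekeeper` and `trusted_service` functions and the validation rule are assumptions for illustration; the point is that the gatekeeper validates and sanitizes, while any credentials live only behind the trusted service.

```python
import re

def trusted_service(request):
    # The trusted host would attach its own credentials internally;
    # the gatekeeper never sees them.
    return {"status": 200, "data": f"profile for {request['user_id']}"}

def gatekeeper(raw_request):
    """Broker between client and server: sanitize, validate, forward."""
    user_id = raw_request.get("user_id", "")
    # Validate: reject anything that is not a simple alphanumeric id.
    if not re.fullmatch(r"[A-Za-z0-9]{1,16}", user_id):
        return {"status": 400, "error": "invalid user_id"}
    # Sanitize: forward only the fields the trusted service expects.
    return trusted_service({"user_id": user_id})

print(gatekeeper({"user_id": "alice42"})["status"])          # 200
print(gatekeeper({"user_id": "'; DROP TABLE--"})["status"])  # 400
```

Because the gatekeeper rejects the malformed request outright, the trusted service only ever sees pre-validated input.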
Reference: Gatekeeper pattern
4. Sidecar pattern
You want to consider this design pattern if you wish to deploy components of an application into a separate process or container to provide isolation and encapsulation. A sidecar application is attached to the main application to provide supporting features, sharing the same lifecycle as the parent application: it is created and retired alongside it.
Some examples of services that can apply the sidecar design pattern are logging, monitoring, configuration, and networking services. Running these services in a sidecar decouples them from the language and technologies in which the main application is written. Despite this decoupling, the sidecar can share the same resources as the parent application, enabling it to monitor the system resources used by both the sidecar and the parent application.
The beauty of a sidecar pattern is the proximity it has with the parent application, allowing it to share the same resources, but at the same time allow flexibility and provide extensibility for the parent application’s service. The sidecar can also share the overall lifecycle of the application while being independently updated.
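The lifecycle coupling described above can be sketched as follows. A real sidecar runs as a separate process or container next to the parent; a thread is used here only to keep the sketch self-contained, and the logging sidecar and its queue are hypothetical names.

```python
import queue
import threading

# Hypothetical lifecycle sketch: a logging "sidecar" is started with the
# parent application and retired with it.
collected = []
log_queue = queue.Queue()

def logging_sidecar():
    """Supporting feature isolated from the main application's code."""
    while True:
        line = log_queue.get()
        if line is None:        # parent shutting down: retire together
            break
        collected.append(f"[sidecar] {line}")

sidecar = threading.Thread(target=logging_sidecar)
sidecar.start()                 # created alongside the parent...

log_queue.put("app started")    # the parent application does its work
log_queue.put("did some work")

log_queue.put(None)
sidecar.join()                  # ...and retired with it

print(collected)
```

The parent never calls into the logging code directly; it only hands messages across a channel, which is what lets the sidecar be written and updated independently.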
Note that you should not consider this design pattern if the sidecar service needs to scale differently from, or independently of, the main application. In such cases, it is better to deploy the service independently. Also, if your application prioritizes inter-process communication, note that a sidecar adds latency, possibly notable latency, to calls. It is not the most suitable for chatty interfaces.
Reference: Sidecar pattern