I define Serverless as an approach to using the cloud (in my case, AWS) that exclusively uses fully managed services billed based on actual usage. Typically, this means API Gateway backed by Lambda microservices. The approaches detailed here have different pros and cons, and each is suitable for particular use cases.
The Monolith
It may be a shock to some, but it is absolutely possible to develop monolithic solutions in a Serverless architecture: you simply develop all of your code in a single Lambda microservice, though 'micro' may no longer be the right term given the size it can reach. I have used this approach for rapid prototyping because it is fast, letting you prove a theory or understand a particular approach quickly. However, the result is rarely suitable for a production environment and can easily degenerate into spaghetti code. That said, for small and simple applications, such as processing a registration form, it can also be a suitable approach.
Pros:
- Fast, all the code sits together.
- Easy, just one Lambda to manage.
- Low latency because everything happens in a single microservice.
Cons:
- Unlikely to be suitable for production.
- High risk of spaghetti code.
Use case:
- Rapid prototyping and experiments.
- Simple backends such as a form processor.
Shared Layer
Layers are a feature in Lambda where you can add code that can be included in one or more microservices. Often these are used to store third-party libraries to help segregate your own code from external code. This helps facilitate updating the third-party code without touching your own code. This can be beneficial, especially for frequently updated open-source libraries.
Of course, you are free to use the layer feature as you wish, so another common pattern is to store your own shared scripts and classes there so that multiple microservices can use them. This can be taken to the extreme, resulting in what is essentially a monolithic app sitting in the layer, with the Lambda microservices acting as simple gateways that do minimal input processing before calling the layer for the bulk of the application's capabilities.
Despite having many Lambda microservices in this design, this is not a true microservice architecture because the microservices are not independent. If the layer breaks, the entire application breaks, so it is really just a variant of the monolith. This can be a good approach for simple backends that essentially perform the same processing on each type of input. For example, basic CRUD (Create-Read-Update-Delete) processing of multiple API resources or paths with a simple SQL data model. This approach will significantly minimise code duplication compared to an independent microservice approach.
Pros:
- Fast, most of the code sits together in a layer.
- Minimise code duplication.
- Some segregation through the separate Lambda microservices, enabling a degree of customisation per service if needed.
Cons:
- Risk of complete failure if the layer breaks.
- Risk of spaghetti code as most of the code is in a single layer.
Use case:
- Simple backends with very similar processing in each request, such as a CRUD API.
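A sketch of the shared-layer pattern. In Lambda, layer contents are extracted under /opt, so every handler could simply import the shared module; here the "layer" code and one thin handler live in a single file for illustration, and all names (the store, the handler) are hypothetical:

```python
import json

# --- shared layer code: packaged once, attached to many Lambdas ---
_STORE = {}  # stand-in for a real data store such as DynamoDB or RDS

def create(resource, item_id, item):
    _STORE.setdefault(resource, {})[item_id] = item
    return item

def retrieve(resource, item_id):
    return _STORE.get(resource, {}).get(item_id)

# --- one of several thin Lambda gateways built on the shared layer ---
def users_handler(event, context):
    # Minimal input processing; the shared layer does the real work.
    if event["httpMethod"] == "POST":
        item = json.loads(event["body"])
        return {"statusCode": 201, "body": json.dumps(create("users", item["id"], item))}
    return {"statusCode": 200, "body": json.dumps(retrieve("users", event["id"]))}
```

Each additional resource (orders, products, and so on) would get its own equally thin handler reusing the same `create`/`retrieve` functions, which is where the duplication savings come from.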
Independent microservices
Each Lambda microservice is entirely independent in this approach and should give the expected output for a given input even when other microservices are unavailable. I am still partial to using layers for third-party libraries in this approach, but layers should not be used for shared resources, as that would create dependence. Generally, this can easily be avoided with the right architecture design. A third-party image processing library in a layer should only need to be used by an image processing microservice, for example. While the layer and the function are two separate components that depend on each other, together they are still a contained microservice with a defined capability.
Pros:
- Highly fault-tolerant, supporting detailed monitoring and self-healing due to independence of microservices.
- Individual components are highly scalable in real-time.
- Cost-effective and can be beneficial for security and privacy.
- Reusable components, microservices can be used across multiple applications.
Cons:
- Risk of high latency if multiple microservices and services are needed for a single request.
- Very different application design with a learning curve to do it right.
- Complex architecture with many individual components.
- Duplicate code is not uncommon, though it can be managed pre-build.
Use case:
- Complex backends for APIs and other Serverless applications.
- Asynchronous processing pipelines.
- Security and other monitoring, automated testing and deployment.
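A sketch of one fully independent microservice: it consumes its own queue, owns its whole capability, and shares no code or layers with other services. The event shape follows SQS; the processing itself is a hypothetical stand-in:

```python
import json

def handler(event, context):
    # Each SQS record is processed independently. A failure here is
    # isolated to this one microservice and does not break the rest of
    # the application, which is what enables self-healing and per-service
    # monitoring and scaling.
    results = []
    for record in event.get("Records", []):
        payload = json.loads(record["body"])
        # This service's single defined capability (illustrative only).
        results.append({"id": payload["id"], "status": "processed"})
    return {"processed": len(results), "results": results}
```

Because the contract is just the message format, the same service can be reused by any application that can put a conforming message on its queue.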
On-demand Servers (or containers)
While a fully Serverless microservice architecture is an excellent option for many use cases, a microservice does not have a server's raw processing power. For use cases that involve data science, complex machine learning models, heavy processing, or GPUs, microservices alone will be insufficient.
The on-demand approach is typically asynchronous. A microservice can accept a given request, validate the input and report back to the requester that the request has been received and that it will be processed.
The microservice will then launch a container, server, or even a fleet of servers to handle the request. Typically there is a means to monitor when the job has completed or failed, and another microservice that can then finalise, shut down the container or server(s) and notify the requester that the request has completed.
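The accepting side of this flow might be sketched as follows, assuming a hypothetical "processing-cluster" ECS cluster and "heavy-processing" Fargate task definition; the ECS client is injectable so the sketch can be exercised without AWS credentials:

```python
def launch_processing_task(ecs_client, job_id, input_uri):
    """Start one container for the job and return its task ARN."""
    response = ecs_client.run_task(
        cluster="processing-cluster",          # assumed cluster name
        launchType="FARGATE",
        taskDefinition="heavy-processing:1",   # assumed task definition
        networkConfiguration={                 # Fargate requires awsvpc config
            "awsvpcConfiguration": {"subnets": ["subnet-example"]}
        },
        overrides={"containerOverrides": [{
            "name": "worker",
            "environment": [
                {"name": "JOB_ID", "value": job_id},
                {"name": "INPUT_URI", "value": input_uri},
            ],
        }]},
    )
    return response["tasks"][0]["taskArn"]

def handler(event, context, ecs_client=None):
    if ecs_client is None:
        import boto3  # only needed on the real AWS path
        ecs_client = boto3.client("ecs")
    task_arn = launch_processing_task(ecs_client, event["job_id"], event["input"])
    # Acknowledge immediately; a separate microservice (for example, one
    # triggered by an EventBridge task-state-change event) would finalise,
    # shut things down, and notify the requester on completion.
    return {"statusCode": 202, "body": {"job_id": event["job_id"], "task": task_arn}}
```

The HTTP 202 response captures the asynchronous contract: the request is accepted, not completed.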
Pros:
- Use server-based resources while still only paying for actual usage.
- Access to capabilities not supported by Lambda microservices such as GPU.
Cons:
- Risk of potentially expensive 'zombie' resources that need to be managed.
- Latency, it can take time to start the container or server(s).
- Servers, their operating systems and software need to be maintained.
Use case:
- Any processing use case that needs more resources (Memory, CPU, execution time or storage) than Lambda microservices provide.
- GPU requirements such as machine learning, 3D or video rendering, etc.
Island or sectioned approach
This approach splits a large and complex backend into smaller sections or islands. Each section typically contains more functionality than a single microservice, as the split is based on infrastructure needs rather than functionality. Each section or island is given the architecture design appropriate to its needs, and there is an agreed communication structure through which the sections communicate with one another as needed.
For example, there might be a CRUD API section that uses a shared layer approach. This can pass asynchronous jobs off to a pipeline section that uses a true microservice approach. There might also be a data processing section using an on-demand approach to launch a fleet of servers on a schedule to process large amounts of data.
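One way to keep such sections decoupled is an agreed message envelope between them. A minimal sketch, with hypothetical section and job-type names; in practice the envelope would travel over a service such as SQS or EventBridge, with plain dicts standing in here:

```python
# The only contract the sections share is the envelope, not code.
ACCEPTED_JOB_TYPES = {"thumbnail", "transcode"}  # illustrative vocabulary

def make_message(source_section, job_type, payload):
    # Every section, whatever its internal architecture, emits this shape.
    return {"source": source_section, "job_type": job_type, "payload": payload}

def pipeline_intake(message):
    # The pipeline section honours the envelope regardless of how the CRUD
    # section (shared layer) or data section (on-demand) is built inside.
    if message["job_type"] not in ACCEPTED_JOB_TYPES:
        return {"accepted": False, "reason": "unknown job_type"}
    return {"accepted": True, "job": message["payload"]}
```

Because each island only depends on the envelope, any section can be re-architected internally without touching the others.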
Pros:
- Pick the best architecture for each section of a complex application.
- Not limited to specific cloud services, can use the right tool for the job.
- The benefits of each section’s approach can be considered.
Cons:
- Very complex architecture with multiple approaches, not for the faint of heart or inexperienced teams.
- The cons of each section’s approach should be considered.
Use case:
- Large complex applications and backends with different needs that cannot be solved with a single approach.
Have you tried other approaches, or do you have anything to add to the above? Connect with me on LinkedIn and let me know! https://linkedin.com/in/thomasjsmart