What is Serverless Computing?

Ram Kumar OSP
Published On :

Serverless computing is an agile, modern approach to building and deploying applications without managing servers, storage, or networking. As a developer, you can focus purely on designing business logic and writing code, with no need to burden yourself with server management, capacity planning (scaling up and down), or networking.

There are significant advantages to adopting serverless computing over traditional deployment methods. First, there is no operating system maintenance, network configuration, or patching required. You don't have to worry about maintaining any underlying infrastructure, which saves time and resources, especially when speed to market matters. Additionally, no capacity planning is required, as the underlying infrastructure scales up and down automatically. It simply grows as your business grows.

Serverless is Going to Be a Game Changer

A game-changing advantage of serverless computing is that you pay for exactly what you use. To put this into perspective, traditional cloud infrastructure is also billed as pay-per-use, but you pay for the full provisioned capacity whether the resource sits at 0% or 100% utilization. Your servers may be idle or underutilized when there are fewer business transactions than usual, yet as long as a server is on, you pay for it. In contrast, with serverless computing, infrastructure is consumed only when needed, which can result in significant cost savings.

One of the key benefits of serverless computing is that it allows you to build and test each business functionality independently. This means that you can focus on each function individually, rather than having to worry about the overall architecture. This approach can help you develop your application more quickly and efficiently, leading to faster time-to-market.

AWS Lambda / Azure Functions / Google Cloud Functions

All major cloud providers offer some form of serverless computing - notably AWS Lambda, Google Cloud Functions, and Azure Functions. These platforms are built on an event-driven architecture, which means that a function (or an app or a feature) is invoked only when a certain event occurs. This approach helps ensure that you are only using resources when needed, which can result in significant cost savings.
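To make the event-driven model concrete, here is a minimal sketch of a function handler in the AWS Lambda style (Python runtime assumed; the event shape mimics an API Gateway HTTP request, and the handler and field names here are illustrative, not tied to any real deployment):

```python
import json

def handler(event, context):
    # The platform invokes this function only when an event arrives,
    # e.g. an HTTP request routed through API Gateway. No server runs
    # between invocations.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    # Return the HTTP-style response shape API Gateway expects.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Between invocations, nothing is running and nothing is billed; the platform provisions an execution environment on demand and charges only for the time the handler actually executes.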

Some of the other benefits of serverless computing include zero configuration or maintenance, pay-for-execution pricing, comprehensive language/framework support, and high uptime. The infrastructure is completely abstracted from the user, which means that you can focus on developing your application without having to worry about the underlying infrastructure.

To Serverless Or Not

To sum up, serverless computing is an innovative approach to building and deploying applications that offers many advantages over traditional deployment models. The best way to fully utilize the power of serverless computing is to use it for testing and development, and in cases where the volume of business transactions varies widely. Where transaction volumes are predictable and the product is mature, the traditional route of VMs or containers will still be the better choice.

Getting Started is Cheap. But, Watch Out for Cost Escalations

At the time of writing this blog, AWS charges just $0.0000166667 for every GB-second of memory used and $0.20 per million requests. That sounds insanely cheap: you simply pay for memory used and requests served. But as you scale up, your apps use more memory and make more and more requests, and there is no built-in way to have a cloud provider cap the total cost incurred. For example, if a bug causes your application to make millions of unintended requests, that can translate into thousands of dollars in cloud costs. It therefore becomes critical that your apps are bug-free and built efficiently with a low memory footprint.
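A quick back-of-envelope estimate shows how the rates quoted above add up. The sketch below uses those two prices; the function name and the example workload (5 million invocations at 512 MB for 200 ms each) are illustrative:

```python
# Rates quoted in this post (check current AWS pricing before relying on them):
PRICE_PER_GB_SECOND = 0.0000166667   # USD per GB-second of memory-time
PRICE_PER_MILLION_REQUESTS = 0.20    # USD per one million invocations

def lambda_cost_estimate(invocations, memory_mb, avg_duration_s):
    """Estimate compute + request cost for a batch of invocations."""
    gb_seconds = invocations * (memory_mb / 1024) * avg_duration_s
    compute_cost = gb_seconds * PRICE_PER_GB_SECOND
    request_cost = (invocations / 1_000_000) * PRICE_PER_MILLION_REQUESTS
    return compute_cost + request_cost

# 5M invocations x 512 MB x 200 ms = 500,000 GB-seconds:
# ~$8.33 compute + $1.00 requests, roughly $9.33 total.
cost = lambda_cost_estimate(5_000_000, memory_mb=512, avg_duration_s=0.2)
```

The same formula shows why a runaway bug is expensive: the cost scales linearly with both request count and memory-time, with no ceiling.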

Monitoring Serverless Functions with SnappyFlow

SnappyFlow can help monitor all of your apps running on a serverless architecture. For example, application logs or metrics can be directly sent to SnappyFlow through code instrumentation or by using a simple extension that can forward all logs to SnappyFlow.

To know more about how SnappyFlow can help you monitor your serverless deployments, talk to us today.

What is Trace Retention?

Tracing is an indispensable tool for application performance management (APM) providing insights into how a certain transaction or a request performed – the services involved, the relationships between the services and the duration of each service. This is especially useful in a multi-cloud, distributed microservices environment with complex interdependent services. These data points in conjunction with logs and metrics from the entire stack provide crucial insights into the overall application performance and help debug applications and deliver a consistent end-user experience.
Among all observability data ingested, trace data is typically stored for only an hour or two, because trace data by itself is humongous. A single transaction can involve multiple services or APIs, and an organization running thousands of business transactions an hour generates hundreds of millions of API calls an hour. Storing traces for all these transactions would require terabytes of storage and extremely powerful compute engines for indexing, visualization, and search.

Why is it Required?

To strike a balance between storage/compute costs and troubleshooting ease, most organizations choose to retain only a couple of hours of trace data. But what if we need historical traces? Modern APM tools like SnappyFlow can intelligently and selectively retain certain traces beyond this couple-of-hours limit. This is enabled for important API calls and for calls the tool deems anomalous. In most troubleshooting scenarios, we do not need all the trace data. For example, a SaaS-based payment solutions provider would want to monitor the important APIs/services related to payments rather than, say, customer support services.

Intelligent trace retention with SnappyFlow

SnappyFlow by default retains traces for:
HTTP requests with durations > 90th percentile (anomalous incidents)
In addition to these defaults, users can specify rules to filter traces by service, transaction type, request method, response code, and transaction duration. These rules run every 30 minutes, and all traces that satisfy them are retained for future use.
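The retention logic described above can be sketched as a periodic filter: keep traces slower than the 90th-percentile duration, plus any trace matching a user-defined rule. This is a simplified illustration, not SnappyFlow's actual implementation; the field names (`service`, `duration_ms`) and rule format are assumptions:

```python
from statistics import quantiles

def select_traces(traces, rules):
    """Return the subset of traces to retain beyond the default window.

    traces: list of dicts, each with at least a "duration_ms" field.
    rules:  list of dicts; a trace matches a rule when every key/value
            pair in the rule equals the trace's corresponding field.
    """
    durations = sorted(t["duration_ms"] for t in traces)
    p90 = quantiles(durations, n=10)[-1]  # 90th-percentile duration
    keep = []
    for trace in traces:
        anomalous = trace["duration_ms"] > p90
        matches_rule = any(
            all(trace.get(k) == v for k, v in rule.items())
            for rule in rules
        )
        if anomalous or matches_rule:
            keep.append(trace)
    return keep
```

Run on a schedule (every 30 minutes in SnappyFlow's case), a filter like this retains only the interesting slice of an otherwise enormous trace stream.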
With the built-in trace history retention and custom filters enabled, SREs and DevOps practitioners can look further to understand historical API performance, troubleshoot effectively and provide end-users with a consistent and delightful user experience.