Concerns that go away in a serverless world

AWS · DevOps · Web Development

A common claim from serverless advocates is that it can increase a team’s feature velocity and reduce the ops overhead of running an application in the cloud. But if you’re an engineering manager, developer or ops engineer who has never run a serverless workload in production, you may find this claim hard to visualise and quantify.

How will serverless increase our team’s productivity?

In what ways will it affect the skillsets we need in our engineering team?

To help answer these questions, I’ve compiled a list of specific concerns and tasks encountered when building and operating server-based applications that no longer apply when using a serverless architecture.

By the end of the article, you should have a clearer picture of how selecting a serverless architecture can potentially reduce the total cost of ownership of a new application within your organisation.

When reading through the list, try to think of a server-based app you are currently or have previously worked on, and ask yourself the following questions about each list item:

  • Do I have the knowledge to complete this task myself, or is this something a specialist or a more experienced member of my team takes care of? If the latter, how much time would it take me to understand what’s involved in doing it?
  • How much time do I (or another member of my team) currently spend doing this, and how frequently does it need to be repeated?
  • Even if I can do this task myself, am I confident that I’ve implemented it correctly using best practices and that I haven’t introduced any security holes or performance issues?
  • Is this something we don’t currently do on our team (due to time or skill constraints) but know we should be doing, and have we therefore taken on extra risk by omitting it?

A note on scope: I focus on the AWS ecosystem so the items listed below primarily relate to the EC2, VPC, RDS, ECS/EKS and ELB services Amazon provides. Nevertheless, most points are generally applicable to other major cloud providers. I’ve assumed the server-based system is run either in containers or directly on the virtual machine instances.

OK, let’s dive into the list of concerns that go away in a serverless world…

Server Provisioning and Scaling

  • Configure AMIs for your VM instances with a specific OS version and any required application software
  • Set up a VPC and subnets using best-practice security settings
  • Configure security groups and identify what ports need to be open on each instance
  • Create launch configurations and auto scaling groups for each EC2 instance type (see the sketch after this list)
  • Configure load balancers and associated health checks
  • Set up internet gateways
  • Configure route tables
  • Configure VPC peering
  • Configure RDS cluster with appropriate storage and instance size
  • Regularly observe load-related metrics and modify scaling limits or instance resource allocation accordingly
  • Repeat most of the above steps for each environment (dev, test, staging, production)
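
To give a feel for what even one of these items involves, here’s a minimal boto3 sketch of creating a launch configuration and auto scaling group (the item referenced above). All IDs, names and ARNs are placeholders of my own choosing, and a real setup would also need scaling policies, IAM roles and per-environment copies:

```python
# Minimal sketch of provisioning an auto scaling group with boto3.
# The AMI ID, security group, subnets and target group ARN are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

# 1. Define how each instance is launched (AMI, instance type, security group)
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-app-lc-v1",
    ImageId="ami-0123456789abcdef0",          # baked AMI with app dependencies
    InstanceType="t3.medium",
    SecurityGroups=["sg-0123456789abcdef0"],
)

# 2. Create the group itself, spread across two private subnets and
#    registered with a load balancer target group for health checks
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-app-asg",
    LaunchConfigurationName="web-app-lc-v1",
    MinSize=2,
    MaxSize=6,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/web-app/abc123"
    ],
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)
```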

Application Development & Maintenance

  • Define your container environment (Dockerfile)
  • Configure your container orchestration cluster (ECS, Kubernetes, etc)
  • Configure the pods/services/task definitions within your cluster (a task definition sketch follows this list)
  • Debug container inter-connectivity/service discovery issues
  • Write a script to deploy the build artifact (Docker image, zip file) to EC2 instances
  • Regularly update the base Docker image with the latest patches (e.g. to Node.js/Python/Java or whatever language your app uses)
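
As an illustration of the task definition item above, here’s a hedged boto3 sketch of registering a new ECS task definition revision and rolling it out to a service. The cluster, service, role and image names are hypothetical placeholders:

```python
# Minimal sketch of deploying a new container image to ECS with boto3.
# All account IDs, names and ARNs below are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="eu-west-1")

# Register a new revision of the task definition pointing at the new image tag
response = ecs.register_task_definition(
    family="web-app",
    networkMode="awsvpc",
    requiresCompatibilities=["FARGATE"],
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "web-app",
            "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/web-app:1.2.3",
            "portMappings": [{"containerPort": 3000, "protocol": "tcp"}],
            "essential": True,
        }
    ],
)

# Point the service at the new revision; ECS then performs a rolling deployment
ecs.update_service(
    cluster="web-app-cluster",
    service="web-app-service",
    taskDefinition=response["taskDefinition"]["taskDefinitionArn"],
)
```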

Server Maintenance

  • Set up a secure VPN/SSH bastion instance (and keep it patched)
  • Manage VPN/SSH access to different servers for authorised engineers
  • Manage regular patching of all VM instances, either manually over SSH or automated via a script or Systems Manager (see the sketch after this list)
  • Be available to promptly deploy emergency patches (e.g. Heartbleed)
  • Set up alerts to be notified about emergency patches
  • Set up monitoring to watch for low disk space
  • Manually expand a volume when it’s out of space
  • Handle SSL certificate renewal and deployment (if installing keys directly to instances and not just to load balancers where it’s managed by AWS)
  • Repeat most of the above steps for each environment (dev, test, staging, production)
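
For the patching item above, here’s a minimal sketch of kicking off OS patching across tagged instances with Systems Manager rather than SSHing into each box. The Environment tag convention is an assumption on my part:

```python
# Minimal sketch of triggering OS patching via AWS Systems Manager with boto3.
# The tag key/value used to target instances is a placeholder convention.
import boto3

ssm = boto3.client("ssm", region_name="eu-west-1")

response = ssm.send_command(
    Targets=[{"Key": "tag:Environment", "Values": ["production"]}],
    DocumentName="AWS-RunPatchBaseline",   # AWS-managed patching document
    Parameters={"Operation": ["Install"]},
    Comment="Monthly security patching run",
)

print("Patch command ID:", response["Command"]["CommandId"])
```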

Cost Control

  • Pay for an EC2/RDS/ElastiCache instance when it’s not in use
  • Over-provision instances to handle occasional sudden traffic spikes
  • Write cron jobs to spin down dev/test environment instances in the evenings and at weekends (see the sketch below)
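
And for that last item, a minimal sketch of the kind of scheduled job that stops tagged dev/test instances out of hours, again assuming a hypothetical Environment tag convention (ironically, this is often run as a Lambda function):

```python
# Minimal sketch of a scheduled job that stops dev/test instances out of hours.
# The Environment tag convention is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Find running instances tagged as dev or test
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Environment", "Values": ["dev", "test"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [
    instance["InstanceId"]
    for reservation in reservations
    for instance in reservation["Instances"]
]

if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print(f"Stopped {len(instance_ids)} instances: {instance_ids}")
```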

Related: How to calculate the billing savings of moving an EC2 app to Lambda.

By now, you’re probably shouting “That’s great Paul, but what about all the new concerns that serverless brings?” And you would be correct to do so, my healthily skeptical friend!

I do intend to write about this soon, but for now I will point you at this article on Containers vs Serverless from a DevOps standpoint and also Martin Fowler’s section on the drawbacks of operating serverless systems.

That said, and while this is very difficult to measure objectively, I personally believe that for most development teams building a greenfield production system in the cloud today, the total cost of ownership of a serverless app will be lower than that of a server-based app.

This is especially true if your organisation doesn’t have a skilled ops team already in place with availability to help your dev team build out the infrastructure and automation required to provision and maintain your application.

Got any more items that I should add to the list? I’m sure there are plenty of things I’ve forgotten, so please leave a comment below and I’ll add them in.
