Takeaways from Programming AWS Lambda by Mike Roberts and John Chapin
I spoke with Mike Roberts about his new book, Programming AWS Lambda, and we discussed a number of fundamentals for building serverless applications.
Recently, Symphonia co-founders Mike Roberts and John Chapin wrote a book called Programming AWS Lambda: Build and Deploy Serverless Applications with Java. I personally abandoned Java long ago, but I knew full well that anything written by Mike and John was sure to be great. So despite the title (and my past war stories of working with Java), I picked up the book and gave it a read. I discovered that it's not really a book about Java, but a book about building serverless applications with the examples in Java. Sure, there are a few very Java-specific things (which every Java developer probably needs to read), but overall, this book offers some great insight into serverless from two experts in the field.
I had the chance to catch up with Mike on a recent episode of Serverless Chats. We discussed the book, how John and Mike got started with serverless (by building Java Lambda functions, of course), and what are some of the best practices people need to think about when building serverless applications. It was a great conversation (which you can watch/listen to here), but it was also jam packed with information, so I thought I'd highlight some of the important takeaways.
Taking a Strong Stance on Testing
The book offers extensive insight on testing serverless applications, and it takes a really strong stance against using local mocking libraries. Plain and simple, Mike and John think local mocking libraries are a bad idea. They can be great for experimentation, but once you rely on them in your test suite, you've created a more complex, more brittle testing setup. If you're calling an API, it's likely just returning JSON. Instead of using a library, capture that JSON as a static fixture and test against it; then you know the response is always the same, and it isn't going to change because of some update to localstack or another local mocking tool.
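A minimal, dependency-free sketch of the static-fixture idea (the payload, class, and method names here are all made up for illustration): instead of spinning up a local mock of the downstream API, capture one real JSON response and test your parsing logic against it.

```java
public class StaticFixtureExample {
    // The logic under test: pull a field out of the API's JSON payload.
    // (Naive string parsing keeps this self-contained; a real project
    // would use Jackson or Gson.)
    static String extractStatus(String json) {
        int start = json.indexOf("\"status\":\"") + 10;
        return json.substring(start, json.indexOf('"', start));
    }

    public static void main(String[] args) {
        // A captured response stored as a static fixture, rather than
        // generated on the fly by localstack or a mocking library.
        String fixture = "{\"status\":\"SHIPPED\",\"orderId\":\"1234\"}";
        System.out.println(extractStatus(fixture)); // prints SHIPPED
    }
}
```

Because the fixture is frozen, the test can never break due to a mocking library's behavior drifting from the real service.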
Mike also says that engineers have to wear two hats when writing software built for longevity: the experimentation hat ("how does this fit into the software I'm building?"), and the "built-to-last software" hat. They're two very different modes of thinking, and you have to write your tests with the production code in mind. Mike thinks that many people fall into the trap of doing some experimentation and then writing some code without switching those gears.
Integration tests are also important with serverless applications, given the different connectivity pieces and services communicating with each other. But it's also possible to simulate complex workflows locally using functional tests. In most cases, we just need to know: "If this Lambda function receives this event, does it do what it's supposed to do?" That's relatively easy with functional tests, which exercise how a bigger part of the application responds to the system around it, without actually running that entire system. The most important takeaway is to design and build your applications so that they can be easily tested.
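That question can be asked directly in plain Java. This is a hypothetical sketch (the handler and event shape are invented, and no AWS SDK types are involved): build an event by hand, call the handler, and check what it does.

```java
import java.util.Map;

public class FunctionalTestSketch {
    // Stand-in for a Lambda handler: greets the "name" field of the event.
    static String handleRequest(Map<String, String> event) {
        return "HELLO, " + event.get("name").toUpperCase();
    }

    public static void main(String[] args) {
        // Feed the handler a hand-built event and verify its behavior,
        // with no Lambda runtime or local emulator in the loop.
        String result = handleRequest(Map.of("name", "world"));
        System.out.println(result); // prints HELLO, WORLD
    }
}
```

Keeping the handler's logic callable as an ordinary method is what makes this kind of test cheap to write.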
Mythbusting Cold Starts
Mike shared a case where a client of his was writing Lambda functions in Java, and they were extremely concerned about cold starts. However, when they put it into production, everything was fine - something Mike observes happening pretty regularly. He says cold starts have the reputation for being scary because when you're in development with Lambda, every time you run your new Lambda function, you see a cold start. But really, they're just not as much of a problem as they used to be, especially now that Amazon has fixed the VPC issue.
Another thing we discuss is that the vast majority of our Lambda functions now run asynchronously, so the user isn't waiting on a response. In these cases, cold starts generally don't matter at all. Although the cold starts in Java are certainly higher than Node or Python, once a Java function is initialized, it's much faster than many of the alternative runtimes. If you're running high volume data processing, this faster performance could end up saving you a lot of money. Mike also added that if all you've ever used Lambda for is synchronous APIs, then you're missing about 95% of what Lambda is about.
The Caveats around Provisioned Concurrency
As much as Mike has tried to convince people that cold starts aren't a major problem, he still often hears the need for guaranteed latency. Even though he doesn't think most people actually need it, he's glad that AWS has provided the Provisioned Concurrency "escape hatch" so he can just point those people to that.
However, Mike admits it comes with some caveats. It's slow to deploy: in his experience, it took an extra 1.5 to 2 minutes to deploy a single Lambda function with provisioned concurrency enabled, and an extra four minutes for a function set to 50. It's also rather expensive. With regular Lambda functions, you only pay when they're used, but with provisioned concurrency you're always paying a flat fee plus execution time, so you have to be very aware of the economics. Finally, Mike says to make sure you don't use the same provisioned concurrency settings in development as in production. When using a SAM template, for example, the development and production configurations of a Lambda function are often identical aside from environment variables, so it's easy to end up paying for warm instances in every environment.
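One way to avoid paying for provisioned concurrency in development is to parameterize it in the template. This is a hypothetical SAM sketch (resource and parameter names are made up): dev stacks take the default of zero and skip the setting entirely, while prod passes a real value.

```yaml
Parameters:
  ProvisionedConcurrency:
    Type: Number
    Default: 0                     # dev default: no provisioned concurrency

Conditions:
  UseProvisioned: !Not [!Equals [!Ref ProvisionedConcurrency, 0]]

Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: com.example.Handler::handleRequest
      Runtime: java11
      AutoPublishAlias: live       # provisioned concurrency requires an alias
      ProvisionedConcurrencyConfig:
        !If
          - UseProvisioned
          - ProvisionedConcurrentExecutions: !Ref ProvisionedConcurrency
          - !Ref AWS::NoValue      # omit the setting entirely in dev
```

Prod then deploys with something like `--parameter-overrides ProvisionedConcurrency=50`, and dev stacks never accrue the flat fee.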
Caveats aside, Mike finds this "escape hatch" to be incredibly helpful. There's also the added benefit of using provisioned concurrency to pre-warm a pool of Lambda functions if you are anticipating a big traffic spike.
When to use (or not use) Custom Runtimes
Lambda comes with a number of standard Java runtimes, but AWS also gives users the ability to bring any custom runtime they want. As a result, a number of people have come along with specialized runtimes for different environments. Mike says you could even write your own custom runtime to try to solve some of these cold start issues.
If you're a large organization, you might want to use your organization's Java runtimes versus Amazon's. Mike suggests that another option is an alternative like GraalVM, which compiles your Java code ahead of time into something that doesn't run on a regular JVM, making your startup times significantly faster.
Mike also points out that if you use your own custom runtime, then you also have to maintain it. Updates happen automatically with AWS's standard Java runtimes, but with something custom, you'll likely need to do some testing when the underlying environment is updated.
Also, if you're trying to minimize all of that undifferentiated heavy lifting through serverless, you're defeating the purpose by maintaining your own runtime. If you're a big organization, this may make total sense because you can bake in security and other things you might want to do, but certainly for the average developer or company, it can be an overwhelming undertaking.
The Spring Framework
One of the things that's very popular with Java, especially when it comes to building APIs, is the Spring framework. AWS has invested a significant amount of time and energy into the Serverless Java Container project, which makes it easy for you to write Java Spring Boot projects with Lambda. Mike and John make it very clear in the book that this is a bad idea.
Mike explains that Spring is based on the idea that you'd bring in a whole bunch of things and dependencies at the startup of an application, then that application would serve requests over the course of days or weeks. But, as Mike says, those assumptions just don't make sense in a world of Lambda. Even if you take those assumptions out, there's still a cost to running Spring, and those costs come at cold start time.
He adds that Spring does a bunch of work at startup, using reflection to dynamically load code, and in Lambda that work is repeated at every cold start, so it just doesn't make sense. For Mike, the real problem is that Java developers using Spring will eventually hit a point where they realize Lambda wasn't designed to support this type of implementation. He says Java developers would be far more effective if they scrapped Spring and focused on the underlying function they're trying to write.
Multi-Module, Multi-Function Lambda Applications that Share Code
Mike and John outline a really interesting solution to create multi-module, multi-function serverless Java applications. Mike offers a scenario: if you have 10 different types of requests, consider having 10 different Lambda functions. This way, each Lambda function can follow the principle of least privilege and only access the things it needs. Plus, you can reduce cold starts because you're only loading up the code and the libraries that each Lambda function actually needs - and that makes a difference.
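Least privilege per function falls out naturally in a template like this hypothetical SAM sketch (the function and table names are invented): each function gets only the policy its one request type needs.

```yaml
Resources:
  ReadOrderFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: com.example.ReadOrder::handleRequest
      Runtime: java11
      Policies:
        - DynamoDBReadPolicy:        # read-only: this function can never write
            TableName: Orders

  CreateOrderFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: com.example.CreateOrder::handleRequest
      Runtime: java11
      Policies:
        - DynamoDBCrudPolicy:        # write access only where it's needed
            TableName: Orders
```

A single monolithic function would need the union of all ten functions' permissions, which is exactly what this pattern avoids.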
They show readers how Lambda functions can rely not only on external dependency libraries, but also on internal libraries shared across functions, all wired together with Maven. Maven has its drawbacks (it's XML, for one), but as far as Mike's concerned, its semantics for modeling dependencies are far more advanced than the equivalent tooling for any of the other main languages on Lambda. If you want to model dependencies, he recommends using something like Maven.
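The multi-module shape looks roughly like this hypothetical parent POM (module and artifact names are made up): a shared `common` module holds the internal library, and each Lambda function is its own module depending on it.

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>lambda-app</artifactId>
  <version>1.0.0</version>
  <packaging>pom</packaging>           <!-- aggregator, builds nothing itself -->
  <modules>
    <module>common</module>            <!-- shared internal library -->
    <module>read-order-lambda</module> <!-- one deployable function each -->
    <module>create-order-lambda</module>
  </modules>
</project>
```

Each function module declares `<dependency>` on `com.example:common`, so every function's deployment artifact bundles only the code and libraries that function actually uses.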
Serverless Gotchas
Mike and John have a really great "gotchas" section in the book, which is a must-read for the uninitiated. New serverless developers often get excited about everything they can do, and then all of a sudden hit a limitation that knocks the wind out of their sails. I think this is true of most cloud-native development, but Mike and John call out a few things specific to serverless.
Mike references at-least-once delivery as one of those things that can bite developers, especially those new to distributed systems. Amazon has gotten some flak here, which Mike considers warranted, as he thinks there should be a switch for this. Lambda generally guarantees that your function will be called; what it doesn't guarantee is how many times. Almost every time it's a one-to-one correspondence, but occasionally your Lambda function will be invoked more than once for the same event, which, if not properly dealt with, could lead to nasty side effects like double-billing a credit card.
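The standard defense is to make the side effect idempotent. Here's a minimal sketch of that idea (all names invented): record each event's ID before acting, and treat a redelivery as a no-op. An in-memory set keeps the example self-contained; a real function would use something like a conditional write to a durable store, since Lambda instances don't share memory.

```java
import java.util.HashSet;
import java.util.Set;

public class IdempotencySketch {
    // Stand-in for a durable idempotency store keyed by event ID.
    private static final Set<String> processed = new HashSet<>();

    // Performs the side effect only the first time an event ID is seen.
    static boolean chargeOnce(String eventId) {
        if (!processed.add(eventId)) {
            return false;  // duplicate delivery: skip the charge
        }
        // ... charge the credit card exactly once here ...
        return true;
    }

    public static void main(String[] args) {
        System.out.println(chargeOnce("evt-1")); // prints true  (first delivery)
        System.out.println(chargeOnce("evt-1")); // prints false (redelivery ignored)
    }
}
```

With this in place, at-least-once delivery degrades from a billing bug into a harmless duplicate invocation.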
Mike also discusses the impact of Lambda scaling on downstream systems, something I'm pretty passionate about (you might have heard me talk about this before). This is a big deal when building hybrid serverless, non-serverless systems. He uses the example of having a Lambda function in front of a SQL database. Mike says that one of the really awesome things about Lambda is that it will, by default, scale to 1,000 instances. On the flipside, one of the terrible things about Lambda when it's connecting to a SQL database, is that it will automatically scale to 1,000 instances! If you're not careful, that could take down your non-serverless infrastructure components.
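One blunt but effective guardrail is capping how far the function can scale. This hypothetical SAM sketch (function name invented) puts a hard ceiling on the number of concurrent instances a database-facing function can reach:

```yaml
Resources:
  DbWriterFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: com.example.DbWriter::handleRequest
      Runtime: java11
      ReservedConcurrentExecutions: 25   # hard cap: at most 25 parallel
                                         # instances, so at most ~25 open
                                         # connections to the SQL database
```

The trade-off is that invocations beyond the cap are throttled, so this works best on asynchronous paths where events can be retried rather than dropped.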
The book also points out that serverless is a different way of architecting systems. When you look at Lambda code, it looks like the same code that we've been writing for years. But when you're architecting for Lambda, you have to think differently. He thinks people don't necessarily realize that. Because the code is easy, we've now shifted the mental effort from the code to architecture. Mike encourages everyone to be an architect and to learn about the architectural trade-offs that come when building distributed systems. I totally agree.
I admit I was skeptical because of the title, but this book covers many of the fundamental serverless concepts that developers really need to know. If you happen to be developing in Java, then that's just a bonus. To listen to my conversation with Mike, check out our Serverless Chats episode, or click here to purchase a copy of the book.