5 Reasons Why Your Serverless Application Might Be A Security Risk
Does the amount of information on serverless security overwhelm you? I've boiled it all down to 5 best practices to help keep your applications safe.
There has been a lot of buzz lately about serverless security. People are certainly talking about it more and sharing great articles on the topic, but many serverless developers (especially new ones) are still making the same critical mistakes. Every time a serverless function is deployed, its unique security challenges need to be addressed. Every time. I've researched and written extensively about serverless security (see Securing Serverless: A Newbie's Guide). I've read countless articles on the subject. And while there is no shortage of information available, let's be honest: developers are busy building applications, not poring over hundreds of articles.
I know, it sounds boring, but I would encourage you to do your research on serverless security. Serverless applications are different from traditional, server-hosted applications. Much of the security responsibility falls on the developer, and not following best practices opens you (or your company) up to an attack. But I know you're busy. I totally get it. So rather than forcing you to read a bunch of long articles 😴 or watch a plethora of videos 🙈, I've whittled it all down to the five biggest serverless security risks for you. Sure, there are a lot of other things to consider, but IMO, these are the most important ones. Nothing here hasn't been said before. But if you do nothing more than follow these principles, your serverless applications will be much more secure. 🔒
So here are 5 reasons why your serverless application might be a security risk:
1. You're not sanitizing input from events 💩
Code and SQL injection should be old hat for web application developers by now. If you're just starting out as a developer, and you don't know what these are, stop reading right now and go look them up. In the traditional web app world, we tend to rely on WAFs (web application firewalls) and WSGs (web security gateways) to add an extra layer of security to our applications. While it's possible to use WAFs with HTTP event traffic in serverless applications, these traditional tools don't analyze input from the multitude of other events that trigger our serverless functions. SNS, SQS, CloudWatch Logs, and S3 uploads, to name just a few, are all new sources of exploits for attackers. Bottom line: inspect and sanitize EVERY piece of data coming into your apps, even if you "trust" the source. If you're still skeptical, read Event Injection: A New Serverless Attack Vector.
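To make that concrete, here's a minimal sketch of validating an SNS-triggered payload before it touches anything downstream. The event shape is the standard SNS record, but the order-ID rule and the processOrder helper are hypothetical, so treat this as an illustration rather than a drop-in handler:

```typescript
import { SNSEvent } from "aws-lambda";

// Whitelist the exact shape you expect; reject everything else.
const ORDER_ID = /^[A-Za-z0-9-]{1,36}$/;

export const handler = async (event: SNSEvent): Promise<void> => {
  for (const record of event.Records) {
    let payload: { orderId?: unknown };
    try {
      payload = JSON.parse(record.Sns.Message);
    } catch {
      console.error("Rejected record: message body is not valid JSON");
      continue;
    }
    if (typeof payload.orderId !== "string" || !ORDER_ID.test(payload.orderId)) {
      console.error("Rejected record: orderId failed validation");
      continue;
    }
    // Only validated data reaches downstream code (parameterized queries, etc.).
    await processOrder(payload.orderId);
  }
};

// Hypothetical stand-in for your business logic.
async function processOrder(orderId: string): Promise<void> {
  console.log(`processing order ${orderId}`);
}
```

The same idea applies to S3, SQS, CloudWatch Logs, or any other event source: parse defensively, validate against a whitelist, and drop anything that doesn't match.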
2. You're not scanning third-party dependencies for vulnerabilities 📦
We all use third-party packages in our applications. Regardless of the runtime or the repository you source them from, vulnerabilities are bound to creep in from time to time. Sometimes these vulnerabilities are mild and unlikely to cause any significant harm. But other times, a package itself is compromised, or one of its dependencies is. This could include someone embedding nefarious code or simply using out-of-date protocols. This is most likely to happen when packages are updated, so be sure to scan any packages you use, and then lock the version in your app. When a new version is released, scan it, and then update the version your app uses. You don't want your CI/CD systems automatically deploying new versions of compromised dependencies.
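To illustrate the "lock the version" part, here's a rough sketch (my own, not from any particular scanning tool) of a check you could run in CI to fail the build when package.json contains floating version ranges instead of pinned ones:

```typescript
import { readFileSync } from "fs";

// Merge runtime and dev dependencies from package.json.
const pkg = JSON.parse(readFileSync("package.json", "utf8"));
const deps: Record<string, string> = {
  ...(pkg.dependencies ?? {}),
  ...(pkg.devDependencies ?? {}),
};

// Simplistic check for ranges like ^1.2.3, ~1.2.3, *, or >=2.0.0.
const floating = Object.entries(deps).filter(([, version]) =>
  /^[~^]|[*<>]/.test(version)
);

if (floating.length > 0) {
  console.error("Unpinned dependencies found:");
  for (const [name, version] of floating) {
    console.error(`  ${name}: ${version}`);
  }
  process.exit(1); // fail the CI/CD step until versions are locked
} else {
  console.log("All dependencies are pinned to exact versions.");
}
```

Pair something like this with a real vulnerability scanner so that version bumps only happen after a clean scan, not automatically.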
3. You're not using an observability tool 👀
Serverless applications typically run in containers managed by your cloud provider. While this is great for those who want to take a hands-off approach to managing infrastructure, it also means that installing traditional agents and daemons to monitor OS-level activity like network traffic and memory usage isn't an option. The surge in serverless adoption has spawned an entire industry around serverless observability. Utilizing logging and using tools like AWS X-Ray are good first steps. But you should seriously consider using one of the newer tools emerging to give you deeper insight into what's happening during function execution. Some of these tools are Dashbird, IOpipe, Thundra, Stackery, Epsagon, Honeycomb, and Puresec. Even New Relic has tools for AWS Lambda now.
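If you're starting with X-Ray, a minimal sketch on the Node.js runtime might look like this. It assumes the aws-xray-sdk-core and aws-sdk (v2) packages, Active tracing enabled on the function, and a hypothetical TABLE_NAME environment variable:

```typescript
import * as AWSXRay from "aws-xray-sdk-core";
import * as AWS from "aws-sdk";

// Patch the AWS SDK so downstream calls show up as X-Ray subsegments.
const tracedAWS = AWSXRay.captureAWS(AWS);
const dynamo = new tracedAWS.DynamoDB.DocumentClient();

export const handler = async () => {
  // Latency and errors for this call appear on the function's trace.
  const result = await dynamo
    .get({ TableName: process.env.TABLE_NAME!, Key: { id: "123" } })
    .promise();
  return { statusCode: 200, body: JSON.stringify(result.Item ?? {}) };
};
```

The dedicated observability tools listed above go further, but even this level of tracing tells you where time is spent and which downstream calls are failing.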
4. You're not practicing the least privilege principle 🤦🏻♂️
This is one of the simplest things to overlook with cloud computing, but it may also be the most dangerous. As Wikipedia puts it, "every module must be able to access only the information and resources that are necessary for its legitimate purpose." A consequence of removing DevOps and InfoSec from the serverless deployment process is that developers often don't understand complex IAM roles and permissions. Creating an Action: "sns:*" rule is much easier than attaching permissions directly to a topic ARN and restricting the role's ability to create and destroy resources. The primary reason this is so dangerous is that access keys get leaked all the time. An over-privileged key could easily be used to spin up hundreds of EC2 instances (the horror stories are real 🙀). Make sure that you assign only the permissions a function MUST have. Even better, don't allow devs to create their own IAM roles. Have DevOps create a role for them and then grant access to services and resources as needed.
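If you define your infrastructure as code, scoping the role down only takes a few lines. Here's a minimal sketch using the AWS CDK v2 (the function name, topic ARN, and account number are made up for illustration):

```typescript
import * as cdk from "aws-cdk-lib";
import * as iam from "aws-cdk-lib/aws-iam";
import * as lambda from "aws-cdk-lib/aws-lambda";

const app = new cdk.App();
const stack = new cdk.Stack(app, "LeastPrivilegeStack");

const notifyFn = new lambda.Function(stack, "NotifyFn", {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: "index.handler",
  code: lambda.Code.fromAsset("dist"),
});

// Publish to exactly one topic: no wildcards, no create/delete permissions.
notifyFn.addToRolePolicy(
  new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["sns:Publish"],
    resources: ["arn:aws:sns:us-east-1:123456789012:order-events"],
  })
);
```

If that key (or role) ever leaks, the blast radius is one action on one topic instead of your entire account.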
5. You're not handling errors correctly 💥
We write code for serverless applications the same way we write code for any traditional application. We dump data structures, environment variables, and other bits of information to help us debug our applications while we're building them. For serverless applications, many developers like to test their functions in the cloud, especially when accessing other services that are hard to replicate locally. This is all part of the normal development process, but before these functions go into production, debugging information should be removed. Polluting your production logs with verbose error messages, JSON blobs, stack traces, and other debugging information not only makes logs less usable, but potentially exposes application secrets to anyone (or anything) with access to the logs. Even worse is returning this data through your callback and sending it to users. Attackers could use this information to steal secrets or find other vulnerabilities in your functions. 🕵️♂️
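A simple pattern that helps (this is just a sketch, assuming a Node.js handler and a hypothetical DEBUG environment flag) is to log a correlation ID plus a terse message in production and return only a generic error to the caller:

```typescript
import { randomUUID } from "crypto";

export const handler = async (event: unknown) => {
  try {
    const result = await doWork(event);
    return { statusCode: 200, body: JSON.stringify(result) };
  } catch (err) {
    const errorId = randomUUID();
    if (process.env.DEBUG === "true") {
      // Full details only when explicitly enabled outside production.
      console.error(errorId, err);
    } else {
      // Enough to find the failure later, nothing sensitive.
      console.error(JSON.stringify({ errorId, message: (err as Error).message }));
    }
    // The caller never sees stack traces, env vars, or raw data structures.
    return { statusCode: 500, body: JSON.stringify({ message: "Internal error", errorId }) };
  }
};

// Hypothetical stand-in for your business logic.
async function doWork(event: unknown): Promise<{ ok: boolean }> {
  return { ok: true };
}
```

The errorId gives you something to grep for in your logs without handing an attacker your internals.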
Okay, so what now?
Like I said, it would be a wise investment of your time to dig deeper into serverless security. As new tools emerge, and cloud providers add more features, some of this stuff will get easier. Until then, follow these best practices, share them with your team, and encourage others to keep security top-of-mind when developing serverless applications.