Hacker News
Launch HN: SST (YC W21) – A live development environment for AWS Lambda
165 points by jayair on March 2, 2021 | 88 comments
Hi HN, we are Jay and Frank and we are working on SST (https://github.com/serverless-stack/serverless-stack).

SST is a framework for building serverless apps on AWS. It includes a local development environment that allows you to make changes and test your Lambda functions live. It does this by opening a WebSocket connection to your AWS account, streaming any Lambda function invocations, running them locally, and passing back the results. This allows you to work on your functions without mocking any AWS resources or having to redeploy them every time you want to test your changes. Here's a 30s video of it in action — https://www.youtube.com/watch?v=hnTSTm5n11g

For some background, serverless is an execution model where you send a cloud provider (AWS in this case) a piece of code, called a Lambda function. The cloud provider is responsible for executing it and scaling it in response to traffic, and you are billed only for the exact number of milliseconds of execution.
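For readers new to the model, a Lambda function is just an exported handler that receives an event and returns a result. A minimal sketch (the types here are simplified for illustration; the real `aws-lambda` package provides full event and context typings):

```typescript
// Minimal Lambda-style handler: the cloud provider calls this function
// once per event and bills only for the milliseconds it runs.
// The event shape below is a simplified stand-in for an API Gateway event.
interface ApiEvent {
  queryStringParameters?: { name?: string };
}

export function handler(event: ApiEvent) {
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
}
```

In practice handlers are usually `async` and are wired to a trigger (an HTTP endpoint, a queue, a topic) rather than called directly.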

Back in 2016, we were really excited to discover serverless and the idea that you could just focus on your code. So we wrote a guide to show people how to build full-stack serverless applications — https://serverless-stack.com/#guide. But we noticed that most of our readers had a really hard time testing and debugging their Lambda functions. There are two main approaches to local Lambda development:

1) Locally mock all the services that your Lambda function uses. For example, if your Lambda functions are invoked by an API endpoint, you'll run a local server mocking the API endpoint that invokes the local version of your Lambda function. This idea can be extended to services like SQS (queues), SNS (message bus), etc. However, if your Lambda functions are invoked as a part of a workflow that involves multiple services, you quickly end up going down the path of having to mock a large number of these services. Effectively running a mocked local version of AWS. There are services that use this approach (like LocalStack), but in practice they end up being slow and incomplete.

2) Redeploy your changes to test them. This is where we, and most of our readers, eventually end up. You'll make a change to a Lambda function, deploy that specific function, trigger your workflow, and wait for CloudWatch logs to see your debug messages. Deploying a Lambda function can take 5-10s, and it can take another couple of seconds for the logs to show up. This process is really slow, and it also requires you to keep track of the functions that have been affected by your changes.

We talked to a bunch of people in the community about their local development setup and most of them were not happy with what they had. One of the teams we spoke to mentioned that they had toyed with the idea of using something like ngrok (or tunneling) to proxy the Lambda function invocations to their local machine. And that got us thinking about how we could build that idea into a development environment that automatically did that for you.

So we created SST. The `sst start` command deploys a small _debug_ stack (a WebSocket API and a DynamoDB table) to your AWS account. It then deploys your serverless app and replaces the Lambda functions in it with a _stub_ Lambda function. Finally, it fires up a local WebSocket client and connects to the _debug_ stack. Now, when a Lambda function in your app is invoked, it'll call the WebSocket API, which then streams the request to your local WebSocket client. That'll run the local version of the Lambda function and send the result back through the WebSocket API, and the _stub_ Lambda function will respond with the results.
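To make the flow concrete, here's a hypothetical sketch of the local client's half of that round trip. The names and message shapes are illustrative, not SST's actual internals: the debug stack relays an envelope describing the invocation, and the local client runs the real handler and returns the result for the stub to respond with.

```typescript
// Illustrative sketch only -- not SST's real message format.
type Handler = (event: unknown) => unknown;

interface Envelope {
  functionName: string; // which stub Lambda was invoked in AWS
  event: unknown;       // the original invocation payload
}

// Local client: look up the real handler for the invoked function,
// run it, and return the result to travel back over the WebSocket.
function handleDebugMessage(
  msg: Envelope,
  localHandlers: Record<string, Handler>
): unknown {
  const fn = localHandlers[msg.functionName];
  if (!fn) throw new Error(`No local handler for ${msg.functionName}`);
  return fn(msg.event);
}
```

The key property is that the handler executes on your machine against your latest code, while the trigger and the response both live in real AWS.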

This approach has a few advantages. You can make changes to your Lambda functions and test them live. It supports all the Lambda function triggers without having to mock anything. Debug logs are printed right away to your local console. There are also no third-party services involved. And since the _debug_ stack uses a serverless WebSocket API and an on-demand DynamoDB table, it's inexpensive, and you are not charged when it's not in use.

SST is built on top of AWS CDK; it allows you to use standard programming languages to define your AWS infrastructure. We currently support JavaScript and TypeScript. And we'll be adding support for other languages soon.
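As a rough illustration of what defining infrastructure in code looks like here (the construct names below are based on SST's docs; treat the details as an approximation rather than a definitive API reference):

```typescript
import * as sst from "@serverless-stack/resources";

// An SST stack: infrastructure defined in TypeScript instead of YAML.
export default class MyStack extends sst.Stack {
  constructor(scope: sst.App, id: string) {
    super(scope, id);

    // One construct wires up API Gateway + Lambda:
    // each route maps to a handler exported from your source tree.
    new sst.Api(this, "Api", {
      routes: {
        "GET /notes": "src/list.main",
        "POST /notes": "src/create.main",
      },
    });
  }
}
```

Because SST sits on CDK, you can also drop down to any raw CDK construct inside the same stack.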

You can read more about SST over on our docs (https://docs.serverless-stack.com), and have a look at our public roadmap to see where the project is headed (https://github.com/serverless-stack/serverless-stack/milesto...).

Thank you for reading about us. We'd love for you to give it a try and tell us what you think!




This is amazing. Congratulations! I've been thinking along similar lines and so happy this exists.

Have you considered extracting the lambda portion into a lambda extension so that you can deploy this as part of any cdk/cloudformation stack or standalone lambda and toggle on/off local debugging based on some registration in dynamo (e.g. store a predicate applied to the event and proxy to local if the predicate evaluates true)? This would allow something like this to be hugely useful in mitigating outages.

I didn't catch it in the repo yet, but if you're proxying local AWS calls through the Lambda to allow quickly catching IAM and security group/routing issues, that would be incredible.

Happy to help with this stuff and will definitely be a user!


Thanks!

I hadn't thought of using something like this for mitigating outages. Do you mind elaborating on that a bit more?

Currently there isn't a simple way to switch off the debug mode. `sst start` deploys the Lambda in debug mode, and `sst deploy` disables it by putting the original Lambda back. But your idea of switching it on and off should work!

For the IAM portion, we don't use the IAM credentials of the local machine while executing the Lambda; we use the credentials of the original Lambda. This means you are testing against the right permissions.

Appreciate the feedback! If you have any questions or are curious about the internals, I'd love to hear from you: jay@anoma.ly


Sorry, more investigation prior to mitigation than mitigation itself. I think moving the Lambda aspects of SST to a Lambda extension would allow for a workflow like this:
1. Find a production issue.
2. Replicate the issue in user-facing tools.
3. Run a local `sst start <fn> --filter '.userId == "my-user-id"'` to have the remote <fn> connect to my local machine for any requests that match `evt.userId == 'my-user-id'`.
4. Replicate the issue and debug with the request sent to this particular Lambda, potentially many calls away from the user action.
5. Patch and deploy.

Being able to debug in situ would be fairly incredible in a complex application.

None of that is meant to take away at all from what you've built. What you have here is really awesome and I'm excited to use it.


Not at all. I really like that idea. It makes a ton of sense. I think we'll try and get something like this on the roadmap.

Appreciate the feedback!


As usual, to those who are not already invested in the ecosystem, it looks so strange to see people popping champagne corks over getting back basic development functionality we had in 1995.

Are the benefits of things like Lambdas really so great that everyone gave up such basic functionality to test and develop their code?


I'm a heavy user of Lambda. Our primary stack is Heroku+Rails, but one of our workflows allows students to upload DOCX files where we do some heavy processing (2-3 seconds per file). This wasn't a great fit for Rails, fabulous fit for serverless.

And yes, I enormously miss the basic functionality I had in the webdev ecosystem. Developing on Lambda was very difficult by comparison.


Yeah, this is exactly the way we felt. While SST's internals might be complicated, from a user's perspective you get a "normal" local development experience.


I don't work with Lambda regularly, but I think, yes - basically.

That's even more evident for simple stuff that doesn't need debugging on the logic, but benefits from the Lambda features in terms of scaling etc.

Not mentioned is that your unit tests should be getting you most of the way...


If you’re still writing unit tests consider using a strongly typed functional language instead. Typescript will get you most of the way there. At some point you’ll realize all the assertions you’re making are already covered. Integration tests are the only valuable tests in my experience, using something like puppeteer, playwright, or cypress that automatically runs as part of a build pipeline.


Until it breaks in production and I'm deploying new versions to get extra logging added so can I try and work out what's gone wrong.


We personally do, we've been using Lambdas and serverless for the past 4 years. And we've never had to worry about scaling or downtime. We are a small team, so this has a real impact on us.


Wow. I tend not to comment much on HN, but this is another level. I'm trying it out now — if it works like the video, you've unlocked days of my time over the next year.

Really excited about this and it has been a LONG time coming.


Thanks! SST takes a little bit to start up when you run it the first time. But it's pretty fast after that.


How does this handle dependencies? For example, if I'm working on a Python project, I need to run a pip install. I will give you money if you can automate getting those Python dependencies up into a Lambda function.

If you can do this reliably, and pass corporate vetting, you easily have a product worth millions.

On the other hand, if this gets popular Amazon can easily just clone the product and integrate it directly. You'll know it's a good idea then.

Edit: MIT! Awesome, thanks for the donation to the open source community. If I ever use this in a real project I'll go ahead and donate a few bucks.


Please forgive me if I am misunderstanding the problem, but we approach this using Docker now that Lambda supports container images for packaging. Our process is to just put all that sort of stuff in the Dockerfile, run docker build, push the result up to ECR, and point Lambda to the new version.


Lambda layers is another option.


We'll be looking at Python in the coming weeks, currently it's JS and TS only.

But the way this works locally (for Node) is that you do an npm install just as you would, and it executes your application locally. The packaging part comes into play when deploying these functions to AWS. And I agree that portion, even for Node, isn't bulletproof. It's something we want to do a better job of.


Wait, so you automatically deploy the Lambda dependencies for me? The other issue I run into is that occasionally you have to split npm packages into different Layers when you deploy Lambdas. I don't have time personally to contribute, but I hope you're able to develop this into something that becomes a part of my workflow.


So the `sst start` command fires up a local environment but it doesn't deploy your functions. Instead, it'll run them locally when they get invoked.

But when you `sst deploy`, we'll package your functions. To do this we use esbuild (https://esbuild.github.io); it's like Webpack but 10x faster. It'll generate a single JS file that should be fairly small, so you shouldn't have to use Layers.

However, this isn't bulletproof. There are some dependencies that are not compatible with esbuild/Webpack, and you'll end up having to zip them up as a directory. That's something we are going to work to improve in the future.
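For what it's worth, that bundling step can be sketched with esbuild's build API. This is an illustrative build script, not SST's actual packaging code; the flags follow esbuild's documentation:

```typescript
// Hypothetical build script: bundle a handler into a single file,
// which usually keeps the package small enough to skip Lambda Layers.
// Assumes the esbuild package is installed.
import { buildSync } from "esbuild";

buildSync({
  entryPoints: ["src/lambda.ts"],
  bundle: true,            // inline node_modules dependencies
  platform: "node",
  target: "node12",        // Lambda's Node.js runtime at the time
  external: ["aws-sdk"],   // already present in the Lambda runtime
  outfile: ".build/lambda.js",
});
```

Dependencies with native binaries or dynamic requires are the ones that tend to break this approach, which is why the zip-a-directory fallback still exists.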


Not to downplay Serverless Stack, but the AWS CDK does this for you: https://docs.aws.amazon.com/cdk/api/latest/docs/aws-lambda-p...


Not at all! SST is built on CDK. So anything you can do in CDK, will also work in SST.


You can do 99% of it with `pipreqs . && pip install -t . -r requirements.txt`.


I don’t want to be that guy, but this sounds incredibly complicated. What happened to spinning up a local environment with “docker compose up”? At this point I need a i) SaaS to deploy lambdas ii) SaaS for live-testing lambdas.

It’s a cool business though - selling shovels.


Yeah, the internals of the `sst start` command are a little complicated. But it only needs to set that up initially; from that point on it simply connects to the WebSocket API.

SST is open source and runs on your AWS account. We don't charge you for it and it's most likely going to be within the free tier on AWS.


Nah, I have my permissions managed as IAM roles on the Lambda. It becomes way more complex to do something than to just `bin/push` up my lambdas and test in "almost prod".


Though it seems like a decent approach, I find it sad that this is necessary, since this slows down dev cycles. If AWS services were built on open standards, it would be easy to write local emulators and this whole problem would be fixed. Furthermore, if people avoided the nanoservice/serverless architecture the issue would be much less severe.


It is only necessary if you choose "serverless" (somebody else's servers). Hey (the email service) has what should be an ideal use case for it but they're using Kubernetes instead.


I partly agree that it's weird that it takes something complicated to solve local development. That said, we've explored a wide range of options over the past few years, and the local emulation route can be really hard to set up and ends up being pretty slow.

I've personally gone back and forth on the idea of purely local vs cloud based development. And fwiw, SST strikes a very interesting balance between the two. You get the feel of a local environment, while still connecting to every cloud resource.


The idea sounds cool. No servers to manage. I don't know what to think though. A true language framework for backends is just so much better for me.

There's Apache OpenWhisk and OpenFaaS, which are open-source serverless platforms.


BRAVO! Just tried it locally and it's like driving a Tesla for the first time.


Thank you, that's awesome to hear!


How does this differ from 'serverless invoke local'?

I use that model and it's pretty good. I can output local log lines and see them on the console.


Yeah `sls invoke local` works well for testing individual functions where you are mocking the specific inputs or triggers. But there are a couple of cases where it can become hard to use. For example, if you want to test a Lambda function that's invoked by an API that needs authentication. Or if Lambda A publishes to an SNS topic that invokes Lambda B and you want to test the entire flow.

To fix this SST will deploy your entire stack, and let you trigger the workflow live. So you can hit the endpoint just as your frontend would, or trigger the entire SNS workflow. Except all the Lambda function invocations will be streamed to your local machine and the responses will be sent back.

This approach allows you to get the best of both worlds. The speed of `sls invoke local`, without having to mock anything.


> This approach allows you to get the best of both worlds. The speed of `sls invoke local`, without having to mock anything.

But I have to deploy stub functions and have a different deployment model for prod. :) So there are definitely tradeoffs.

I like this idea and will certainly try it out!

What I was really hoping for is someone who could figure out a way to launch a terminal in Lambda so that I can modify my functions in place and view the logs in place, and then when I'm done, save my work to git, like I used to be able to do when I had a fleet of servers and could just log onto one of them in prod and muck around with logs (and yes I know this is a bad idea but it's also how things get done quickly).


Awesome!

While this doesn't quite do the edit-in-place, and I wouldn't recommend running it in prod with live traffic, we do use it to run some scripts against prod. You get an environment where you can make changes quickly, as opposed to having to deploy your function to run them.


Not sure if you noticed this (https://news.ycombinator.com/item?id=26323888), but it's similar in spirit to the Lambda terminal idea.


SST is amazingly great! Being able to develop serverless applications with live development support while also accessing the proper AWS runtime is such a game changer!

Very developer-friendly founders as well, being ever helpful over Slack and more.

Highly recommended!


Appreciate the support Pal! All your feedback has been really helpful.


In my opinion this defeats the purpose of serverless. Call me a shitty engineer or a hacky dev, but I see the purpose of serverless as quick iterations: get your idea out as fast as you can, develop in production mode, the way people did in the Google App Engine days. I am not sure why people don't realise that in a lot of ways the whole serverless infra, from a business-value standpoint, is very similar to Google App Engine, which orgs try to get out of once they outgrow it. Building a serious CI/CD or dev stack is a waste of precious resources that startups or the mid-market should not invest in.


The interesting thing is that it lets you develop against your real infrastructure. This speeds up your iteration cycle drastically when compared to the other serverless tools.

But I do feel your pain, building a CI/CD for serverless is quite annoying and is a big waste of time for startups. I would not recommend it.

And quite frankly, serverless as an ecosystem needs to improve before it can compete with App Engine when it comes to developer experience. We care a great deal about that and it's what we are working on.


This is really cool! A teammate of mine built something like this for a hackathon once, but far less advanced.

How do you think this compares to using a more pure local solution, like localstack? https://github.com/localstack/localstack


That's cool to hear! I'd love to see that implementation if possible.

LocalStack is a good initiative but in practice it ends up being slow. As a result, we see people using it for writing tests instead of local development.


You think the 5-10 second deployment cycle is rubbish? You should try Google Cloud Functions! They're 3-4 minutes! Yay!


With an unmatchable cold start of 10s every time you load it!


I love how it will block the second request for ten seconds while it spins up a new container, even though your previous request was completed 9.999 seconds ago and that container is sitting unused. Well, that’s probably not true. It’s probably busy being torn down on 9 seconds of inactivity...


That's not good! We want to add support for GCP and Azure as well.


I've been migrating a small Serverless Framework project over to SST and I've been loving it. The team's commitment to clear and useful documentation is second to none. The support they provide in various channels is top notch.

Wonderful tooling, I've been enjoying it quite a bit. Keep it up! I'm excited to see the feature set grow.


Appreciate it!

After having struggled with all the confusing serverless related content out there, the docs are a big point of emphasis for the project.


How do you think about SST compared to projects like arc.codes/Begin.com in the long run? Do you want to expand to do a lot more around serverless? I always think about Rails vs Sinatra, and how both can make you happy :) The demo gave me Sinatra vibes in that sense (minimal code, no boilerplate generators, it just works)


That's a really interesting comparison. The part of SST that I didn't get a chance to touch on in the post is the higher level constructs. These give you more of that framework type feel.

Currently we do APIs, queues, cron jobs, and pub/sub. We won't force people to use these but we'll be adding support for more serverless use cases as we go along.


Been using this for a few months and it has been excellent! Highly recommend this for full stack serverless deployment


Thank you! Glad to hear that it's been working well.


I really don't want to knock the work y'all have put into this, but it feels like an antipattern to develop this way (serverless functions in general). But I suppose many people could benefit from this. Best of luck!


I'm curious, why do you think this is an antipattern?


The fact that an elaborate work around is needed for a dev environment - something that should be very simple.


And the vendor lock in.


Ah, I see what you are saying. Not that this approach is an antipattern compared to other serverless setups, but that serverless in general is an antipattern.

I can see where you are coming from. What we are doing with SST does seem very complicated just to get a good local development setup.


Forgive my ignorance but is this similar to SAM, from their docs: > 'Use SAM CLI to step-through and debug your code. It provides a Lambda-like execution environment locally and helps you catch issues upfront. '


Yeah, it's a little confusing to describe the difference. But what most of the tools currently do is locally mock a Lambda function execution. You cannot test anything beyond that.

For example, say your Lambda function publishes to an SNS topic that invokes another Lambda function. You can only test this locally by mocking the inputs for the first one, testing it, and repeating the process for the second one.

With SST, it deploys this stack to AWS. So when you trigger this workflow, both the Lambda functions get invoked on your local machine. You don't need to mock anything. It is as if they were invoked live.
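To make the chained workflow concrete, here's a toy in-memory model of the fan-out being described. This is purely illustrative (real SNS delivers messages asynchronously and at-least-once); it just shows the shape of "one function publishes, the topic invokes every subscriber":

```typescript
// Toy in-memory stand-in for an SNS topic, for illustration only.
type Subscriber = (message: string) => void;

class ToyTopic {
  private subscribers: Subscriber[] = [];

  subscribe(fn: Subscriber): void {
    this.subscribers.push(fn);
  }

  publish(message: string): void {
    // Real SNS fans out asynchronously; this toy calls subscribers in order.
    for (const fn of this.subscribers) fn(message);
  }
}
```

With SST's live setup, both subscriber functions in a chain like this run on your local machine while the topic itself is the real deployed resource, so the whole workflow is testable end to end.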


Absolutely brilliant.

I've had some issues with a small existing lambda, so I decided to re write it using sst.

It works like a charm, and I'm highly impressed with the project

Good luck


Hey guys, this is really great! This is a need that I continue to struggle with using AWS CDK.

Do you have any plans to support AppSync?


Thanks! We'd love to cover as many cases as possible. Feel free to open an issue and we'll add it to our roadmap!


What made you pick the websocket connection over something more like a pure TCP connection or something like ngrok?


Yeah it's a good question. The serverless WebSocket API allows us to keep the debug stack entirely serverless. This makes it really cheap (basically free) and it's very reliable because we don't have to manage starting and stopping the servers.


AWS Lambda can run via a custom Docker image now. I use this strategy for deployments and local testing. The official Docker images have a built-in emulator, the RIE (Runtime Interface Emulator), for testing locally. It even estimates billing.


Using these Docker images works well to emulate the Lambda environment for sure. It just falls short when I want to trigger that through, let's say an SNS topic or an authenticated API that my frontend application can connect to.


Good luck with this. It solves the most annoying problem with serverless development.


Thank you!


Very interesting. It’s open source so what’s your business model?


Yeah this is a framework that's completely for local development.

We have a separate service that's a CI/CD pipeline for serverless apps. We use a SaaS model there — https://seed.run. It supports SST out of the box. But it also supports Serverless Framework, the other popular option out there.


Congrats Jay and Frank! It's been fun watching your progress and this certainly feels like the natural product evolution for you. Best of luck in the future!


Thanks Nick! Appreciate it.


I absolutely love the idea of this tool but all my Lambdas are Python or Golang. I'll watch your repo for progress on that front!


Yup, we'll be working on it soon. Interestingly, we get more requests for Go than for Python!


That makes sense to me. The Python packaging story is annoying. Much prefer Go with the single binary. Easy deploy!


Your project looks fairly similar to Chalice but using TS. Am I correct? How do you compare yourselves to Chalice?


The big difference is that Chalice (from what I remember) runs a local server where you test your functions and once you are ready, you'll deploy it to AWS.

This deploys it to AWS first, and when a function gets invoked remotely, streams that request to your local machine where it'll execute that function and send the results back to AWS. You basically get to test and develop against your real infrastructure.


Maybe I'm missing something, but what's the difference between this and Zappa?


The key difference here is that Zappa and the other frameworks emulate the API (or SNS, SQS) service on your local machine. This means that if you want to test an API that requires authentication or if your Lambda function is triggered by some other async event, you cannot test it locally. You'll need to deploy it first.

SST on the other hand, deploys your app and when a deployed Lambda function gets invoked remotely, it'll stream that request to your local machine, execute it, and send the results back to AWS. So you get the advantage of working against real infrastructure while still making changes locally.

The video helps illustrate it a little better — https://www.youtube.com/watch?v=hnTSTm5n11g


Is there somewhere I can subscribe to be notified when Python support is ready?


You could watch our repo — https://github.com/serverless-stack/serverless-stack. Our Twitter account could work also — https://twitter.com/ServerlesStack


Right now it looks like a MIT project. What's your monetisation strategy?


Yup, we have a separate SaaS product for teams, a CI/CD pipeline for deploying serverless apps. It supports SST and Serverless Framework.

https://seed.run


Debugging Lambda is a pain. Your site is amazing, so clear and easy to follow.


Appreciate it! Yeah we really want to fix this, given how much time we spend working on it!


How does SST compare to Serverless Framework?


There are a couple things that SST does differently.

- The Live Lambda Development environment that I talked about in the post. That's really the biggest difference.
- And instead of CloudFormation YAML, SST uses CDK. So you can use regular programming languages to define your infrastructure.


Does this work with Greengrass?




