RigD Developer Journey into Serverless Architecture – Part 1
RigD started like most other startups: a few guys sitting around a coffee shop talking about a pretty neat way to solve a big problem in the market. Armed with $2,000 and nothing but nights and weekends, we started building our masterwork. Little did we know that a few limitations would cause us to throw away months of work, and be happy we did!
All of us at RigD have experience with microservices-based applications that leverage containers. So, we went forth with confidence on our architectural plan and developed a set of containerized microservices. Things were going pretty well as we each got our own services built and working on our dev machines. That's about as far as our smooth sailing went.
Our Container Architecture
As we started bringing things together in AWS, we ran into some problems. The first was our bank account. Once we got everything up and running, we realized we needed a fairly significant set of servers and services to support our end-to-end app. While AWS is far cheaper than buying a rack of servers and equipment, it still cost money we didn't have.
Here is a quick look at our monthly AWS menu:
- 3 r3.large instances running as a container cluster at $120 per month each
- 2 Application Load Balancers (one internal, one external) at $20 per month each
- 2 cache.r3.large ElastiCache nodes for config data at $165 per month each
- 2 db.m4.large RDS nodes for the database at $130 per month each
- 1 TB of data storage at about $115 per month
- Extra data transfer fees of about $10 per month
Our barebones v0.001 architecture was costing us around $1,115 per month. That didn't seem too bad relative to the bills we had seen elsewhere, but it wouldn't fly with our funding situation. We explored dropping to the free tier of every service we could, but frankly that would not adequately run our services. We then tried turning everything off when we weren't working on it. However, this compounded the other major issue we were facing: we were not moving fast enough with the limited hours we had. We were at a point where updates were essential and required some level of reconfiguration of the deployment, and there was no way we could afford the extra lost time of starting and stopping everything for each working session.
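For the curious, here is the math on those line items as a quick Python sketch; the labels are ours, and the prices are the rounded figures from the list above:

```python
# Monthly AWS line items (USD, rounded as in the list above).
costs = {
    "r3.large container cluster (3x)": 3 * 120,
    "Application Load Balancers (2x)": 2 * 20,
    "cache.r3.large config nodes (2x)": 2 * 165,
    "db.m4.large database nodes (2x)": 2 * 130,
    "1 TB data storage": 115,
    "data transfer fees": 10,
}
print(sum(costs.values()))  # -> 1115
```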
We started the move to serverless with a temporary mindset: maybe there were a few things we could do with serverless to save some costs and speed up development. We started by looking at the UI. What had been a Python Flask app was refactored into an S3-served React app with API Gateway and Lambda for the backend. React made it easy to get a functional configuration UI going, and all we needed from the backend was simple CRUD operations on a DB record, as sketched below. We also made the switch from RDS to DynamoDB, and after using DynamoDB for a while we found the performance was good enough that we cut out ElastiCache and used DynamoDB for config data too. At that point we decided to start exploring moving some of our backend services into Lambda.
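To give a flavor of how small that backend became, here is a minimal sketch of the kind of Lambda handler that sits behind an API Gateway proxy integration and does CRUD on a DynamoDB table. The table name, the `id` key, and the routes are hypothetical stand-ins, not our actual code:

```python
import json
import boto3

# Hypothetical table name; ours held UI configuration records keyed by "id".
TABLE = boto3.resource("dynamodb").Table("config")

def handler(event, context):
    """Minimal CRUD handler behind an API Gateway proxy integration."""
    method = event["httpMethod"]

    if method == "GET":
        key = event["pathParameters"]["id"]
        item = TABLE.get_item(Key={"id": key}).get("Item")
        # default=str keeps json.dumps happy with DynamoDB Decimal values
        return {"statusCode": 200 if item else 404,
                "body": json.dumps(item or {}, default=str)}

    if method in ("POST", "PUT"):
        body = json.loads(event["body"])
        TABLE.put_item(Item=body)  # upsert on the "id" key
        return {"statusCode": 200, "body": json.dumps(body)}

    if method == "DELETE":
        TABLE.delete_item(Key={"id": event["pathParameters"]["id"]})
        return {"statusCode": 204, "body": ""}

    return {"statusCode": 405, "body": "method not allowed"}
```

With the React app served as static files from S3, this one small function was the entire backend for the configuration UI, which is what made the "temporary" experiment so convincing.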
Stay tuned for part 2 of our journey.