Microservices for Startups: An Interview with Darby Frey of Gamut


Jake Lumetta

Posted on January 29, 2018


This interview was done for our Microservices for Startups ebook. Be sure to check it out for practical advice on microservices. Thanks to Darby for his time and input!

Darby Frey

Darby Frey is the Senior Engineering Lead - Platform at Gamut. Gamut focuses on quickly delivering industrial products and provides customers with a fast search engine that helps them identify the right product.

For context, how big is your engineering team?

Our engineering team is roughly 30 folks right now, but we started the year with fewer than 10, so we've grown the team pretty significantly over the year.

Did you start with a monolith and later adopt Microservices?

When we started, we made a conscious decision not to build a monolith, based on experiences from previous companies, the anticipated scale, and where we thought we were headed given our parent company. We could envision scenarios where we would quickly outgrow a monolith and end up having to pull apart a thing we had just built.

So we decided we would start with a few larger services instead of many microservices. What that forced us to do was to think about infrastructure and deployment differently than if we had started with a monolith. So not only did we build out our services, but we also focused on building an API gateway, a CI/CD pipeline, and all the related infrastructure to deploy multiple apps.
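At its core, an API gateway like the one described here mostly does path-based routing to the services behind it. The sketch below shows that idea in plain Ruby using WEBrick; the routes, ports, and GET-only forwarding are illustrative assumptions, not Gamut's actual setup.

```ruby
# Minimal sketch of path-prefix routing, the core job of an API gateway.
# Assumes the webrick gem; service names and ports are hypothetical.
require 'webrick'
require 'net/http'

ROUTES = {
  '/orders'  => 'http://localhost:3001',
  '/catalog' => 'http://localhost:3002',
  '/search'  => 'http://localhost:3003'
}.freeze

server = WEBrick::HTTPServer.new(Port: 8080)

server.mount_proc '/' do |req, res|
  _prefix, upstream = ROUTES.find { |prefix, _| req.path.start_with?(prefix) }
  if upstream
    # Forward to the service that owns this route (GET only, for brevity).
    up = Net::HTTP.get_response(URI.join(upstream, req.path))
    res.status = up.code.to_i
    res.body   = up.body
  else
    res.status = 404
    res.body   = 'no service owns this route'
  end
end

trap('INT') { server.shutdown }
server.start
```

A real gateway would also handle authentication, retries, and non-GET methods, but the routing table is the piece that has to exist from day one when you deploy multiple apps.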

How did you approach the topic of microservices as a team/engineering organization?

Many of the folks on the team when we started had some sort of experience with service architectures, which I think was actually really helpful. Everyone had some sort of story like, "Yeah, we screwed this up over here before and I don't want to do that again."

We had all learned from our first time around, so when it came time to do it again, we had a lot more information to start with. We understood that there is a big infrastructure cost to doing this, and you can't discount how much effort is involved in deploying multiple apps. Then there are issues around how you create dependencies between services. Very tightly coupled services mean you can't really do anything in one service without the other service being available, which makes local development challenging and makes it difficult to isolate the thing you're working on. We had already experienced a number of these issues, so they informed the rules we made around how we architected our services.

Did you change the way your team(s) were organized or operated in response to adopting microservices?

The pattern of "a single team is responsible for a particular service" is something you see a lot in service architectures. We don't do that, for a couple of reasons:

The primary business reason is that we want teams that are responsible not for specific code but for customer-facing features. A team might be responsible for order processing; that will touch multiple codebases, but the end result for the business is that one team owns the whole thing end to end, so there are fewer cracks for things to fall through. The other main reason is that I think people can have more ownership of a feature or a part of the business this way. They can actually think about it holistically.

How much freedom is there on technology choices? Did you all agree on sticking with one stack or is there flexibility to try new? How did you arrive at that decision?

At my last company we had a lot of services and a fairly small team, and one of the main things that made it work, especially for the team size we had, was that every app was the same. Every backend service was a Ruby app. It had the same testing framework, the same database backend, the same background job processing tool, the same message bus. Everything was the same, so when an engineer jumped between apps they weren't having to learn a new pattern or a new language each time. So we're very aware and very strict about keeping that commonality.

From time to time there are people who want to introduce a new language, and I love the idea of trying new things, but having a polyglot architecture can increase the development and maintenance costs. If it's just all the same, you can focus on business value and business features and not have to be super siloed in how your services operate.

I don't think everybody loves that decision, but at the end of the day when they have to fix something on a weekend or in the middle of the night, they appreciate it.

How do you determine service boundaries?

At the end of the day, it's really all about the data.

If you need to be joining between two tables in a database pretty frequently, then those should probably live next to each other. If the access patterns are less frequent, then that could be a candidate for separation.

Does Table A need to know what's in Table B all the time? If so, then just put them in the same service; otherwise you're making network requests to get that data, which creates a new set of challenges. If it's not that way, if Table A does its own thing and Table B does the same, and maybe they come together every so often, then those could be independent services.
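To make the trade-off concrete, here is a small Ruby sketch contrasting the two situations described above: tables that live in the same service can be joined locally, while tables split across services turn every lookup into a network call. The table names, service URL, and helper are hypothetical.

```ruby
require 'net/http'
require 'json'

# Option 1: orders and customers live in the same service, so the question
# "what's the customer email for this order?" is a single local SQL join.
SAME_SERVICE_QUERY = <<~SQL
  SELECT orders.id, customers.email
  FROM orders
  JOIN customers ON customers.id = orders.customer_id
  WHERE orders.id = $1
SQL

# Option 2: customers live in a separate service, so the same question
# becomes an HTTP call with its own latency and failure modes, and it
# requires the other service to be running during local development.
CUSTOMER_SERVICE_URL = 'http://customers.internal:3000' # hypothetical

def customer_for(order)
  uri = URI("#{CUSTOMER_SERVICE_URL}/customers/#{order[:customer_id]}")
  JSON.parse(Net::HTTP.get(uri)) # real code would add timeouts and retries
end
```

If that second call would happen on nearly every request, that's a strong hint the two tables belong in the same service.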

What was that discussion like within your team?

As bigger features come up, we'll pause and have a conversation. We talk through the data requirements and the expected access patterns. We look at the current services we have in place, try to determine whether the new feature should live in one or more of those places or be its own thing, and make a call from there.

What lessons have you learned around sizing services?

You can definitely make them too small. That's maybe the thing I disagree with most about a lot of the approaches to microservices.

I've heard things like every service should be just one database table and as small as possible, or only one API endpoint, things like that. That goes back to the Unix philosophy of minimalist programs that you can chain together to do things. I see that approach and I see how it can work on a single machine, but when you actually have to stand up infrastructure to support it, it becomes way more time-consuming and costly.

Operationally, every app needs to have its logs aggregated somewhere, and needs to be monitored and have alerting. You need to have standard operating procedures and runbooks for when things break. You have to manage SSH access, and so on. There's a huge foundation of things that have to exist in order for an app to just run. And if you're doing that for one table and one API endpoint, it can be very costly. So I think of them as much bigger than that.

But I also think about how you keep context of a thing together in one codebase. It goes back to the boundaries that we were talking about before. Does this data operate on itself, or can it live in isolation? Maybe that thing can live in isolation and it's just one table that doesn't need anything else. But if its access pattern is also very low, if it gets hit three times a day, it just doesn't make sense to spend the operational effort to make it its own thing.

How have microservices impacted your development and ops processes?

Having multiple services definitely increases complexity for teams building features. To help with that we built a home-grown service-runner type of tool that does all the legwork for you: it gets all of the code, installs the dependencies, creates and seeds the databases, and then boots up all of the services in a predetermined way. So you can just run a command and have everything you need.

This works pretty well, but it took some time to build, which is an investment we had to make to support this architecture. There is an ongoing maintenance cost as well: as the architecture changes, we need to keep the tool in sync.
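A tool like that can be surprisingly small at its core. The Ruby sketch below shows the general shape: read a manifest, fetch each repo, install dependencies, set up the database, then boot everything in order. The manifest format, repo URLs, and per-service commands are assumptions for illustration, not the actual Gamut tool.

```ruby
#!/usr/bin/env ruby
# Sketch of a service-runner script: fetch code, install dependencies,
# seed databases, then boot every service. All names are hypothetical.
require 'yaml'

SERVICES = YAML.safe_load(<<~YAML)
  orders:
    repo: git@example.com:acme/orders.git
    port: 3001
  catalog:
    repo: git@example.com:acme/catalog.git
    port: 3002
YAML

def run!(cmd, dir: '.')
  puts "-> #{cmd} (in #{dir})"
  system(cmd, chdir: dir) or abort("command failed: #{cmd}")
end

SERVICES.each do |name, cfg|
  run!("git clone #{cfg['repo']} #{name}") unless Dir.exist?(name)
  run!('bundle install',     dir: name) # install dependencies
  run!('bin/rails db:setup', dir: name) # create and seed the database
end

# Boot every service in a predetermined order, one process per service.
SERVICES.each do |name, cfg|
  spawn({ 'PORT' => cfg['port'].to_s }, 'bin/rails server', chdir: name)
end
Process.waitall
```

The ongoing cost shows up in that manifest: every new service, port, or setup step has to be added here, or the tool drifts out of sync with the architecture.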

How have microservices impacted the way you approach testing? 

Testing microservices is not easy. In a monolith you can just spin up the app, run the tests, and you're good. With services there's definitely a lot more effort that goes into setting up an environment that will allow you to test. This is another consideration you have to factor in when you're deciding whether something should be another service of its own.

How many services do we want to run? Because that means not only running in production and staging, but also adding it to the test environment. It becomes a lot more work to set that up, get the data seeds correct, and so on. But the thing it forces you to do, whether you want to or not, is build a very robust automated process for it.

Without that you just can't do it, and you get into a state where you're not confident in your deploys because you're not sure if deploying Service A is going to break something in Service B. It's a cost of taking a services approach, but the result is a more stable and more durable system.
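One common way to keep a single service's test suite runnable without the whole constellation of services up is to stub the cross-service HTTP boundary, and only exercise the real services together in a dedicated integration environment. The sketch below shows the stubbing half with RSpec and the webmock gem; the endpoint and payload are made up, and this isn't necessarily how Gamut's pipeline works.

```ruby
# Stub Service B's API so Service A's suite runs in isolation.
# Assumes the rspec and webmock gems; URLs and payloads are hypothetical.
require 'net/http'
require 'json'
require 'webmock/rspec'

RSpec.describe 'order creation' do
  before do
    stub_request(:get, %r{customers\.internal/customers/\d+})
      .to_return(
        status: 200,
        body: { id: 42, email: 'test@example.com' }.to_json,
        headers: { 'Content-Type' => 'application/json' }
      )
  end

  it 'reads the customer from the customer service' do
    body = Net::HTTP.get(URI('http://customers.internal/customers/42'))
    expect(JSON.parse(body)['email']).to eq('test@example.com')
  end
end
```

Stubs keep individual suites fast and deterministic, but they don't replace the kind of full test environment described above, which is what actually catches Service A breaking Service B.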

How have microservices impacted security and controlling access to data?

Having more services means having more surface area for potential security risks. This forces us to be a bit more comprehensive than we might be otherwise: it forces us to create automated systems to scan for vulnerabilities or potential exploits, and it forces us to create more rigid policies and auditing around access control. At the end of the day, we want all of these systems in place anyway. Had we built a monolith to start, we might have deferred some of them until later, but having services forced us to start this way.

Thanks again to Darby for his time and input! This interview was done for our Microservices for Startups ebook. Be sure to check it out for practical advice on microservices. 
