Sharing how Gentlent rapidly glues together websites and custom infrastructure.
Here at Gentlent, we've optimised our internal processes to rapidly design, build, and publish websites, as well as ship code changes to the underlying infrastructure.
These websites range from small one-pagers for project-specific information to full-blown company websites.
So — how did we do it?
Every project usually starts with hosting, or servers to be more specific, and Gentlent is no exception. Some computer connected to the internet needs to serve our rendered content, preferably on port 443.
This is where our first advantage lies: we have one codebase for the majority of our sites and services. Every single server is able to handle our HTTP, HTTPS, SMTP, and DNS traffic, amongst the other protocols we have implemented. When deploying new code, we don't have to worry about spinning up new servers and scaling them individually; we simply deploy to our existing distributed network of servers. This has some benefits that are super useful for our use case:
Now that we have our servers and basic infrastructure figured out, how do we reduce the work needed to actually serve our sites?
Let's jump into the next critical component: virtual hosts and SSL/TLS encryption. We utilise our own custom web server that dynamically routes incoming HTTP/HTTPS requests to a custom code path, based on the values stored in our distributed database. SSL/TLS certificates are also distributed to the servers through our database (with additional encryption in place). Having centralised storage distribute the virtual hosts, including which protocol versions should be served and the necessary certificates, is critical to avoid manually pushing configuration changes to each server individually. But how do we issue the certificates we need in the first place?
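To give a rough idea of what such database-driven routing can look like, here is a minimal TypeScript sketch. The record shape, field names, and lookup function are illustrative assumptions, not our actual implementation:

```typescript
import http from "node:http";

// Hypothetical shape of a virtual-host record as it might be stored
// in a replicated database (field names are illustrative).
interface VirtualHost {
  hostname: string;
  handler: string;          // name of the code path to dispatch to
  minTlsVersion: "TLSv1.2" | "TLSv1.3";
  certificateId: string;    // reference to an (encrypted) certificate blob
}

// Stand-in for a lookup against the distributed database,
// e.g. SELECT ... FROM virtual_hosts WHERE hostname = $1.
async function lookupVirtualHost(hostname: string): Promise<VirtualHost | null> {
  return null; // placeholder
}

const server = http.createServer(async (req, res) => {
  const hostname = (req.headers.host ?? "").split(":")[0].toLowerCase();
  const vhost = await lookupVirtualHost(hostname);

  if (!vhost) {
    res.writeHead(404, { "Content-Type": "text/plain" });
    res.end("Unknown host\n");
    return;
  }

  // Dispatch the request to the code path registered for this virtual host.
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end(`Would dispatch ${req.url} to handler "${vhost.handler}"\n`);
});

server.listen(8080);
```

Because every server reads the same records, adding a new site is a database write rather than a configuration rollout.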
Yet another custom piece of software comes into play for SSL/TLS certificate issuance. We implemented our own custom ACME client that generates private keys and certificate signing requests (CSRs) on the fly using either RSA or ECDSA, and sends the CSR to an ACME provider of our choice; that's Let's Encrypt by default, with some backup CAs listed in case of downtime.
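For illustration only, here is a minimal sketch of that issue-with-fallback flow built on the open-source acme-client package rather than our own client. The contact address and backup directory URL are placeholders:

```typescript
import * as acme from "acme-client";

// Directory URLs to try in order: Let's Encrypt first, then a backup CA.
// (The fallback entry is a placeholder; any ACME-compatible CA would work.)
const directories = [
  acme.directory.letsencrypt.production,
  "https://acme.backup-ca.example/directory",
];

async function issueCertificate(domain: string): Promise<{ key: string; cert: string }> {
  // Generate a fresh private key and CSR for this domain on the fly.
  const [key, csr] = await acme.crypto.createCsr({ commonName: domain });

  for (const directoryUrl of directories) {
    try {
      const client = new acme.Client({
        directoryUrl,
        accountKey: await acme.crypto.createPrivateKey(),
      });

      const cert = await client.auto({
        csr,
        email: "admin@example.com", // placeholder contact address
        termsOfServiceAgreed: true,
        challengePriority: ["http-01"],
        challengeCreateFn: async (_authz, _challenge, _keyAuthorization) => {
          // Publish the key authorisation so the CA can validate the challenge,
          // e.g. under /.well-known/acme-challenge/ on the web servers.
        },
        challengeRemoveFn: async () => {
          // Clean up the challenge response afterwards.
        },
      });

      return { key: key.toString(), cert };
    } catch (err) {
      // Try the next CA in the list if this one is unavailable.
      console.warn(`Issuance via ${directoryUrl} failed:`, err);
    }
  }
  throw new Error(`Could not issue a certificate for ${domain}`);
}
```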
Having fully automated SSL/TLS issuance has the obvious advantage that, again, any change we make is rolled out to all our servers. For example, once we introduced OCSP stapling, we wanted to incorporate the OCSP Must-Staple flag into our certificates to force major browsers to actually use OCSP instead of bypassing revocation checks in case our certificates ever get compromised. But there is also a major disadvantage:
If the server on which the fully automated SSL/TLS issuance runs ever shuts down and goes unnoticed for long enough, certificates wouldn't get renewed and downtime would occur across (sometimes unexpected) services ranging from HTTP(S) to SMTP. Sounds like an easy fix? Just run it on multiple servers? But did you think about the strict rate limits and tight error tolerances that CAs have in place to prevent abuse? What if these instances accidentally run at the same time and result in collisions or duplicates? For these cases, we've implemented yet another core component:
A distributed, fault-tolerant, self-orchestrating queue. There is a variety of use cases that need this type of queue: invoice generation, health checks (which also alert us in case of issues with any of our components), and SSL/TLS issuance. This led us to implement a custom queue (think of it as a distributed yet organised cron job) that runs at a predefined interval without a single point of failure and redistributes jobs to other servers in case of failure, whilst making sure a job never runs twice. It's a complex topic in and of itself and will likely be covered in its own future blog post.
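To illustrate just the "never runs twice" part: a common way to claim a job atomically is a single database update. The sketch below uses PostgreSQL's FOR UPDATE SKIP LOCKED against an illustrative jobs table; it's a generic pattern, not our exact schema or implementation:

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection details come from the environment

// Atomically claim one due job so that no two servers run it at the same time.
// Table and column names are illustrative.
async function claimNextJob(serverId: string) {
  const { rows } = await pool.query(
    `UPDATE jobs
        SET locked_by = $1,
            locked_at = now()
      WHERE id = (
              SELECT id
                FROM jobs
               WHERE next_run_at <= now()
                 AND locked_by IS NULL
               ORDER BY next_run_at
               FOR UPDATE SKIP LOCKED
               LIMIT 1
            )
      RETURNING id, job_type, payload`,
    [serverId]
  );
  return rows[0] ?? null; // null when nothing is due
}
```

Each server polls on an interval; SKIP LOCKED guarantees concurrent pollers never pick up the same row, and a separate sweep can release jobs whose lock has gone stale because the claiming server died.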
After all the trouble of automating servers, vhosts, certificates, and everything related to them, one still has to write the logic that actually handles incoming requests beyond the HTTP connection and virtual host routing.
As many of you may have already guessed from the title of this section, we reuse a lot of code by keeping parts (especially functions) modular and reusable. There are public REST APIs that we publish to make building frontends easier and more independent of the underlying backend, and there are specific helper functions that are independent of the rest of the code and can be used across all code paths. Need examples? Audit logs, billing, caching, cookies, crawlers, crypto, blocklists, entitlements, exchange rates, geo IP lookups, centralised ID generation, translations, database connections, email sending, sanitizers, password handling, OAuth, web requests, and way more. All of these can be, and are, reused across all projects and code paths, which makes maintenance and pushing non-breaking changes much easier.
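As a flavour of what one of those small shared helpers might look like, here is a hypothetical centralised ID generator; the naming scheme and format are invented for this sketch and aren't our actual implementation:

```typescript
import { randomBytes } from "node:crypto";

// Illustrative shared helper: time-sortable, collision-resistant IDs that any
// code path can import. The prefix identifies the owning domain (e.g. "inv"
// for invoices); the format here is purely an example.
export function generateId(prefix: string): string {
  const time = Date.now().toString(36);        // sortable timestamp component
  const rand = randomBytes(8).toString("hex"); // 64 bits of randomness
  return `${prefix}_${time}${rand}`;
}

// Usage stays identical across every project:
//   generateId("inv") -> "inv_" + base36 timestamp + 16 hex characters
```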
We even have an API to manage our own public key infrastructure (PKI), which is used for our intra-server communications, as well as a custom internal reverse proxy that re-routes incoming traffic to separate services running on different ports.
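A bare-bones version of that re-routing idea might look like the following sketch, built on the open-source http-proxy package with a hard-coded, hypothetical hostname-to-port map rather than our database-driven configuration:

```typescript
import http from "node:http";
import httpProxy from "http-proxy";

// Hypothetical mapping of hostnames to internal services on local ports.
const routes: Record<string, string> = {
  "www.example.com": "http://127.0.0.1:3001",
  "api.example.com": "http://127.0.0.1:3002",
};

const proxy = httpProxy.createProxyServer({});

http
  .createServer((req, res) => {
    const hostname = (req.headers.host ?? "").split(":")[0];
    const target = routes[hostname];

    if (!target) {
      res.writeHead(502);
      res.end("No upstream configured\n");
      return;
    }

    // Forward the request to the internal service; report failures as 502.
    proxy.web(req, res, { target }, () => {
      res.writeHead(502);
      res.end("Upstream unavailable\n");
    });
  })
  .listen(8080);
```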
Our journey continues to the frontend side of things. Rapidly glueing together the underlying infrastructure is one thing; designing and maintaining a usable, user-friendly frontend is another. But we've found that a very similar approach works there too.
Our websites and projects are usually split into different reusable components, except for the main content of a site, as this usually differs and is not reusable. We've got components for navigations, sidebars, footers, alerts, call-to-actions, and more, created as needed. All of this is supported by our own, yet again custom, CSS (Cascading Style Sheets) and JS (JavaScript) framework, which is used for all our projects as well as select customer sites. It includes things like button designs, navigations, containers, form inputs, and everything else you can think of. In fact, the site you're currently viewing relies entirely on this framework. We then use JavaScript to enhance these designs by adding necessary functions, like moving title attributes into a tooltip that pops up once you hover over the respective element, or preloading linked pages to increase page-switching performance.
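To illustrate one such progressive enhancement, here is a generic sketch of hover-based link preloading using standard DOM APIs; it's not our framework code, just the underlying idea:

```typescript
// When a visitor hovers over an internal link, add a prefetch hint so the
// subsequent navigation feels instant. Deduplicate per URL.
const prefetched = new Set<string>();

document.addEventListener("mouseover", (event) => {
  const target = event.target;
  if (!(target instanceof Element)) return;

  const link = target.closest("a[href]");
  if (!(link instanceof HTMLAnchorElement)) return;

  // Only preload same-origin pages, and only once each.
  if (link.origin !== location.origin || prefetched.has(link.href)) return;

  const hint = document.createElement("link");
  hint.rel = "prefetch";
  hint.href = link.href;
  document.head.appendChild(hint);
  prefetched.add(link.href);
});
```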
Some projects require the frontend to be completely separated from the backend. Our frameworks support that use case, too. For example, we designed a whole website for one of our customers that used a WordPress-based backend. We were able to create the entire frontend, including connecting it to the backend, in less than two days of working hours, and it worked like a charm.
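In such a decoupled setup, the frontend typically talks to WordPress over its built-in REST API. A minimal sketch of that pattern, with a placeholder domain and without any of the customer-specific details, could look like this:

```typescript
// Fetch recent posts from a WordPress backend via its standard REST API.
interface WpPost {
  id: number;
  link: string;
  title: { rendered: string };
  excerpt: { rendered: string };
}

async function loadPosts(): Promise<WpPost[]> {
  const res = await fetch("https://cms.example.com/wp-json/wp/v2/posts?per_page=5");
  if (!res.ok) throw new Error(`WordPress API returned ${res.status}`);
  return res.json();
}

loadPosts().then((posts) => {
  for (const post of posts) {
    console.log(post.title.rendered, "->", post.link);
  }
});
```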
Let's assume we've finished coding a new feature and the infrastructure is already in place. How do we roll it out? Easy! Just push it to our Git repository and you're done. That's at least the workflow on the developer's side. A bunch of next steps are then kicked off automatically:
Yes, but not only that. It allows us to quickly prototype our ideas and carry them to production in an efficient and optimised way. This might not work for everyone, and it surely took us a couple of years to achieve the independence we have, but it was worth it: we learned a lot along the way and are able to share these achievements with our partners and colleagues to help them shape their workflows to be more efficient and useful. For us, it means we can finally focus on ideas again, leaving the complex world of implementation behind, to an extent.
There are parts I haven't mentioned yet, for example domain management, A/B testing, and issue tracking. Maybe we'll cover them in future posts, but our goals stay the same: minimising required maintenance whilst allowing flexible use of all our modular components.
Also, if you are interested in a specific topic or want to collaborate on a blog post, then it's your time to shine! Let us know and we'll figure something out. :)
Tom Klein
Founder & CEO
Gentlent UG (haftungsbeschränkt)