In A Nutshell
About Android OS
Some parts of Android will be familiar, such as the Linux kernel, OpenGL, and the SQLite database. Others may be completely foreign, such as Android's idea of the application life cycle. You'll need a good understanding of these key concepts in order to write well-behaved Android applications. Let's start off by taking a look at the overall system architecture: the key layers and components that make up the Android stack.
Linux From Scratch
There are always many ways to accomplish a single task. The same can be said about Linux distributions. A great many have existed over the years. Some still exist, some have morphed into something else, and yet others have been relegated to our memories. They all do things differently to suit the needs of their target audience. Because so many different ways to accomplish the same end goal exist, I began to realize I no longer had to be limited by any one implementation. Prior to discovering Linux, we simply put up with issues in other operating systems, as we had no choice. It was what it was, whether we liked it or not. With Linux, the concept of choice began to emerge. If you didn't like something, you were free, even encouraged, to change it. Linux From Scratch
Creating a Raspberry Pi-Based Beowulf Cluster
Raspberry Pis have really taken the embedded Linux community by storm. For those unfamiliar, a Raspberry Pi (RPi) is a small (credit-card-sized), inexpensive single-board computer capable of running Linux and other lightweight operating systems on ARM processors. For those who may not have heard of a Beowulf cluster before, it is simply a collection of identical, typically commodity-hardware computers, networked together and running some kind of parallel-processing software that allows each node in the cluster to share data and computation. Joshua Kiepert, Boise State University
Let's Encrypt News
Reflections on a Year of Sunlight
The Certificate Transparency ecosystem has been improving transparency for the web PKI since 2013. It helps make clear exactly what certificates each certificate authority has issued and makes sure errors or compromises of certificate authorities are detectable.
Let’s Encrypt participates in CT both as a certificate issuer and as a log operator. For the past year, we’ve also been running an experiment to help validate a next-generation design for Certificate Transparency logs. That experiment is now nearing a successful conclusion. We’ve demonstrated that the new architecture (called the “Static CT API”) works well, providing greater efficiency and making it easier to run huge and reliable CT log services with comparatively modest resources. The Static CT API also makes it easier to download and share data from CT logs.
The Sunlight log implementation, alongside other Static CT API log implementations, is now on a path to production use. Browsers are now officially accepting Static CT API logs into their log programs as a means to help guarantee that the contents of CA-issued certificates are all publicly disclosed and publicly accessible (see Safari’s and Chrome’s recent announcements), although the browsers also require the continued use of a traditional RFC 6962 log alongside the new type.
All of this is good news for everyone who runs, submits certificates to, or monitors a CT log: as the new architecture gets adopted, we can expect to see more organizations running more logs, at lower cost, and with greater overall capacity to keep up with the large volume of publicly-trusted certificates.
Certificate Transparency
Certificate Transparency (CT) was introduced in 2013 in response to concerns about how Internet users could detect misbehavior and compromise of certificate authorities. Prior to CT, it was possible for a CA to issue an inaccurate or malicious certificate that could be used to attack a relatively small number of users, and that might never come to wider attention. A team led by Google responded to this by creating a transparency log mechanism, where certificate authorities (like Let’s Encrypt) must disclose all of the certificates that we issue by submitting them to public log services. Web browsers now generally reject certificates unless the certificates include cryptographic proof (“Signed Certificate Timestamps”, or SCTs) demonstrating that they were submitted to and accepted by such logs.
The CT logs themselves use a cryptographic append-only ledger to prove that they haven’t deleted or modified their records. There are currently over a dozen CT log services, most of them also run by certificate authorities, including Let’s Encrypt’s own Oak log.
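To make that append-only property concrete: RFC 6962 builds each log as a Merkle tree with domain-separated hashes, so that a leaf can never be confused with an interior node. Here is a minimal sketch of those hashing rules in Go; it is illustrative only, not code from any production log:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// RFC 6962 prefixes leaf hashes with 0x00 and interior-node hashes with
// 0x01, so the two can never collide.
func leafHash(entry []byte) [32]byte {
	return sha256.Sum256(append([]byte{0x00}, entry...))
}

func nodeHash(left, right [32]byte) [32]byte {
	buf := append([]byte{0x01}, left[:]...)
	buf = append(buf, right[:]...)
	return sha256.Sum256(buf)
}

func main() {
	// Root of a tiny two-entry tree. Auditors recompute roots like this
	// from inclusion and consistency proofs to confirm that no record
	// has been deleted or modified.
	a := leafHash([]byte("certificate A"))
	b := leafHash([]byte("certificate B"))
	fmt.Printf("root: %x\n", nodeHash(a, b))
}
```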
The Static CT API
The original 2013 CT log design has been used with relatively few technical changes since it was first introduced, but several other transparency logging systems have been created in other areas, such as sumdb for Go, which helps ensure that the contents of Go module updates are publicly recorded. While they were originally inspired by CT, more recently invented transparency logs have improved on its design.
The current major evolution of CT was led by Filippo Valsorda, a cryptographer with an interest in transparency log mechanisms, with help from others in the CT ecosystem. Portions of the new design are directly based on sumdb. In addition to designing the new architecture, Valsorda also wrote the implementation that we’ve been using, called Sunlight, with support from Let’s Encrypt. We’re excited to see that there are now at least three other compatible implementations: Google’s trillian-tessera, Cloudflare’s Azul, and an independent project called Itko.
The biggest change for the Static CT API is that logs are now represented, and downloaded by verifiers, as simple collections of flat files (called “tiles,” so some implementers have also been referring to these as “tiled logs” or “tlogs”). Anyone who wants to download log data can do so just by downloading these files. This is great for log operators because these simple file downloads can be distributed in various ways, including caching by a CDN, which was less practical and efficient for the classic CT API.
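As a rough sketch of what this looks like in practice, a downloader needs nothing more than ordinary HTTP GETs. The log URL below is hypothetical, and the exact file layout (checkpoint and tile paths) is defined by the Static CT API specification, so treat the paths here as placeholders:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical monitoring prefix; real logs publish their own URLs.
	base := "https://static-ct-log.example/"

	// Tiles are plain, immutable files, so any HTTP cache or CDN can
	// serve them without log-specific logic.
	for _, path := range []string{"checkpoint", "tile/0/000"} {
		resp, err := http.Get(base + path)
		if err != nil {
			panic(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s: %d bytes (HTTP %d)\n", path, len(body), resp.StatusCode)
	}
}
```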
The new design is also simpler and more efficient from the log operator’s perspective, making it cheaper to run logs. As we said last year, this may enable us and other operators to increase reliability and availability by running several separate logs, likely with lower overall resource requirements than a single traditional log.
Our Sunlight experiment

For the past year, we’ve run three Sunlight logs, called Twig, Willow, and Sycamore. We’ve been logging all of our own issued certificates, which represent a majority of the total volume of all publicly-trusted certificates, into our Sunlight logs. Sunlight logged these certificates quickly and correctly on relatively modest server hardware. Notably, each log’s write side was handled comfortably by just a single server. We also achieved high availability for these log services throughout the course of this experiment. (Because our Sunlight logs are not yet trusted by web browsers, we didn’t include the SCT proofs that they returned to us in the actual certificates we gave out to our subscribers; those proofs wouldn’t have been of use to our subscribers yet and would just have taken up space.)
A potential failure mode of traditional CT logs is that they could be unacceptably slow in incorporating newly-submitted certificates (known as missing the maximum merge delay), which can result in a log becoming distrusted. This isn’t a possibility for our new Sunlight-based logs: they always completely incorporate newly-submitted certificates before returning an SCT to the submitter, so the effective merge delay is zero! Of course, any log can suffer outages for a variety of reasons, but this feature of Sunlight makes it less likely that any outages will be fatal to a log’s continued operation.
We’ve demonstrated that Sunlight and the Static CT API work in practice, and this demonstration has helped to confirm the browser developers’ hope that Static CT API logs can become an officially-supported part of CT. As a result, the major browsers that enforce CT have now permitted Static CT API logs to apply for inclusion in browsers as publicly-trusted logs, and we’re preparing to apply for this status for our Willow and Sycamore logs with the Chrome and Safari CT log programs.
Let’s Encrypt will run at least these two logs, and possibly others over time, for the foreseeable future. Once they’re trusted by browsers, we’ll encourage other CAs to submit to them as well, and we’ll begin including SCTs from these logs in our own certificates (alongside SCTs from traditional CT logs).
How to participate
The new Static CT API and the rollout of tile-based logs will bring various changes and opportunities for community members.
New Certificate Transparency log operators
Companies and non-profit organizations could help support the web PKI by running a CT log and applying for it to be publicly trusted. Implementations like Sunlight will have substantially lower resource requirements than first-generation CT logs, particularly when cached behind a CDN. The biggest resource demands for a log operator will be storage and upstream bandwidth. A publicly-trusted log is also expected to maintain relatively high availability, because CAs need logs to be available in order to continue issuing certificates.
We don’t have statistics to share about the exact resource requirements for such a log yet, but after we have practical experience running a fully publicly-trusted Sunlight log, we should be able to make this more concrete. As noted above, the compute side of the log can be handled by a single server. Sunlight author Filippo Valsorda has recently started running a Sunlight log—also on just a single server—and offered more detailed cost breakdowns for that log’s setup, with an estimated total cost around $10,000 per year. The costs for our production Static CT API logs may be higher than those for Filippo’s log, but still far less than the costs for our traditional RFC 6962 logs.
As with trust decisions about CAs, browser developers are the authorities about which CT logs become publicly trusted. Although any person or organization can run a log, browser developers will generally prefer to trust logs whose continued availability they’re confident of—typically those run by stable organizations with experience running some form of public Internet services. Unlike becoming a certificate authority, running a log does not require a formal audit, as the validation of the log’s availability and correctness can be performed purely by observation.
Certificate authorities
Once the Willow and Sycamore logs are trusted by browsers, our fellow certificate authorities can choose to start logging certificates to them as part of their issuance processes. (Initially, you should still include at least one SCT from a traditional CT log in each certificate.) The details, including the log API endpoints and keys, are available at our CT log page. You can start submitting to these logs right away if you prefer; just bear in mind that the SCTs they return aren’t useful to subscribers yet, and won’t be useful until browsers are updated to trust the new logs.
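For illustration, the write side of these logs keeps the familiar RFC 6962 submission endpoints (it is the read side that the Static CT API changes), so an add-chain call looks the same as for a traditional log. This is a hedged sketch with a placeholder log URL and chain; real integrations should use the endpoints and keys from our CT log page:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// These structs mirror the RFC 6962 add-chain JSON bodies.
type addChainRequest struct {
	Chain []string `json:"chain"` // base64 DER certificates, leaf first
}

type addChainResponse struct {
	SCTVersion uint8  `json:"sct_version"`
	ID         string `json:"id"`        // log ID
	Timestamp  uint64 `json:"timestamp"` // milliseconds since epoch
	Extensions string `json:"extensions"`
	Signature  string `json:"signature"`
}

func submit(logURL string, chain []string) (*addChainResponse, error) {
	body, _ := json.Marshal(addChainRequest{Chain: chain})
	resp, err := http.Post(logURL+"/ct/v1/add-chain", "application/json", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var sct addChainResponse
	if err := json.NewDecoder(resp.Body).Decode(&sct); err != nil {
		return nil, err
	}
	return &sct, nil
}

func main() {
	// Placeholder URL and chain contents.
	sct, err := submit("https://willow.ct.example", []string{"<base64 leaf>", "<base64 intermediate>"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("SCT timestamp: %d\n", sct.Timestamp)
}
```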
CT data users
You can monitor CT in order to watch for certificate issuances for your own domain names, or as part of monitoring or security products or services, or for Internet security research purposes. Many of our colleagues have been doing this for some time as a part of various tools they maintain. The Static CT API should make this easier, because you’ll be able to download and share log tiles as sets of ordinary files.
If you already run such monitoring tools, please note that you’ll need to update your data pipeline in order to access Static CT API logs; since the read API is not backwards-compatible, CT API clients will need to be modified to support the new API. Without updated tools, your view of the CT system will become partial!
Also note that getting a complete view of all of CT will still require downloading data from traditional logs, which will probably continue to be true for several years.
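As a sketch of what a minimal monitor's polling loop could look like: the code below assumes the tlog-style checkpoint format (an origin line, a decimal tree size, then a root hash) used by Static CT API logs, plus a hypothetical log URL. A real monitor would also verify the checkpoint signature and fetch the new data tiles:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strconv"
	"strings"
	"time"
)

// treeSize fetches a checkpoint and returns the current tree size,
// assuming the second line of the checkpoint is the size.
func treeSize(base string) (int64, error) {
	resp, err := http.Get(base + "checkpoint")
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	raw, err := io.ReadAll(resp.Body)
	if err != nil {
		return 0, err
	}
	lines := strings.Split(string(raw), "\n")
	if len(lines) < 3 {
		return 0, fmt.Errorf("short checkpoint")
	}
	return strconv.ParseInt(lines[1], 10, 64)
}

func main() {
	base := "https://static-ct-log.example/" // hypothetical
	var last int64
	for {
		if n, err := treeSize(base); err == nil && n > last {
			// New entries [last, n) are now available as data tiles.
			fmt.Printf("log grew: %d -> %d entries\n", last, n)
			last = n
		}
		time.Sleep(time.Minute)
	}
}
```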
Software developers
As logs based on the new API enter production use, it will be important to have tools to interact with and search these logs. We can all benefit from more software that understands how to do this. Since file downloads are such a familiar piece of software functionality, it will probably be easier for developers to build against the new API than against the original one.
We’ve also continued to see greater integration of transparency logging tools into other kinds of services, such as software updates. There’s a growing transparency log ecosystem that’s always in need of more tools and integrations. As we mentioned above, transparency logs are increasingly learning from one another, and there are also mechanisms for more direct integration between different kinds of transparency logs (known as “witnessing”). Software developers can help improve different aspects of Internet security by contributing to this active and growing area.
Conclusion
The Certificate Transparency community and the larger transparency logging community have experienced a virtuous cycle of innovation, sharing ideas and implementation code between different systems and demonstrating the feasibility of new mechanisms and functionality. With the advent of tile-based logging in CT, the state of the art has moved forward in a way that helps log operators like us run our logs much more efficiently without compromising security.
We’re proud to have participated in this experiment and the engineering conversation around the evolution of logging architectures. Now that we’ve shown how well the new API really works at scale, we look forward to having publicly-trusted Sunlight logs later this year!
Wed, 11 Jun 2025 00:00:00 +0000
How We Reduced the Impact of Zombie Clients
Every night, right around midnight (mainly UTC), a horde of zombies wakes up and clamors for … digital certificates!
The zombies in question are abandoned or misconfigured Internet servers and ACME clients that have been set to request certificates from Let’s Encrypt. As our certificates last for at most 90 days, these zombie clients’ software knows that their certificates are out-of-date and need to be replaced. What they don’t realize is that their quest for new certificates is doomed! These devices are cursed to seek certificates again and again, never receiving them.
But they do use up a lot of certificate authority resources in the process.
The Zombie Client Problem
Unlike a human being, software doesn’t give up in frustration, or try to modify its approach, when it repeatedly fails at the same task. Our emphasis on automation means that the vast majority of Let’s Encrypt certificate renewals are performed by automated software. This is great when those renewals succeed, but it also means that forgotten clients and devices can continue requesting renewals unsuccessfully for months, or even years.
How might that happen? Most often, it happens when a device no longer has a domain name pointed to it. The device itself doesn’t know that this has changed, so it treats renewal failures as transient even though they are actually permanent. For instance:
- An organization may have allowed a domain name registration to lapse because it is no longer needed, but its servers are still configured to request certs for it.
- Or, a home user stopped using a particular dynamic-DNS domain with a network-attached storage device, but is still using that device at home. The device doesn’t realize that the user no longer expects to use the name, so it keeps requesting certs for it.
- Or, a web hosting or CDN customer migrated to a different service provider, but never informed the old service provider. The old service provider’s servers keep requesting certs unsuccessfully. If the customer was in a free service tier, there might not be invoices or charges reminding the customer to cancel the service.
- Or any number of other, subtler changes in a subscriber’s infrastructure, such as changing a firewall rule or some webserver configuration.
At the scale of Let’s Encrypt, which now covers hundreds of millions of names, scenarios like these have become common, and their impact has become substantial. In 2024, we noticed that about half of all certificate requests to the Let’s Encrypt ACME API came from about a million accounts that never successfully complete any validations. Many of these had completed validations and issued certificates sometime in the past, but nowadays every single one of their validation attempts fails, and they show no signs that this will change anytime soon.
Unfortunately, trying to validate those futile requests still uses resources. Our CA software has to generate challenges, reach out and attempt to validate them over the Internet, detect and report failures, and record all of the associated information in our databases and audit logs. And over time, we’ve seen more and more recurring failures: accounts that always fail their issuance requests have been growing at around 18% per year.
In January, we mentioned that we had been addressing the zombie client problem through our rate limit system. This post provides more detail on that progress.
Our Rate Limit Philosophy
If you’ve used Let’s Encrypt as a subscriber, you may have run into one of our rate limits at some point, maybe during your initial setup process. We have eight different kinds of rate limits in place now; as our January post describes, they’ve become more algorithmically sophisticated and grown to address a wider range of problems. A key principle for Let’s Encrypt is that our rate limiting is not a punishment. We don’t think of rate limits as a way of retaliating against a client for misbehavior. Rate limits are simply a tool to maximize the efficient use of our limited resources and prevent people and programs from using up those resources for no constructive purpose.
We’ve consistently tried to design our rate limit mechanisms in line with that philosophy. So if a misconfiguration or misunderstanding has caused excessive requests in the past, we’re still happy to welcome the user in question back and start issuing them certificates again—once the problem has been addressed. We want the rate limits to put a brake on wasteful use of our systems, but not to frustrate users who are actively trying to make Let’s Encrypt work for them.
In addition, we’ve always implemented our rate limits to err on the side of permissiveness. For example, if the Redis instances where rate limits are tracked have an outage or lose data, the system is designed to permit more issuance rather than less issuance as a result.
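A minimal sketch of that fail-open pattern, with hypothetical names and structure rather than our actual code:

```go
package ratelimit

import (
	"context"

	"github.com/redis/go-redis/v9"
)

// Allow increments a counter for the given rate-limit key and permits the
// request while the count is within the limit. Crucially, if the backend
// is unreachable or has lost data, it errs on the side of permissiveness
// and allows issuance rather than blocking valid requests.
func Allow(ctx context.Context, rdb *redis.Client, key string, max int64) bool {
	n, err := rdb.Incr(ctx, key).Result()
	if err != nil {
		// Backend outage: fail open.
		return true
	}
	return n <= max
}
```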
We wanted to create additional limits that would target zombie clients, but in a correspondingly non-punitive way that would avoid any disruption to valid issuance, and welcome subscribers back quickly if they happened to notice and fix a long-time problem with their setups.
Our Zombie-Related Rate Limits and Their Impact
In planning a new zombie-specific response, we decided on a “pausing” approach, which can temporarily limit an account’s ability to proceed with certificate requests. The core idea is that, if a particular account consistently fails to complete validation for a particular hostname, we’ll pause that account-hostname pair. The pause means that any new order requests from that account for that hostname will be rejected immediately, before we get to the resource-intensive validation phase.
This approach is more finely targeted than pausing an entire account. Pausing account-hostname pairs means that your ability to issue certs for a specific name could be paused due to repeated failures, but you can still get all of your other certs like normal. So a large hosting provider doesn’t have to fear that its certificate issuance on behalf of one customer will be affected by renewal failures related to a problem with a different customer’s domain name. The account-specificity of the pause, in turn, means that validation failures from one subscriber or device won’t prevent a different subscriber or device from attempting to validate the same name, as long as the devices in question don’t share a single Let’s Encrypt account.
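In sketch form (hypothetical types, not our actual schema), the check is a lookup keyed on the account-hostname pair, performed before any validation work is spent:

```go
package pause

import "fmt"

// The pause is keyed on the (account, hostname) pair, so one customer's
// broken name never affects an account's other names, and one account's
// pause never affects other accounts validating the same name.
type pairKey struct {
	AccountID int64
	Hostname  string
}

type Pauses struct {
	paused map[pairKey]bool
}

func New() *Pauses { return &Pauses{paused: make(map[pairKey]bool)} }

func (p *Pauses) Pause(acct int64, host string) {
	p.paused[pairKey{acct, host}] = true
}

// CheckNewOrder rejects an order containing any paused pair immediately,
// before the resource-intensive validation phase begins.
func (p *Pauses) CheckNewOrder(acct int64, hostnames []string) error {
	for _, h := range hostnames {
		if p.paused[pairKey{acct, h}] {
			return fmt.Errorf("account %d is paused for %q; use the unpause link to resume issuance", acct, h)
		}
	}
	return nil
}
```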
In September 2024, we began applying our zombie rate limits manually by pausing about 21,000 of the most recurrently-failing account-hostname pairs, those which were consistently repeating the same failed requests many times per day, every day. After implementing that first round of pauses, we immediately saw a significant impact on our failed request rates. As we announced at that time, we also began using a formula to automatically pause other zombie client account-hostname pairs from December 2024 onward. The associated new rate limit is called “Consecutive Authorization Failures per Hostname Per Account” (and is independent of the existing “Authorization Failures per Hostname Per Account” limit, which resets every hour).
This formula relates to the frequency of successive failed issuance requests for the same domain name by the same Let’s Encrypt account. It applies only to failures that happen again and again, with no successful issuances at all in between: a single successful validation immediately resets the rate limit all the way to zero. Like all of our rate limits, this is not a punitive measure but is simply intended to reduce the waste of resources. So, we decided to set the thresholds rather high in the expectation that we would catch only the most disruptive zombie clients, and ultimately only those clients that were extremely unlikely to succeed in the future based on their substantial history of failed requests. We don’t hurry to block requesters as zombies: according to our current formula, client software following the default established by EFF’s Certbot (two renewal attempts per day) would be paused as a zombie only after about ten years of constant failures (on the order of 2 × 365 × 10 ≈ 7,300 consecutive failures). More aggressive failed issuance attempts will get a client paused sooner, but clients will generally have to fail hundreds or thousands of attempts in a row before they are paused.
Most subscribers using mainstream client applications with default configurations will never encounter this rate limit, even if they forget to deactivate renewal attempts for domains that are no longer pointed at their servers. As described below, our current limit is already providing noticeable benefits with minimal disruption, and we’re likely to tighten it a bit in the near future, so it will trigger after somewhat fewer consecutive failures.
Self-Service Unpausing
A key feature in our zombie issuance pausing mechanism is self-service unpausing. Whenever an account-hostname pair is paused, any new certificate requests for that hostname submitted by that account are immediately rejected. But this means that the “one successful validation immediately resets the rate limit counter” feature can no longer come into effect: once they’re paused, they can’t even attempt validation anymore.
So every rejection comes with an error message explaining what has happened and a custom link that can be used to immediately unpause that account-hostname pair and remove any other pauses on the same account at the same time. The point of this is that subscribers who notice at some point that issuance is failing and want to intervene to get it working again have a straightforward option to let Let’s Encrypt know that they’re aware of the recurring failures and are still planning to use a particular account. As soon as subscribers notify us via the self-service link, they’ll be able to issue certificates again.
Currently, the user interface for an affected subscriber looks like this:

This link would be provided via an ACME error message in response to any request that was blocked due to a paused account-hostname pair.
As it’s turned out, the unpause option shown above has only been used by about 3% of affected accounts! This goes to show that most of the zombies we’ve paused were, in fact, well and truly forgotten about.
However, the unpause feature is there for whenever it’s needed, and there may be cases when it will become more important. A very large integration could trigger the zombie-related rate limits if a newly-introduced software bug causes what looks like a very high volume of zombie requests in a very short time. In that case, once that bug has been noticed and fixed, an integrator may need to unpause its issuance on behalf of lots of customers at once. Our unpause feature permits unpausing 50,000 domain names on a single account at a time, so even the largest integrators can get themselves unpaused expeditiously in this situation.
Conclusion
We’ve been very happy with the results of our zombie mitigation measures, and, as far as we can tell, there’s been almost no impact for subscribers! Our statistics indicate that we’ve managed to reduce the load on our infrastructure while causing no detectable harm or inconvenience to subscribers’ valid issuance requests.
Since implementing the manual pauses in September and the automated pauses in December, we’ve seen:
- Over 100,000 account-hostname pairs paused for excessive failures.
- Zero (!) associated complaints or support requests.
- About 3,200 manual unpauses of issuance by subscribers.
- Failed certificate orders down by about 30% so far, a figure that should keep improving as we fine-tune the rate limit formula and catch more zombie clients.
The new rate limit and the self-service unpause system are also ready to deal with circumstances that might produce more zombie clients in the future. For instance, we’ve announced that we’re going to be discontinuing renewal reminder emails soon. If some subscribers overlook failed renewals in the future, we might see more paused clients that result from unintentional renewal failures. We think taking advantage of the existing self-service unpause feature will be straightforward in that case. But it’s much better to notice problems and get them fixed up front, so please remember to set up your own monitoring to avoid unnoticed renewal failures in the future.
If you’re a subscriber who’s had occasion to use the self-service unpause feature, we’d love your feedback on the Community Forum about your experience using the feature and the circumstances that surrounded your account’s getting paused.
Also, if you’re a Let’s Encrypt client developer, please remember to make renewal requests at a random time (not precisely at midnight) so that the load on our infrastructure is smoothed out. You can also reduce the impact of zombie renewals by repeating failed requests somewhat less frequently over time (a “back-off” strategy), especially if the failure reason makes it look like a domain name may no longer be in use at all.
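Here is one way a client could implement both suggestions. This is a hedged sketch with illustrative constants, not a prescription for any particular ACME client:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextAttempt schedules renewals at a randomized offset (never exactly
// midnight) and backs off exponentially, with a cap, after consecutive
// failures, so a permanently broken domain converges to rare retries
// instead of hammering the CA on the same schedule forever.
func nextAttempt(consecutiveFailures int) time.Duration {
	base := 12 * time.Hour // roughly two attempts per day when healthy
	shift := consecutiveFailures
	if shift > 4 {
		shift = 4 // cap the doubling at 16x base (8 days)
	}
	window := base << uint(shift)
	// Full jitter: pick a uniformly random point in the window so that
	// many clients' renewal attempts spread out over time.
	return time.Duration(rand.Int63n(int64(window))) + time.Minute
}

func main() {
	for f := 0; f <= 5; f++ {
		fmt.Printf("after %d failures: next attempt in ~%v\n", f, nextAttempt(f))
	}
}
```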
Wed, 04 Jun 2025 00:00:00 +0000
Sustaining a More Secure Internet: The Power of Recurring Donations
The Certificate Transparency ecosystem has been improving transparency for the web PKI since 2013. It helps make clear exactly what certificates each certificate authority has issued and makes sure errors or compromises of certificate authorities are detectable.
Let’s Encrypt participates in CT both as a certificate issuer and as a log operator. For the past year, we’ve also been running an experiment to help validate a next-generation design for Certificate Transparency logs. That experiment is now nearing a successful conclusion. We’ve demonstrated that the new architecture (called the “Static CT API”) works well, providing greater efficiency and making it easier to run huge and reliable CT log services with comparatively modest resources. The Static CT API also makes it easier to download and share data from CT logs.
The Sunlight log implementation, alongside other Static CT API log implementations, is now on a path to production use. Browsers are now officially accepting Static CT API logs into their log programs as a means to help guarantee that the contents of CA-issued certificates are all publicly disclosed and publicly accessible (see Safari’s and Chrome’s recent announcements), although the browsers also require the continued use of a traditional RFC 6962 log alongside the new type.
All of this is good news for everyone who runs, submits certificates to, or monitors a CT log: as the new architecture gets adopted, we can expect to see more organizations running more logs, at lower cost, and with greater overall capacity to keep up with the large volume of publicly-trusted certificates.
Certificate Transparency
Certificate Transparency (CT) was introduced in 2013 in response to concerns about how Internet users could detect misbehavior and compromise of certificate authorities. Prior to CT, it was possible for a CA to issue an inaccurate or malicious certificate that could be used to attack a relatively small number of users, and that might never come to wider attention. A team led by Google responded to this by creating a transparency log mechanism, where certificate authorities (like Let’s Encrypt) must disclose all of the certificates that we issue by submitting them to public log services. Web browsers now generally reject certificates unless the certificates include cryptographic proof (“Signed Certificate Timestamps”, or SCTs) demonstrating that they were submitted to and accepted by such logs.
The CT logs themselves use a cryptographic append-only ledger to prove that they haven’t deleted or modified their records. There are currently over a dozen CT log services, most of them also run by certificate authorities, including Let’s Encrypt’s own Oak log.
The Static CT API
The original 2013 CT log design has been used with relatively few technical changes since it was first introduced, but several other transparency logging systems have been created in other areas, such as sumdb for Golang, which helps ensure that the contents of Golang package updates are publicly recorded. While they were originally inspired by CT, more-recently invented transparency logs have improved on its design.
The current major evolution of CT was led by Filippo Valsorda, a cryptographer with an interest in transparency log mechanisms, with help from others in the CT ecosystem. Portions of the new design are directly based on sumdb. In addition to designing the new architecture, Valsorda also wrote the implementation that we’ve been using, called Sunlight, with support from Let’s Encrypt. We’re excited to see that there are now at least three other compatible implementations: Google’s trillian-tessera, Cloudflare’s Azul, and an independent project called Itko.
The biggest change for the Static CT API is that logs are now represented, and downloaded by verifiers, as simple collections of flat files (called “tiles,” so some implementers have also been referring to these as “tiled logs” or “tlogs”). Anyone who wants to download log data can do so just by downloading these files. This is great for log operators because these simple file downloads can be distributed in various ways, including caching by a CDN, which was less practical and efficient for the classic CT API.
The new design is also simpler and more efficient from the log operator’s perspective, making it cheaper to run logs. As we said last year, this may enable us and other operators to increase reliability and availability by running several separate logs, likely with lower overall resource requirements than a single traditional log.
Our Sunlight experiment

For the past year, we’ve run three Sunlight logs, called Twig, Willow, and Sycamore. We’ve been logging all of our own issued certificates, which represent a majority of the total volume of all publicly-trusted certificates, into our Sunlight logs. Sunlight logged these certificates quickly and correctly on relatively modest server hardware. Notably, each log’s write side was handled comfortably by just a single server. We also achieved high availability for these log services throughout the course of this experiment. (Because our Sunlight logs are not yet trusted by web browsers, we didn’t include the SCT proofs that they returned to us in the actual certificates we gave out to our subscribers; those proofs wouldn’t have been of use to our subscribers yet and would just have taken up space.)
A potential failure mode of traditional CT logs is that they could be unacceptably slow in incorporating newly-submitted certificates (known as missing the maximum merge delay), which can result in a log becoming distrusted. This isn’t a possibility for our new Sunlight-based logs: they always completely incorporate newly-submitted certificates before returning an SCT to the submitter, so the effective merge delay is zero! Of course, any log can suffer outages for a variety of reasons, but this feature of Sunlight makes it less likely that any outages will be fatal to a log’s continued operation.
We’ve demonstrated that Sunlight and the Static CT API work in practice, and this demonstration has helped to confirm the browser developers’ hope that Static CT API logs can become an officially-supported part of CT. As a result, the major browsers that enforce CT have now permitted Static CT API logs to apply for inclusion in browsers as publicly-trusted logs, and we’re preparing to apply for this status for our Willow and Sycamore logs with the Chrome and Safari CT log programs.
Let’s Encrypt will run at least these two logs, and possibly others over time, for the foreseeable future. Once they’re trusted by browsers, we’ll encourage other CAs to submit to them as well, and we’ll begin including SCTs from these logs in our own certificates (alongside SCTs from traditional CT logs).
How to participate
The new Static CT API and the rollout of tile-based logs will bring various changes and opportunities for community members.
New Certificate Transparency log operators
Companies and non-profit organizations could help support the web PKI by running a CT log and applying for it to be publicly trusted. Implementations like Sunlight will have substantially lower resource requirements than first-generation CT logs, particularly when cached behind a CDN. The biggest resource demands for a log operator will be storage and upstream bandwidth. A publicly-trusted log is also expected to maintain relatively high availability, because CAs need logs to be available in order to continue issuing certificates.
We don’t have statistics to share about the exact resource requirements for such a log yet, but after we have practical experience running a fully publicly-trusted Sunlight log, we should be able to make this more concrete. As noted above, the compute side of the log can be handled by a single server. Sunlight author Filippo Valsorda has recently started running a Sunlight log—also on just a single server—and offered more detailed cost breakdowns for that log’s setup, with an estimated total cost around $10,000 per year. The costs for our production Static CT API logs may be higher than those for Filippo’s log, but still far less than the costs for our traditional RFC 6962 logs.
As with trust decisions about CAs, browser developers are the authorities about which CT logs become publicly trusted. Although any person or organization can run a log, browser developers will generally prefer to trust logs whose continued availability they’re confident of—typically those run by stable organizations with experience running some form of public Internet services. Unlike becoming a certificate authority, running a log does not require a formal audit, as the validation of the log’s availability and correctness can be performed purely by observation.
Certificate authorities
Once the Willow and Sycamore logs are trusted by browsers, our fellow certificate authorities can choose to start logging certificates to them as part of their issuance processes. (Initially, you should still include at least one SCT from a traditional CT log in each certificate.) The details, including the log API endpoints and keys, are available at our CT log page. You can start submitting to these logs right away if you prefer; just bear in mind that the SCTs they return aren’t useful to subscribers yet, and won’t be useful until browsers are updated to trust the new logs.
CT data users
You can monitor CT in order to watch for certificate issuances for your own domain names, or as part of monitoring or security products or services, or for Internet security research purposes. Many of our colleagues have been doing this for some time as a part of various tools they maintain. The Static CT API should make this easier, because you’ll be able to download and share log tiles as sets of ordinary files.
If you already run such monitoring tools, please note that you’ll need to update your data pipeline in order to access Static CT API logs; since the read API is not backwards-compatible, CT API clients will need to be modified to support the new API. Without updated tools, your view of the CT system will become partial!
Also note that getting a complete view of all of CT will still require downloading data from traditional logs, which will probably continue to be true for several years.
Software developers
As logs based on the new API enter production use, it will be important to have tools to interact with and search these logs. We can all benefit from more software that understands how to do this. Since file downloads are such a familiar piece of software functionality, it will probably be easier for developers to develop against the new API compared to the original one.
We’ve also continued to see greater integration of transparency logging tools into other kinds of services, such as software updates. There’s a growing transparency log ecosystem that’s always in need of more tools and integrations. As we mentioned above, transparency logs are increasingly learning from one another, and there are also mechanisms for more direct integration between different kinds of transparency logs (known as “witnessing”). Software developers can help improve different aspects of Internet security by contributing to this active and growing area.
Conclusion
The Certificate Transparency community and larger transparency logging community have experienced a virtuous cycle of innovation, sharing ideas and implementation code between different systems and demonstrating the feasibility of new mechanisms and functionality. With the advent of tile-based logging in CT, the state of the art has moved forward in a way that helps log operators run our logs much more efficiently without compromising security.
We’re proud to have participated in this experiment and the engineering conversation around the evolution of logging architectures. Now that we’ve shown how well the new API really works at scale, we look forward to having publicly-trusted Sunlight logs later this year!
Every night, right around midnight (mainly UTC), a horde of zombies wakes up and clamors for … digital certificates!
The zombies in question are abandoned or misconfigured Internet servers and ACME clients that have been set to request certificates from Let’s Encrypt. As our certificates last for at most 90 days, these zombie clients’ software knows that their certificates are out-of-date and need to be replaced. What they don’t realize is that their quest for new certificates is doomed! These devices are cursed to seek certificates again and again, never receiving them.
But they do use up a lot of certificate authority resources in the process.
The Zombie Client Problem
Unlike a human being, software doesn’t give up in frustration, or try to modify its approach, when it repeatedly fails at the same task. Our emphasis on automation means that the vast majority of Let’s Encrypt certificate renewals are performed by automated software. This is great when those renewals succeed, but it also means that forgotten clients and devices can continue requesting renewals unsuccessfully for months, or even years.
How might that happen? Most often, it happens when a device no longer has a domain name pointed to it. The device itself doesn’t know that this has changed, so it treats renewal failures as transient even though they are actually permanent. For instance:
- An organization may have allowed a domain name registration to lapse because it is no longer needed, but its servers are still configured to request certs for it.
- Or, a home user stopped using a particular dynamic-DNS domain with a network-attached storage device, but is still using that device at home. The device doesn’t realize that the user no longer expects to use the name, so it keeps requesting certs for it.
- Or, a web hosting or CDN customer migrated to a different service provider, but never informed the old service provider. The old service provider’s servers keep requesting certs unsuccessfully. If the customer was in a free service tier, there might not be invoices or charges reminding the customer to cancel the service.
- Or any number of other, subtler changes in a subscriber’s infrastructure, such as changing a firewall rule or some webserver configuration.
At the scale of Let’s Encrypt, which now covers hundreds of millions of names, scenarios like these have become common, and their impact has become substantial. In 2024, we noticed that about half of all certificate requests to the Let’s Encrypt ACME API came from about a million accounts that never successfully complete any validations. Many of these had completed validations and issued certificates sometime in the past, but nowadays every single one of their validation attempts fails, and they show no signs that this will change anytime soon.
Unfortunately, trying to validate those futile requests still uses resources. Our CA software has to generate challenges, reach out and attempt to validate them over the Internet, detect and report failures, and record all of the associated information in our databases and audit logs. And over time, we’ve seen more and more recurring failures: accounts that always fail their issuance requests have been growing at around 18% per year.
In January, we mentioned that we had been addressing the zombie client problem through our rate limit system. This post provides more detail on that progress.
Our Rate Limit Philosophy
If you’ve used Let’s Encrypt as a subscriber, you may have run into one of our rate limits at some point, maybe during your initial setup process. We have eight different kinds of rate limits in place now; as our January post describes, they’ve become more algorithmically sophisticated and grown to address a wider range of problems. A key principle for Let’s Encrypt is that our rate limiting is not a punishment. We don’t think of rate limits as a way of retaliating against a client for misbehavior. Rate limits are simply a tool to maximize the efficient use of our limited resources and prevent people and programs from using up those resources for no constructive purpose.
We’ve consistently tried to design our rate limit mechanisms in line with that philosophy. So if a misconfiguration or misunderstanding has caused excessive requests in the past, we’re still happy to welcome the user in question back and start issuing them certificates again—once the problem has been addressed. We want the rate limits to put a brake on wasteful use of our systems, but not to frustrate users who are actively trying to make Let’s Encrypt work for them.
In addition, we’ve always implemented our rate limits to err on the side of permissiveness. For example, if the Redis instances where rate limits are tracked have an outage or lose data, the system is designed to permit more issuance rather than less issuance as a result.
We wanted to create additional limits that would target zombie clients, but in a correspondingly non-punitive way that would avoid any disruption to valid issuance, and welcome subscribers back quickly if they happened to notice and fix a long-time problem with their setups.
Our Zombie-Related Rate Limits and Their Impact
In planning a new zombie-specific response, we decided on a “pausing” approach, which can temporarily limit an account’s ability to proceed with certificate requests. The core idea is that, if a particular account consistently fails to complete validation for a particular hostname, we’ll pause that account-hostname pair. The pause means that any new order requests from that account for that hostname will be rejected immediately, before we get to the resource-intensive validation phase.
This approach is more finely targeted than pausing an entire account. Pausing account-hostname pairs means that your ability to issue certs for a specific name could be paused due to repeated failures, but you can still get all of your other certs like normal. So a large hosting provider doesn’t have to fear that its certificate issuance on behalf of one customer will be affected by renewal failures related to a problem with a different customer’s domain name. The account-specificity of the pause, in turn, means that validation failures from one subscriber or device won’t prevent a different subscriber or device from attempting to validate the same name, as long as the devices in question don’t share a single Let’s Encrypt account.
In September 2024, we began applying our zombie rate limits manually by pausing about 21,000 of the most recurrently-failing account-hostname pairs, those which were consistently repeating the same failed requests many times per day, every day. After implementing that first round of pauses, we immediately saw a significant impact on our failed request rates. As we announced at that time, we also began using a formula to automatically pause other zombie client account-hostname pairs from December 2024 onward. The associated new rate limit is called “Consecutive Authorization Failures per Hostname Per Account” (and is independent of the existing “Authorization Failures per Hostname Per Account” limit, which resets every hour).
This formula relates to the frequency of successive failed issuance requests for the same domain name by the same Let’s Encrypt account. It applies only to failures that happen again and again, with no successful issuances at all in between: a single successful validation immediately resets the rate limit all the way to zero. Like all of our rate limits, this is not a punitive measure but is simply intended to reduce the waste of resources. So, we decided to set the thresholds rather high in the expectation that we would catch only the most disruptive zombie clients, and ultimately only those clients that were extremely unlikely to succeed in the future based on their substantial history of failed requests. We don’t hurry to block requesters as zombies: according to our current formula, client software following the default established by EFF’s Certbot (two renewal attempts per day) would be paused as a zombie only after about ten years of constant failures. More aggressive failed issuance attempts will get a client paused sooner, but clients will generally have to fail hundreds or thousands of attempts in a row before they are paused.
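In sketch form, the counting behavior described above might look like the following. The threshold is an invented figure chosen to echo the roughly-ten-years-at-two-attempts-per-day example, not our real formula:

```go
package main

import "fmt"

// key identifies an account-hostname pair (illustrative).
type key struct {
	AccountID int64
	Hostname  string
}

// failures counts consecutive failed validations per pair. Any
// success resets the count; a pause triggers only after a long
// unbroken run of failures.
var failures = map[key]int{}

// pauseThreshold is illustrative: roughly two attempts per day for
// about ten years, matching the Certbot-default example above.
const pauseThreshold = 7300

func recordResult(k key, ok bool) (nowPaused bool) {
	if ok {
		failures[k] = 0 // a single success resets the limit to zero
		return false
	}
	failures[k]++
	return failures[k] >= pauseThreshold
}

func main() {
	k := key{42, "old.example.com"}
	paused := false
	for i := 0; i < pauseThreshold && !paused; i++ {
		paused = recordResult(k, false)
	}
	fmt.Println(paused) // true after pauseThreshold consecutive failures
}
```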
Most subscribers using mainstream client applications with default configurations will never encounter this rate limit, even if they forget to deactivate renewal attempts for domains that are no longer pointed at their servers. As described below, our current limit is already providing noticeable benefits with minimal disruption, and we’re likely to tighten it a bit in the near future, so it will trigger after somewhat fewer consecutive failures.
Self-Service Unpausing
A key feature in our zombie issuance pausing mechanism is self-service unpausing. Whenever an account-hostname pair is paused, any new certificate requests for that hostname submitted by that account are immediately rejected. But this means that the “one successful validation immediately resets the rate limit counter” feature can no longer come into effect: once a pair is paused, the client can't even attempt validation anymore.
So every rejection comes with an error message explaining what has happened, along with a custom link that immediately unpauses that account-hostname pair and removes any other pauses on the same account. This gives subscribers who notice that issuance is failing, and who want to get it working again, a straightforward way to tell Let's Encrypt that they're aware of the recurring failures and still plan to use a particular account. As soon as subscribers notify us via the self-service link, they can issue certificates again.
Currently, the user interface for an affected subscriber looks like this:
[Screenshot: the self-service unpause page]
This link is provided via an ACME error message in response to any request that is blocked due to a paused account-hostname pair.
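For client developers, the rejection arrives as a standard ACME problem document (RFC 8555). The wording and URL below are hypothetical, but a client could surface the link to its operator rather than blindly retrying:

```go
package main

import (
	"encoding/json"
	"fmt"
	"regexp"
)

// problem is the standard ACME error body (RFC 8555, section 6.7).
type problem struct {
	Type   string `json:"type"`
	Detail string `json:"detail"`
	Status int    `json:"status"`
}

func main() {
	// Hypothetical rejection for a paused account-hostname pair.
	body := []byte(`{
	  "type": "urn:ietf:params:acme:error:rateLimited",
	  "detail": "Your account is paused for this hostname; visit https://example.org/unpause?ticket=abc123 to unpause",
	  "status": 429
	}`)

	var p problem
	if err := json.Unmarshal(body, &p); err != nil {
		panic(err)
	}
	// Surface the self-service link to the operator instead of retrying.
	link := regexp.MustCompile(`https://\S+`).FindString(p.Detail)
	fmt.Println("unpause link:", link)
}
```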
As it’s turned out, the unpause option shown above has only been used by about 3% of affected accounts! This goes to show that most of the zombies we’ve paused were, in fact, well and truly forgotten about.
However, the unpause feature is there for whenever it’s needed, and there may be cases when it will become more important. A very large integration could trigger the zombie-related rate limits if a newly-introduced software bug causes what looks like a very high volume of zombie requests in a very short time. In that case, once that bug has been noticed and fixed, an integrator may need to unpause its issuance on behalf of lots of customers at once. Our unpause feature permits unpausing 50,000 domain names on a single account at a time, so even the largest integrators can get themselves unpaused expeditiously in this situation.
Conclusion
We’ve been very happy with the results of our zombie mitigation measures, and, as far as we can tell, there’s been almost no impact for subscribers! Our statistics indicate that we’ve managed to reduce the load on our infrastructure while causing no detectable harm or inconvenience to subscribers’ valid issuance requests.
Since implementing the manual pauses in September and the automated pauses in December, we've seen:
- Over 100,000 account-hostname pairs paused for excessive failures.
- Zero (!) associated complaints or support requests.
- About 3,200 manual unpauses of issuance.
- Failed certificate orders down by about 30% so far, a figure that should continue to fall as we fine-tune the rate limit formula and catch more zombie clients.
The new rate limit and the self-service unpause system are also ready to deal with circumstances that might produce more zombie clients in the future. For instance, we’ve announced that we’re going to be discontinuing renewal reminder emails soon. If some subscribers overlook failed renewals in the future, we might see more paused clients that result from unintentional renewal failures. We think taking advantage of the existing self-service unpause feature will be straightforward in that case. But it’s much better to notice problems and get them fixed up front, so please remember to set up your own monitoring to avoid unnoticed renewal failures in the future.
If you’re a subscriber who’s had occasion to use the self-service unpause feature, we’d love your feedback on the Community Forum about your experience using the feature and the circumstances that surrounded your account’s getting paused.
Also, if you’re a Let’s Encrypt client developer, please remember to make renewal requests at a random time (not precisely at midnight) so that the load on our infrastructure is smoothed out. You can also reduce the impact of zombie renewals by repeating failed requests somewhat less frequently over time (a “back-off” strategy), especially if the failure reason makes it look like a domain name may no longer be in use at all.
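Here is a minimal sketch of both suggestions, a jittered schedule plus an exponential back-off with a cap; the constants are illustrative, not an official recommendation:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRenewalDelay spreads routine renewal attempts over a random
// offset so clients don't all fire at midnight UTC.
func nextRenewalDelay() time.Duration {
	base := 12 * time.Hour
	jitter := time.Duration(rand.Int63n(int64(base)))
	return base + jitter // somewhere between 12h and 24h
}

// backoffDelay doubles the wait after each consecutive failure,
// capped at one week, so a dead domain generates ever less load.
func backoffDelay(consecutiveFailures int) time.Duration {
	d := time.Hour
	for i := 0; i < consecutiveFailures; i++ {
		d *= 2
	}
	if max := 7 * 24 * time.Hour; d > max {
		d = max
	}
	return d
}

func main() {
	fmt.Println("next routine attempt in:", nextRenewalDelay())
	for _, n := range []int{0, 3, 10} {
		fmt.Printf("after %d failures, wait %v\n", n, backoffDelay(n))
	}
}
```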
At Let’s Encrypt we know that building a secure Internet isn’t just a technical challenge—it’s a long-term commitment. Over the past decade we’ve made enormous strides: from issuing billions of TLS certificates to continually innovating to keep the web safer and more accessible. But none of this would be possible without recurring donations from individuals and organizations around the world.
Recurring donations are more than just financial support; they allow us to plan, innovate, and keep improving with confidence, knowing that month after month, year after year, our supporters are there. This consistent backing empowers us to maintain a secure, privacy-respecting Internet for all.
Our tenth anniversary tagline, Encryption for Everybody, highlights this vision. It’s both a technical goal and a fundamental belief that secure communication should be available to everyone, everywhere.
When we asked our recurring donors why they give, their responses affirmed how essential this commitment is. One longtime supporter shared:
Supporting Let's Encrypt aligns with my belief in a privacy-conscious world, where encrypted communication is the default.
For some, it’s about paying it forward, helping future users benefit as they once did:
For my 18th birthday, I got my last name as a domain. As a young tech enthusiast with little money, Let's Encrypt made it possible for me to get a TLS certificate and learn about technology. Back then, I was a student using it for free. Now that I have a stable income, donating is my way of giving back and helping others have the same opportunities I did.
The next decade of Let’s Encrypt will likely be about maintaining that commitment to encryption for everybody. It’s about ensuring that our work remains reliable, accessible, and—most importantly—supported by people who believe in what we do. To everyone who’s been part of this journey, thank you. We couldn’t do it without you.
During Let’s Encrypt’s 10th Anniversary Year, we’re celebrating our community and reflecting on our journey. We’d be thrilled to hear from you. Connect with us on LinkedIn, our community forum, or email us at outreach@letsencrypt.org. Let’s keep building a secure Internet together!
Let’s Encrypt is a project of the nonprofit Internet Security Research Group, a 501(c)(3) nonprofit. To support our work, visit letsencrypt.org/donate.
Wed, 21 May 2025 00:00:00 +0000
Ending TLS Client Authentication Certificate Support in 2026
Let’s Encrypt will no longer include the “TLS Client Authentication” Extended Key Usage (EKU) in our certificates beginning in 2026. Most users who use Let’s Encrypt to secure websites won’t be affected and won’t need to take any action. However, if you use Let’s Encrypt certificates as client certificates to authenticate to a server, this change may impact you.
To minimize disruption, Let’s Encrypt will roll this change out in multiple stages, using ACME Profiles:
- Today: Let's Encrypt already excludes the Client Authentication EKU on our “tlsserver” ACME profile. You can verify compatibility by issuing certificates with this profile now.
- October 1, 2025: Let's Encrypt will launch a new “tlsclient” ACME profile, which will retain the TLS Client Authentication EKU. Users who need additional time to migrate can opt in to this profile.
- February 11, 2026: the default “classic” ACME profile will no longer contain the Client Authentication EKU.
- May 13, 2026: the “tlsclient” ACME profile will no longer be available, and no further certificates with the Client Authentication EKU will be issued.
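Under the ACME profiles mechanism, a client selects a profile by name in its new-order request. The sketch below shows the general shape of such a payload, following the ACME profiles draft; whether and how your ACME client exposes profile selection depends on the client:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// newOrderRequest mirrors an ACME new-order payload, extended with
// the "profile" field from the ACME profiles draft.
type newOrderRequest struct {
	Identifiers []identifier `json:"identifiers"`
	Profile     string       `json:"profile,omitempty"`
}

type identifier struct {
	Type  string `json:"type"`
	Value string `json:"value"`
}

func main() {
	// Opting in to the tlsserver profile ahead of the 2026 cutover.
	req := newOrderRequest{
		Identifiers: []identifier{{Type: "dns", Value: "www.example.com"}},
		Profile:     "tlsserver",
	}
	out, _ := json.MarshalIndent(req, "", "  ")
	fmt.Println(string(out))
}
```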
Once this transition is complete, Let's Encrypt will switch to issuing with new intermediate Certificate Authorities which likewise do not contain the TLS Client Authentication EKU.
As background, every certificate includes a list of intended uses, known as Extended Key Usages (EKUs). Let's Encrypt certificates have included two EKUs: TLS Server Authentication and TLS Client Authentication.
- TLS Server Authentication is used to authenticate connections to TLS Servers, like websites.
- TLS Client Authentication is used by clients to authenticate themselves to a server. This feature is not typically used on the web, and is not required on the certificates used on a website.
After this change is complete, only TLS Server Authentication will be available from Let’s Encrypt.
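To see which EKUs one of your certificates actually carries, you can inspect it directly. Here is a small Go example using only the standard library; the file path is a placeholder:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// cert.pem is a placeholder path to the certificate to inspect.
	data, err := os.ReadFile("cert.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM data found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	for _, eku := range cert.ExtKeyUsage {
		switch eku {
		case x509.ExtKeyUsageServerAuth:
			fmt.Println("TLS Server Authentication")
		case x509.ExtKeyUsageClientAuth:
			fmt.Println("TLS Client Authentication")
		default:
			fmt.Println("other EKU:", eku)
		}
	}
}
```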
This change is prompted by changes to Google Chrome’s root program requirements, which impose a June 2026 deadline to split TLS Client and Server Authentication into separate PKIs. Many uses of client authentication are better served by a private certificate authority, and so Let’s Encrypt is discontinuing support for TLS Client Authentication ahead of this deadline.
Wed, 14 May 2025 00:00:00 +0000
How Pebble Supports ACME Client Developers
The Certificate Transparency ecosystem has been improving transparency for the web PKI since 2013. It helps make clear exactly what certificates each certificate authority has issued and makes sure errors or compromises of certificate authorities are detectable.
Let’s Encrypt participates in CT both as a certificate issuer and as a log operator. For the past year, we’ve also been running an experiment to help validate a next-generation design for Certificate Transparency logs. That experiment is now nearing a successful conclusion. We’ve demonstrated that the new architecture (called the “Static CT API”) works well, providing greater efficiency and making it easier to run huge and reliable CT log services with comparatively modest resources. The Static CT API also makes it easier to download and share data from CT logs.
The Sunlight log implementation, alongside other Static CT API log implementations, is now on a path to production use. Browsers are now officially accepting Static CT API logs into their log programs as a means to help guarantee that the contents of CA-issued certificates are all publicly disclosed and publicly accessible (see Safari’s and Chrome’s recent announcements), although the browsers also require the continued use of a traditional RFC 6962 log alongside the new type.
All of this is good news for everyone who runs, submits certificates to, or monitors a CT log: as the new architecture gets adopted, we can expect to see more organizations running more logs, at lower cost, and with greater overall capacity to keep up with the large volume of publicly-trusted certificates.
Certificate Transparency
Certificate Transparency (CT) was introduced in 2013 in response to concerns about how Internet users could detect misbehavior and compromise of certificate authorities. Prior to CT, it was possible for a CA to issue an inaccurate or malicious certificate that could be used to attack a relatively small number of users, and that might never come to wider attention. A team led by Google responded to this by creating a transparency log mechanism, where certificate authorities (like Let’s Encrypt) must disclose all of the certificates that we issue by submitting them to public log services. Web browsers now generally reject certificates unless the certificates include cryptographic proof (“Signed Certificate Timestamps”, or SCTs) demonstrating that they were submitted to and accepted by such logs.
The CT logs themselves use a cryptographic append-only ledger to prove that they haven’t deleted or modified their records. There are currently over a dozen CT log services, most of them also run by certificate authorities, including Let’s Encrypt’s own Oak log.
The Static CT API
The original 2013 CT log design has been used with relatively few technical changes since it was first introduced, but several other transparency logging systems have been created in other areas, such as sumdb for Golang, which helps ensure that the contents of Golang package updates are publicly recorded. While they were originally inspired by CT, more-recently invented transparency logs have improved on its design.
The current major evolution of CT was led by Filippo Valsorda, a cryptographer with an interest in transparency log mechanisms, with help from others in the CT ecosystem. Portions of the new design are directly based on sumdb. In addition to designing the new architecture, Valsorda also wrote the implementation that we’ve been using, called Sunlight, with support from Let’s Encrypt. We’re excited to see that there are now at least three other compatible implementations: Google’s trillian-tessera, Cloudflare’s Azul, and an independent project called Itko.
The biggest change for the Static CT API is that logs are now represented, and downloaded by verifiers, as simple collections of flat files (called “tiles,” so some implementers have also been referring to these as “tiled logs” or “tlogs”). Anyone who wants to download log data can do so just by downloading these files. This is great for log operators because these simple file downloads can be distributed in various ways, including caching by a CDN, which was less practical and efficient for the classic CT API.
The new design is also simpler and more efficient from the log operator’s perspective, making it cheaper to run logs. As we said last year, this may enable us and other operators to increase reliability and availability by running several separate logs, likely with lower overall resource requirements than a single traditional log.
Our Sunlight experiment

For the past year, we’ve run three Sunlight logs, called Twig, Willow, and Sycamore. We’ve been logging all of our own issued certificates, which represent a majority of the total volume of all publicly-trusted certificates, into our Sunlight logs. Sunlight logged these certificates quickly and correctly on relatively modest server hardware. Notably, each log’s write side was handled comfortably by just a single server. We also achieved high availability for these log services throughout the course of this experiment. (Because our Sunlight logs are not yet trusted by web browsers, we didn’t include the SCT proofs that they returned to us in the actual certificates we gave out to our subscribers; those proofs wouldn’t have been of use to our subscribers yet and would just have taken up space.)
A potential failure mode of traditional CT logs is that they could be unacceptably slow in incorporating newly-submitted certificates (known as missing the maximum merge delay), which can result in a log becoming distrusted. This isn’t a possibility for our new Sunlight-based logs: they always completely incorporate newly-submitted certificates before returning an SCT to the submitter, so the effective merge delay is zero! Of course, any log can suffer outages for a variety of reasons, but this feature of Sunlight makes it less likely that any outages will be fatal to a log’s continued operation.
We’ve demonstrated that Sunlight and the Static CT API work in practice, and this demonstration has helped to confirm the browser developers’ hope that Static CT API logs can become an officially-supported part of CT. As a result, the major browsers that enforce CT have now permitted Static CT API logs to apply for inclusion in browsers as publicly-trusted logs, and we’re preparing to apply for this status for our Willow and Sycamore logs with the Chrome and Safari CT log programs.
Let’s Encrypt will run at least these two logs, and possibly others over time, for the foreseeable future. Once they’re trusted by browsers, we’ll encourage other CAs to submit to them as well, and we’ll begin including SCTs from these logs in our own certificates (alongside SCTs from traditional CT logs).
How to participate
The new Static CT API and the rollout of tile-based logs will bring various changes and opportunities for community members.
New Certificate Transparency log operators
Companies and non-profit organizations could help support the web PKI by running a CT log and applying for it to be publicly trusted. Implementations like Sunlight will have substantially lower resource requirements than first-generation CT logs, particularly when cached behind a CDN. The biggest resource demands for a log operator will be storage and upstream bandwidth. A publicly-trusted log is also expected to maintain relatively high availability, because CAs need logs to be available in order to continue issuing certificates.
We don’t have statistics to share about the exact resource requirements for such a log yet, but after we have practical experience running a fully publicly-trusted Sunlight log, we should be able to make this more concrete. As noted above, the compute side of the log can be handled by a single server. Sunlight author Filippo Valsorda has recently started running a Sunlight log—also on just a single server—and offered more detailed cost breakdowns for that log’s setup, with an estimated total cost around $10,000 per year. The costs for our production Static CT API logs may be higher than those for Filippo’s log, but still far less than the costs for our traditional RFC 6962 logs.
As with trust decisions about CAs, browser developers are the authorities about which CT logs become publicly trusted. Although any person or organization can run a log, browser developers will generally prefer to trust logs whose continued availability they’re confident of—typically those run by stable organizations with experience running some form of public Internet services. Unlike becoming a certificate authority, running a log does not require a formal audit, as the validation of the log’s availability and correctness can be performed purely by observation.
Certificate authorities
Once the Willow and Sycamore logs are trusted by browsers, our fellow certificate authorities can choose to start logging certificates to them as part of their issuance processes. (Initially, you should still include at least one SCT from a traditional CT log in each certificate.) The details, including the log API endpoints and keys, are available at our CT log page. You can start submitting to these logs right away if you prefer; just bear in mind that the SCTs they return aren’t useful to subscribers yet, and won’t be useful until browsers are updated to trust the new logs.
CT data users
You can monitor CT in order to watch for certificate issuances for your own domain names, or as part of monitoring or security products or services, or for Internet security research purposes. Many of our colleagues have been doing this for some time as a part of various tools they maintain. The Static CT API should make this easier, because you’ll be able to download and share log tiles as sets of ordinary files.
If you already run such monitoring tools, please note that you’ll need to update your data pipeline in order to access Static CT API logs; since the read API is not backwards-compatible, CT API clients will need to be modified to support the new API. Without updated tools, your view of the CT system will become partial!
Also note that getting a complete view of all of CT will still require downloading data from traditional logs, which will probably continue to be true for several years.
Software developers
As logs based on the new API enter production use, it will be important to have tools to interact with and search these logs. We can all benefit from more software that understands how to do this. Since file downloads are such a familiar piece of software functionality, it will probably be easier for developers to develop against the new API compared to the original one.
We’ve also continued to see greater integration of transparency logging tools into other kinds of services, such as software updates. There’s a growing transparency log ecosystem that’s always in need of more tools and integrations. As we mentioned above, transparency logs are increasingly learning from one another, and there are also mechanisms for more direct integration between different kinds of transparency logs (known as “witnessing”). Software developers can help improve different aspects of Internet security by contributing to this active and growing area.
Conclusion
The Certificate Transparency community and larger transparency logging community have experienced a virtuous cycle of innovation, sharing ideas and implementation code between different systems and demonstrating the feasibility of new mechanisms and functionality. With the advent of tile-based logging in CT, the state of the art has moved forward in a way that helps log operators run our logs much more efficiently without compromising security.
We’re proud to have participated in this experiment and the engineering conversation around the evolution of logging architectures. Now that we’ve shown how well the new API really works at scale, we look forward to having publicly-trusted Sunlight logs later this year!
Every night, right around midnight (mainly UTC), a horde of zombies wakes up and clamors for … digital certificates!
The zombies in question are abandoned or misconfigured Internet servers and ACME clients that have been set to request certificates from Let’s Encrypt. As our certificates last for at most 90 days, these zombie clients’ software knows that their certificates are out-of-date and need to be replaced. What they don’t realize is that their quest for new certificates is doomed! These devices are cursed to seek certificates again and again, never receiving them.
But they do use up a lot of certificate authority resources in the process.
The Zombie Client Problem
Unlike a human being, software doesn’t give up in frustration, or try to modify its approach, when it repeatedly fails at the same task. Our emphasis on automation means that the vast majority of Let’s Encrypt certificate renewals are performed by automated software. This is great when those renewals succeed, but it also means that forgotten clients and devices can continue requesting renewals unsuccessfully for months, or even years.
How might that happen? Most often, it happens when a device no longer has a domain name pointed to it. The device itself doesn’t know that this has changed, so it treats renewal failures as transient even though they are actually permanent. For instance:
- An organization may have allowed a domain name registration to lapse because it is no longer needed, but its servers are still configured to request certs for it.
- Or, a home user stopped using a particular dynamic-DNS domain with a network-attached storage device, but is still using that device at home. The device doesn’t realize that the user no longer expects to use the name, so it keeps requesting certs for it.
- Or, a web hosting or CDN customer migrated to a different service provider, but never informed the old service provider. The old service provider’s servers keep requesting certs unsuccessfully. If the customer was in a free service tier, there might not be invoices or charges reminding the customer to cancel the service.
- Or any number of other, subtler changes in a subscriber’s infrastructure, such as changing a firewall rule or some webserver configuration.
At the scale of Let’s Encrypt, which now covers hundreds of millions of names, scenarios like these have become common, and their impact has become substantial. In 2024, we noticed that about half of all certificate requests to the Let’s Encrypt ACME API came from about a million accounts that never successfully complete any validations. Many of these had completed validations and issued certificates sometime in the past, but nowadays every single one of their validation attempts fails, and they show no signs that this will change anytime soon.
Unfortunately, trying to validate those futile requests still uses resources. Our CA software has to generate challenges, reach out and attempt to validate them over the Internet, detect and report failures, and record all of the associated information in our databases and audit logs. And over time, we’ve seen more and more recurring failures: accounts that always fail their issuance requests have been growing at around 18% per year.
In January, we mentioned that we had been addressing the zombie client problem through our rate limit system. This post provides more detail on that progress.
Our Rate Limit Philosophy
If you’ve used Let’s Encrypt as a subscriber, you may have run into one of our rate limits at some point, maybe during your initial setup process. We have eight different kinds of rate limits in place now; as our January post describes, they’ve become more algorithmically sophisticated and grown to address a wider range of problems. A key principle for Let’s Encrypt is that our rate limiting is not a punishment. We don’t think of rate limits as a way of retaliating against a client for misbehavior. Rate limits are simply a tool to maximize the efficient use of our limited resources and prevent people and programs from using up those resources for no constructive purpose.
We’ve consistently tried to design our rate limit mechanisms in line with that philosophy. So if a misconfiguration or misunderstanding has caused excessive requests in the past, we’re still happy to welcome the user in question back and start issuing them certificates again—once the problem has been addressed. We want the rate limits to put a brake on wasteful use of our systems, but not to frustrate users who are actively trying to make Let’s Encrypt work for them.
In addition, we’ve always implemented our rate limits to err on the side of permissiveness. For example, if the Redis instances where rate limits are tracked have an outage or lose data, the system is designed to permit more issuance rather than less issuance as a result.
We wanted to create additional limits that would target zombie clients, but in a correspondingly non-punitive way that would avoid any disruption to valid issuance, and welcome subscribers back quickly if they happened to notice and fix a long-time problem with their setups.
Our Zombie-Related Rate Limits and Their Impact
In planning a new zombie-specific response, we decided on a “pausing” approach, which can temporarily limit an account’s ability to proceed with certificate requests. The core idea is that, if a particular account consistently fails to complete validation for a particular hostname, we’ll pause that account-hostname pair. The pause means that any new order requests from that account for that hostname will be rejected immediately, before we get to the resource-intensive validation phase.
This approach is more finely targeted than pausing an entire account. Pausing account-hostname pairs means that your ability to issue certs for a specific name could be paused due to repeated failures, but you can still get all of your other certs like normal. So a large hosting provider doesn’t have to fear that its certificate issuance on behalf of one customer will be affected by renewal failures related to a problem with a different customer’s domain name. The account-specificity of the pause, in turn, means that validation failures from one subscriber or device won’t prevent a different subscriber or device from attempting to validate the same name, as long as the devices in question don’t share a single Let’s Encrypt account.
In September 2024, we began applying our zombie rate limits manually by pausing about 21,000 of the most recurrently-failing account-hostname pairs, those which were consistently repeating the same failed requests many times per day, every day. After implementing that first round of pauses, we immediately saw a significant impact on our failed request rates. As we announced at that time, we also began using a formula to automatically pause other zombie client account-hostname pairs from December 2024 onward. The associated new rate limit is called “Consecutive Authorization Failures per Hostname Per Account” (and is independent of the existing “Authorization Failures per Hostname Per Account” limit, which resets every hour).
This formula relates to the frequency of successive failed issuance requests for the same domain name by the same Let’s Encrypt account. It applies only to failures that happen again and again, with no successful issuances at all in between: a single successful validation immediately resets the rate limit all the way to zero. Like all of our rate limits, this is not a punitive measure but is simply intended to reduce the waste of resources. So, we decided to set the thresholds rather high in the expectation that we would catch only the most disruptive zombie clients, and ultimately only those clients that were extremely unlikely to succeed in the future based on their substantial history of failed requests. We don’t hurry to block requesters as zombies: according to our current formula, client software following the default established by EFF’s Certbot (two renewal attempts per day) would be paused as a zombie only after about ten years of constant failures. More aggressive failed issuance attempts will get a client paused sooner, but clients will generally have to fail hundreds or thousands of attempts in a row before they are paused.
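To make those mechanics concrete, here is a minimal sketch in Go of consecutive-failure tracking with pause-on-threshold. Everything here is illustrative rather than Let’s Encrypt’s actual implementation: the threshold is arbitrary, and the production system keeps its counts in Redis with a more gradual formula.

package main

import "fmt"

// Hypothetical cutoff; the real formula is tuned so that default Certbot
// behavior (two renewal attempts per day) would take about a decade to trip it.
const pauseThreshold = 300

type pairKey struct {
    account  int64
    hostname string
}

// tracker is an in-memory stand-in for the Redis-backed counters described above.
type tracker struct {
    failures map[pairKey]int
    paused   map[pairKey]bool
}

// recordValidation updates the consecutive-failure count for one
// account-hostname pair; a single success resets it all the way to zero.
func (t *tracker) recordValidation(acct int64, host string, ok bool) {
    k := pairKey{acct, host}
    if ok {
        t.failures[k] = 0
        return
    }
    t.failures[k]++
    if t.failures[k] >= pauseThreshold {
        t.paused[k] = true // new orders for this pair are rejected up front
    }
}

func main() {
    t := &tracker{failures: map[pairKey]int{}, paused: map[pairKey]bool{}}
    for i := 0; i < pauseThreshold; i++ {
        t.recordValidation(42, "retired.example.com", false)
    }
    fmt.Println(t.paused[pairKey{42, "retired.example.com"}]) // true
}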
Most subscribers using mainstream client applications with default configurations will never encounter this rate limit, even if they forget to deactivate renewal attempts for domains that are no longer pointed at their servers. As described below, our current limit is already providing noticeable benefits with minimal disruption, and we’re likely to tighten it a bit in the near future, so it will trigger after somewhat fewer consecutive failures.
Self-Service Unpausing
A key feature in our zombie issuance pausing mechanism is self-service unpausing. Whenever an account-hostname pair is paused, any new certificate requests for that hostname submitted by that account are immediately rejected. But this means that the “one successful validation immediately resets the rate limit counter” feature can no longer come into effect: once a pair is paused, the account can’t even attempt validation for that hostname anymore.
So every rejection comes with an error message explaining what has happened and a custom link that can be used to immediately unpause that account-hostname pair and remove any other pauses on the same account at the same time. The point of this is that subscribers who notice at some point that issuance is failing and want to intervene to get it working again have a straightforward option to let Let’s Encrypt know that they’re aware of the recurring failures and are still planning to use a particular account. As soon as subscribers notify us via the self-service link, they’ll be able to issue certificates again.
Currently, the user interface for an affected subscriber looks like this:
[Screenshot: the self-service unpause page shown to affected subscribers]
This link would be provided via an ACME error message in response to any request that was blocked due to a paused account-hostname pair.
As it’s turned out, the unpause option shown above has only been used by about 3% of affected accounts! This goes to show that most of the zombies we’ve paused were, in fact, well and truly forgotten about.
However, the unpause feature is there for whenever it’s needed, and there may be cases when it will become more important. A very large integration could trigger the zombie-related rate limits if a newly-introduced software bug causes what looks like a very high volume of zombie requests in a very short time. In that case, once that bug has been noticed and fixed, an integrator may need to unpause its issuance on behalf of lots of customers at once. Our unpause feature permits unpausing 50,000 domain names on a single account at a time, so even the largest integrators can get themselves unpaused expeditiously in this situation.
Conclusion
We’ve been very happy with the results of our zombie mitigation measures, and, as far as we can tell, there’s been almost no impact for subscribers! Our statistics indicate that we’ve managed to reduce the load on our infrastructure while causing no detectable harm or inconvenience to subscribers’ valid issuance requests.
Since implementing the manual pauses in September and the automated pauses in December, we’ve seen:
- Over 100,000 account-hostname pairs paused for excessive failures.
- Zero (!) associated complaints or support requests.
- About 3,200 subscribers manually unpausing issuance via the self-service link.
- Failed certificate orders down by about 30% so far, with further declines expected as we fine-tune the rate limit formula and catch more zombie clients.
The new rate limit and the self-service unpause system are also ready to deal with circumstances that might produce more zombie clients in the future. For instance, we’ve announced that we’re going to be discontinuing renewal reminder emails soon. If some subscribers overlook failed renewals in the future, we might see more paused clients that result from unintentional renewal failures. We think taking advantage of the existing self-service unpause feature will be straightforward in that case. But it’s much better to notice problems and get them fixed up front, so please remember to set up your own monitoring to avoid unnoticed renewal failures in the future.
If you’re a subscriber who’s had occasion to use the self-service unpause feature, we’d love your feedback on the Community Forum about your experience using the feature and the circumstances that surrounded your account’s getting paused.
Also, if you’re a Let’s Encrypt client developer, please remember to make renewal requests at a random time (not precisely at midnight) so that the load on our infrastructure is smoothed out. You can also reduce the impact of zombie renewals by repeating failed requests somewhat less frequently over time (a “back-off” strategy), especially if the failure reason makes it look like a domain name may no longer be in use at all.
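For client developers who want a concrete starting point, here is a small Go sketch of that advice; the base interval, weekly cap, and one-hour jitter window are arbitrary choices of ours, not values Let’s Encrypt prescribes.

package main

import (
    "fmt"
    "math/rand"
    "time"
)

// nextAttempt doubles the wait after each consecutive failure up to a cap
// (a simple capped exponential back-off) and adds up to an hour of random
// jitter so that many clients don't all fire at the same instant.
func nextAttempt(base time.Duration, consecutiveFailures int) time.Duration {
    d := base
    for i := 0; i < consecutiveFailures && d < 7*24*time.Hour; i++ {
        d *= 2
    }
    return d + time.Duration(rand.Int63n(int64(time.Hour)))
}

func main() {
    for fails := 0; fails <= 5; fails++ {
        fmt.Printf("after %d failures, wait ~%v\n", fails, nextAttempt(12*time.Hour, fails))
    }
}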
At Let’s Encrypt we know that building a secure Internet isn’t just a technical challenge—it’s a long-term commitment. Over the past decade we’ve made enormous strides: from issuing billions of TLS certificates to continually innovating to keep the web safer and more accessible. But none of this would be possible without recurring donations from individuals and organizations around the world.
Recurring donations are more than just financial support; they allow us to plan, innovate, and keep improving with confidence, knowing that month after month, year after year, our supporters are there. This consistent backing empowers us to maintain a secure, privacy-respecting Internet for all.
Our tenth anniversary tagline, Encryption for Everybody, highlights this vision. It’s both a technical goal and a fundamental belief that secure communication should be available to everyone, everywhere.
When we asked our recurring donors why they give, their responses affirmed how essential this commitment is. One longtime supporter shared:
Supporting Let's Encrypt aligns with my belief in a privacy-conscious world, where encrypted communication is the default.
For some, it’s about paying it forward, helping future users benefit as they once did:
For my 18th birthday, I got my last name as a domain. As a young tech enthusiast with little money, Let's Encrypt made it possible for me to get a TLS certificate and learn about technology. Back then, I was a student using it for free. Now that I have a stable income, donating is my way of giving back and helping others have the same opportunities I did.
The next decade of Let’s Encrypt will likely be about maintaining that commitment to encryption for everybody. It’s about ensuring that our work remains reliable, accessible, and—most importantly—supported by people who believe in what we do. To everyone who’s been part of this journey, thank you. We couldn’t do it without you.
During Let’s Encrypt’s 10th Anniversary Year, we’re celebrating our community and reflecting on our journey. We’d be thrilled to hear from you. Connect with us on LinkedIn, our community forum, or email us at outreach@letsencrypt.org. Let’s keep building a secure Internet together!
Let’s Encrypt is a project of the nonprofit Internet Security Research Group, a 501(c)(3) nonprofit. To support our work, visit letsencrypt.org/donate.
Let’s Encrypt will no longer include the “TLS Client Authentication” Extended Key Usage (EKU) in our certificates beginning in 2026. Most users who use Let’s Encrypt to secure websites won’t be affected and won’t need to take any action. However, if you use Let’s Encrypt certificates as client certificates to authenticate to a server, this change may impact you.
To minimize disruption, Let’s Encrypt will roll this change out in multiple stages, using ACME Profiles:
- Today: Let’s Encrypt already excludes the Client Authentication EKU on our tlsserver ACME profile. You can verify compatibility by issuing certificates with this profile now.
- October 1, 2025: Let’s Encrypt will launch a new tlsclient ACME profile which will retain the TLS Client Authentication EKU. Users who need additional time to migrate can opt in to this profile (see the sketch after this list).
- February 11, 2026: the default classic ACME profile will no longer contain the Client Authentication EKU.
- May 13, 2026: the tlsclient ACME profile will no longer be available, and no further certificates with the Client Authentication EKU will be issued.
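At the protocol level, opting in means naming a profile in the ACME new-order request. The sketch below shows what such a payload could look like under our reading of the draft ACME profiles extension; the profile field name follows that draft, the struct is purely illustrative, and real clients should expose this through their own configuration.

package main

import (
    "encoding/json"
    "fmt"
)

type identifier struct {
    Type  string `json:"type"`
    Value string `json:"value"`
}

// orderRequest sketches a new-order payload that names a profile; the
// JWS signing and the rest of the ACME flow are omitted here.
type orderRequest struct {
    Identifiers []identifier `json:"identifiers"`
    Profile     string       `json:"profile,omitempty"`
}

func main() {
    body, _ := json.MarshalIndent(orderRequest{
        Identifiers: []identifier{{Type: "dns", Value: "example.com"}},
        Profile:     "tlsclient", // the temporary opt-in profile described above
    }, "", "  ")
    fmt.Println(string(body))
}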
Once this is completed, Let’s Encrypt will switch to issuing with new intermediate Certificate Authorities which also do not contain the TLS Client Authentication EKU.
For background, all certificates include a list of intended uses, known as Extended Key Usages (EKUs). Let’s Encrypt certificates have included two EKUs: TLS Server Authentication and TLS Client Authentication.
- TLS Server Authentication is used to authenticate connections to TLS Servers, like websites.
- TLS Client Authentication is used by clients to authenticate themselves to a server. This feature is not typically used on the web, and is not required on the certificates used on a website.
After this change is complete, only TLS Server Authentication will be available from Let’s Encrypt.
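If you’re unsure whether a certificate you depend on carries the Client Authentication EKU, you can check with Go’s standard library. A minimal sketch that reads a PEM file named on the command line:

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "log"
    "os"
)

func main() {
    if len(os.Args) != 2 {
        log.Fatal("usage: ekucheck cert.pem")
    }
    data, err := os.ReadFile(os.Args[1])
    if err != nil {
        log.Fatal(err)
    }
    block, _ := pem.Decode(data)
    if block == nil {
        log.Fatal("no PEM block found")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        log.Fatal(err)
    }
    for _, eku := range cert.ExtKeyUsage {
        if eku == x509.ExtKeyUsageClientAuth {
            fmt.Println("certificate includes TLS Client Authentication")
            return
        }
    }
    fmt.Println("no TLS Client Authentication EKU")
}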
This change is prompted by changes to Google Chrome’s root program requirements, which impose a June 2026 deadline to split TLS Client and Server Authentication into separate PKIs. Many uses of client authentication are better served by a private certificate authority, and so Let’s Encrypt is discontinuing support for TLS Client Authentication ahead of this deadline.
How Pebble Supports ACME Client Developers
Together with the IETF community, we created the ACME standard to support completely automated certificate issuance. This open standard is now supported by dozens of clients. On the server side, did you know that we have not one but two open-source ACME server implementations?
The big implementation, which we use ourselves in production, is called Boulder. Boulder handles all of the facets and details needed for a production certificate authority, including policy compliance, database interfaces, challenge verifications, and logging. You can adapt and use Boulder yourself if you need to run a real certificate authority, including an internal, non-publicly-trusted ACME certificate authority within an organization.
The small implementation is called Pebble. It’s meant entirely for testing, not for use as a real certificate authority, and we and ACME client developers use it for various automated and manual testing purposes. For example, Certbot has used Pebble in its development process for years in order to perform a series of basic but realistic checks of the ability to request and obtain certificates from an ACME server.
Pebble is Easy to Use for ACME Client Testing
For any developer or team creating an ACME client application, Pebble solves a range of problems along the lines of “how do I check whether I’ve implemented ACME correctly, so that I could actually get certificates from a CA, without necessarily using a real domain name, and without running into CA rate limits during my routine testing?” Pebble is quick and easy to set up if you need to test an ACME client’s functionality.
It runs in RAM without dependencies or persistence; you won’t need to set up a database or a configuration for it. You can get Pebble running with a single golang command in just a few seconds, and immediately start making local ACME requests. That’s suitable for inclusion in a client’s integration test suite, making much more realistic integration tests possible without needing to worry about real domains, CA rate limits, or network outages.
We see Pebble getting used in the official test suites for ACME clients including getssl, Lego, Certbot, simp_le, and others. In many cases, every change committed to the ACME client’s code base is automatically tested against Pebble.
Pebble is Intentionally Different From Boulder
Pebble is also deliberately different from Boulder in some places in order to provide clients with an opportunity to interoperate with slightly different ACME implementations. The Pebble code explains that
[I]n places where the ACME specification allows customization/CA choice Pebble aims to make choices different from Boulder. For instance, Pebble changes the path structures for its resources and directory endpoints to differ from Boulder. The goal is to emphasize client specification compatibility and to avoid "over-fitting" on Boulder and the Let's Encrypt production service.
For instance, the Let’s Encrypt service currently offers its newAccount resource at the path /acme/new-acct, whereas Pebble uses a different name, /sign-me-up, so clients will be reminded to check the directory rather than assuming a specific path. Other substantive differences include:
- Pebble rejects 5% of all requests as having an invalid nonce, even if the nonce was otherwise valid, so clients can test how they respond to this error condition
- Pebble only reuses valid authorizations 50% of the time, so clients can check their ability to perform validations when they might not have expected to
- Pebble truncates timestamps to a different degree of precision than Boulder
- Unlike Boulder, Pebble respects the notBefore and notAfter fields of new-order requests
A client’s ability to work with both implementations is a good test of its conformance to the ACME specification, rather than of assumptions about the current behavior of the Let’s Encrypt service in particular. This helps ensure that clients will work properly with other ACME CAs, and also with future versions of Let’s Encrypt’s own API.
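To illustrate the directory-first habit Pebble encourages, here is a minimal Go sketch that discovers the newAccount URL from a directory rather than hard-coding a path; it points at the production Let’s Encrypt directory URL, and the same code works against Pebble’s directory.

package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
)

func main() {
    // Fetch the directory and read endpoint URLs from it, rather than
    // assuming paths like /acme/new-acct (or Pebble's /sign-me-up).
    resp, err := http.Get("https://acme-v02.api.letsencrypt.org/directory")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    var dir map[string]any
    if err := json.NewDecoder(resp.Body).Decode(&dir); err != nil {
        log.Fatal(err)
    }
    fmt.Println("newAccount is at:", dir["newAccount"])
}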
Pebble is Useful to Both Let’s Encrypt and Client Developers as ACME Evolves
We often test out new ACME features by implementing them, at least in a simplified form, in Pebble before Boulder. This lets us and client developers experiment with support for those features even before they get rolled out in our staging service. We can do this quickly because a Pebble feature implementation doesn’t have to work with a full-scale CA backend.
We continue to encourage ACME client developers to use a copy of Pebble to test their clients’ functionality and ACME interoperability. It’s convenient and it’s likely to increase the correctness and robustness of their client applications.
Try Out Pebble Yourself
Want to try Pebble with your ACME client right now? On a Unix-like system, you can run
git clone https://github.com/letsencrypt/pebble/
cd pebble
go run ./cmd/pebble
Wait a few seconds; now you have a working ACME CA directory available at https://localhost:14000/dir! Your local ACME server can immediately receive requests and issue certificates, though not publicly-trusted ones, of course. (If you prefer, we also offer other options for installing Pebble, like a Docker image.)
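As one example of pointing real client code at that local directory, this sketch registers an account using the golang.org/x/crypto/acme package; it disables verification of Pebble’s self-signed certificate, which is only acceptable against a local test CA.

package main

import (
    "context"
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"
    "crypto/tls"
    "fmt"
    "log"
    "net/http"

    "golang.org/x/crypto/acme"
)

func main() {
    key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    if err != nil {
        log.Fatal(err)
    }
    client := &acme.Client{
        Key:          key,
        DirectoryURL: "https://localhost:14000/dir",
        // Pebble's certificate is self-signed; never skip verification
        // against a real CA.
        HTTPClient: &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }},
    }
    acct, err := client.Register(context.Background(), &acme.Account{}, acme.AcceptTOS)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("registered account:", acct.URI)
}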
We welcome code contributions to Pebble. For example, ACME client developers may want to add simple versions of an ACME feature that’s not currently tested in Pebble in order to make their test suites more comprehensive. Also, if you notice a possibly unintended divergence between Pebble and Boulder or Pebble and the ACME specification, we’d love for you to let us know.
Wed, 30 Apr 2025 00:00:00 +0000
Ten Years of Let's Encrypt: Announcing support from Jeff Atwood
Certificate Transparency
Certificate Transparency (CT) was introduced in 2013 in response to concerns about how Internet users could detect misbehavior and compromise of certificate authorities. Prior to CT, it was possible for a CA to issue an inaccurate or malicious certificate that could be used to attack a relatively small number of users, and that might never come to wider attention. A team led by Google responded to this by creating a transparency log mechanism, where certificate authorities (like Let’s Encrypt) must disclose all of the certificates that we issue by submitting them to public log services. Web browsers now generally reject certificates unless the certificates include cryptographic proof (“Signed Certificate Timestamps”, or SCTs) demonstrating that they were submitted to and accepted by such logs.
The CT logs themselves use a cryptographic append-only ledger to prove that they haven’t deleted or modified their records. There are currently over a dozen CT log services, most of them also run by certificate authorities, including Let’s Encrypt’s own Oak log.
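The append-only property rests on a Merkle tree. As a simplified sketch, covering only power-of-two leaf counts, RFC 6962 hashes leaves and interior nodes with distinct one-byte prefixes so the two can never be confused, and the resulting root is what a log signs as its tree head:

package main

import (
    "crypto/sha256"
    "fmt"
)

// RFC 6962 domain separation: leaf hashes are prefixed with 0x00 and
// interior node hashes with 0x01, so a leaf can never masquerade as a node.
func leafHash(entry []byte) [32]byte {
    return sha256.Sum256(append([]byte{0x00}, entry...))
}

func nodeHash(left, right [32]byte) [32]byte {
    buf := append([]byte{0x01}, left[:]...)
    buf = append(buf, right[:]...)
    return sha256.Sum256(buf)
}

// merkleRoot folds the leaves up to a single root; real logs also handle
// non-power-of-two sizes as described in RFC 6962, section 2.1.
func merkleRoot(leaves [][]byte) [32]byte {
    level := make([][32]byte, len(leaves))
    for i, l := range leaves {
        level[i] = leafHash(l)
    }
    for len(level) > 1 {
        next := make([][32]byte, len(level)/2)
        for i := range next {
            next[i] = nodeHash(level[2*i], level[2*i+1])
        }
        level = next
    }
    return level[0]
}

func main() {
    root := merkleRoot([][]byte{
        []byte("cert A"), []byte("cert B"), []byte("cert C"), []byte("cert D"),
    })
    fmt.Printf("tree head: %x\n", root)
}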
The Static CT API
The original 2013 CT log design has been used with relatively few technical changes since it was first introduced, but several other transparency logging systems have since been created in other areas, such as sumdb for Go, which helps ensure that the contents of Go module updates are publicly recorded. While they were originally inspired by CT, more recently invented transparency logs have improved on its design.
The current major evolution of CT was led by Filippo Valsorda, a cryptographer with an interest in transparency log mechanisms, with help from others in the CT ecosystem. Portions of the new design are directly based on sumdb. In addition to designing the new architecture, Valsorda also wrote the implementation that we’ve been using, called Sunlight, with support from Let’s Encrypt. We’re excited to see that there are now at least three other compatible implementations: Google’s trillian-tessera, Cloudflare’s Azul, and an independent project called Itko.
The biggest change for the Static CT API is that logs are now represented, and downloaded by verifiers, as simple collections of flat files (called “tiles,” so some implementers have also been referring to these as “tiled logs” or “tlogs”). Anyone who wants to download log data can do so just by downloading these files. This is great for log operators because these simple file downloads can be distributed in various ways, including caching by a CDN, which was less practical and efficient for the classic CT API.
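For example, reading a Static CT API log can begin with an ordinary HTTP GET for its signed checkpoint, the small file describing the latest tree head. In this sketch the log URL is a placeholder, and the /checkpoint path reflects our understanding of the Static CT API specification:

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
)

func main() {
    // A tiled log is read as plain files; the checkpoint is the entry point.
    // "static-ct-log.example" is a placeholder, not a real log.
    resp, err := http.Get("https://static-ct-log.example/checkpoint")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    body, err := io.ReadAll(resp.Body)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Print(string(body)) // origin line, tree size, root hash, signature lines
}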
The new design is also simpler and more efficient from the log operator’s perspective, making it cheaper to run logs. As we said last year, this may enable us and other operators to increase reliability and availability by running several separate logs, likely with lower overall resource requirements than a single traditional log.
Our Sunlight experiment

For the past year, we’ve run three Sunlight logs, called Twig, Willow, and Sycamore. We’ve been logging all of our own issued certificates, which represent a majority of the total volume of all publicly-trusted certificates, into our Sunlight logs. Sunlight logged these certificates quickly and correctly on relatively modest server hardware. Notably, each log’s write side was handled comfortably by just a single server. We also achieved high availability for these log services throughout the course of this experiment. (Because our Sunlight logs are not yet trusted by web browsers, we didn’t include the SCT proofs that they returned to us in the actual certificates we gave out to our subscribers; those proofs wouldn’t have been of use to our subscribers yet and would just have taken up space.)
A potential failure mode of traditional CT logs is that they could be unacceptably slow in incorporating newly-submitted certificates (known as missing the maximum merge delay), which can result in a log becoming distrusted. This isn’t a possibility for our new Sunlight-based logs: they always completely incorporate newly-submitted certificates before returning an SCT to the submitter, so the effective merge delay is zero! Of course, any log can suffer outages for a variety of reasons, but this feature of Sunlight makes it less likely that any outages will be fatal to a log’s continued operation.
We’ve demonstrated that Sunlight and the Static CT API work in practice, and this demonstration has helped to confirm the browser developers’ hope that Static CT API logs can become an officially-supported part of CT. As a result, the major browsers that enforce CT have now permitted Static CT API logs to apply for inclusion in browsers as publicly-trusted logs, and we’re preparing to apply for this status for our Willow and Sycamore logs with the Chrome and Safari CT log programs.
Let’s Encrypt will run at least these two logs, and possibly others over time, for the foreseeable future. Once they’re trusted by browsers, we’ll encourage other CAs to submit to them as well, and we’ll begin including SCTs from these logs in our own certificates (alongside SCTs from traditional CT logs).
How to participate
The new Static CT API and the rollout of tile-based logs will bring various changes and opportunities for community members.
New Certificate Transparency log operators
Companies and non-profit organizations could help support the web PKI by running a CT log and applying for it to be publicly trusted. Implementations like Sunlight will have substantially lower resource requirements than first-generation CT logs, particularly when cached behind a CDN. The biggest resource demands for a log operator will be storage and upstream bandwidth. A publicly-trusted log is also expected to maintain relatively high availability, because CAs need logs to be available in order to continue issuing certificates.
We don’t have statistics to share about the exact resource requirements for such a log yet, but after we have practical experience running a fully publicly-trusted Sunlight log, we should be able to make this more concrete. As noted above, the compute side of the log can be handled by a single server. Sunlight author Filippo Valsorda has recently started running a Sunlight log—also on just a single server—and offered more detailed cost breakdowns for that log’s setup, with an estimated total cost around $10,000 per year. The costs for our production Static CT API logs may be higher than those for Filippo’s log, but still far less than the costs for our traditional RFC 6962 logs.
As with trust decisions about CAs, browser developers are the authorities about which CT logs become publicly trusted. Although any person or organization can run a log, browser developers will generally prefer to trust logs whose continued availability they’re confident of—typically those run by stable organizations with experience running some form of public Internet services. Unlike becoming a certificate authority, running a log does not require a formal audit, as the validation of the log’s availability and correctness can be performed purely by observation.
Certificate authorities
Once the Willow and Sycamore logs are trusted by browsers, our fellow certificate authorities can choose to start logging certificates to them as part of their issuance processes. (Initially, you should still include at least one SCT from a traditional CT log in each certificate.) The details, including the log API endpoints and keys, are available at our CT log page. You can start submitting to these logs right away if you prefer; just bear in mind that the SCTs they return aren’t useful to subscribers yet, and won’t be useful until browsers are updated to trust the new logs.
CT data users
You can monitor CT in order to watch for certificate issuances for your own domain names, or as part of monitoring or security products or services, or for Internet security research purposes. Many of our colleagues have been doing this for some time as a part of various tools they maintain. The Static CT API should make this easier, because you’ll be able to download and share log tiles as sets of ordinary files.
If you already run such monitoring tools, please note that you’ll need to update your data pipeline in order to access Static CT API logs; since the read API is not backwards-compatible, CT API clients will need to be modified to support the new API. Without updated tools, your view of the CT system will become partial!
Also note that getting a complete view of all of CT will still require downloading data from traditional logs, which will probably continue to be true for several years.
Software developers
As logs based on the new API enter production use, it will be important to have tools to interact with and search these logs. We can all benefit from more software that understands how to do this. Since file downloads are such a familiar piece of software functionality, it will probably be easier for developers to build against the new API than against the original one.
We’ve also continued to see greater integration of transparency logging tools into other kinds of services, such as software updates. There’s a growing transparency log ecosystem that’s always in need of more tools and integrations. As we mentioned above, transparency logs are increasingly learning from one another, and there are also mechanisms for more direct integration between different kinds of transparency logs (known as “witnessing”). Software developers can help improve different aspects of Internet security by contributing to this active and growing area.
Conclusion
The Certificate Transparency community and larger transparency logging community have experienced a virtuous cycle of innovation, sharing ideas and implementation code between different systems and demonstrating the feasibility of new mechanisms and functionality. With the advent of tile-based logging in CT, the state of the art has moved forward in a way that helps log operators run our logs much more efficiently without compromising security.
We’re proud to have participated in this experiment and the engineering conversation around the evolution of logging architectures. Now that we’ve shown how well the new API really works at scale, we look forward to having publicly-trusted Sunlight logs later this year!
As we touched on in our first blog post highlighting ten years of Let’s Encrypt, the sustained generosity we have benefited from throughout our first decade is just as remarkable to us as the technical innovations behind proliferating TLS at scale.
With that sense of gratitude top of mind, we are proud to announce a contribution of $1,000,000 from Jeff Atwood. Jeff has been a longtime supporter of our work, beginning many years ago with Discourse providing our community forum pro bono, something Discourse still provides to this day. As best we can tell, our forum has helped hundreds of thousands of people get up and running with Let’s Encrypt—an impact that has helped billions of people use an Internet that’s more secure and privacy-respecting thanks to widely adopted TLS.
When we first spoke with Jeff about the road ahead for Let’s Encrypt back in 2023, we knew a few things wouldn’t change no matter how the Internet changes over the next decade:
- Free TLS is the only way to ensure it is and remains accessible to as many people as possible.
- Let’s Encrypt is here to provide a reliable, trusted, and sound service no matter the scale.
- Generosity from our global community of supporters will be how we sustain our work.
We’re proud that Jeff not only agrees, but has chosen to support us in such a meaningful way. In discussing how Jeff might want us to best celebrate his generosity and recognize his commitment to our work, he shared:
Let's Encrypt is a golden example of how creating inalienable good is possible with the right approach and the right values. And while I'm excited about the work Let's Encrypt has done, I am eager to see their work continue to keep up with the growing Web; to sustain encryption for everybody at Internet scale. To do so is going to take more than me—it's going to take a community of people committed to this work. I am confident Let's Encrypt is a project that deserves all of our support, in ways both large and small.
Indeed, this contribution is significant because of its scale, but more importantly because of its signal: a signal that supporting the not-so-glamorous but oh-so-nerdy work of encryption at scale matters to the lives of billions of people every day; a signal that supporting free privacy and security afforded by TLS for all of the Internet’s five billion users just makes sense.
Ten years ago we set out to build a better Internet through easy to use TLS. If you or your organization have supported us throughout the years, thank you for joining Jeff in believing in the work of Let’s Encrypt. For a deeper dive into the impact of Let’s Encrypt and ISRG’s other projects, take a look at our most recent annual report.
Let’s Encrypt is a project of the nonprofit Internet Security Research Group, a 501(c)(3) nonprofit committed to protecting Internet users by lowering monetary, technological, and informational barriers to a more secure and privacy-respecting Internet. For more, visit abetterinternet.org. Press inquiries can be sent to press@abetterinternet.org
Tue, 18 Mar 2025 00:00:00 +0000
We Issued Our First Six Day Cert
The Certificate Transparency ecosystem has been improving transparency for the web PKI since 2013. It helps make clear exactly what certificates each certificate authority has issued and makes sure errors or compromises of certificate authorities are detectable.
Let’s Encrypt participates in CT both as a certificate issuer and as a log operator. For the past year, we’ve also been running an experiment to help validate a next-generation design for Certificate Transparency logs. That experiment is now nearing a successful conclusion. We’ve demonstrated that the new architecture (called the “Static CT API”) works well, providing greater efficiency and making it easier to run huge and reliable CT log services with comparatively modest resources. The Static CT API also makes it easier to download and share data from CT logs.
The Sunlight log implementation, alongside other Static CT API log implementations, is now on a path to production use. Browsers are now officially accepting Static CT API logs into their log programs as a means to help guarantee that the contents of CA-issued certificates are all publicly disclosed and publicly accessible (see Safari’s and Chrome’s recent announcements), although the browsers also require the continued use of a traditional RFC 6962 log alongside the new type.
All of this is good news for everyone who runs, submits certificates to, or monitors a CT log: as the new architecture gets adopted, we can expect to see more organizations running more logs, at lower cost, and with greater overall capacity to keep up with the large volume of publicly-trusted certificates.
Certificate Transparency
Certificate Transparency (CT) was introduced in 2013 in response to concerns about how Internet users could detect misbehavior and compromise of certificate authorities. Prior to CT, it was possible for a CA to issue an inaccurate or malicious certificate that could be used to attack a relatively small number of users, and that might never come to wider attention. A team led by Google responded to this by creating a transparency log mechanism, where certificate authorities (like Let’s Encrypt) must disclose all of the certificates that we issue by submitting them to public log services. Web browsers now generally reject certificates unless the certificates include cryptographic proof (“Signed Certificate Timestamps”, or SCTs) demonstrating that they were submitted to and accepted by such logs.
The CT logs themselves use a cryptographic append-only ledger to prove that they haven’t deleted or modified their records. There are currently over a dozen CT log services, most of them also run by certificate authorities, including Let’s Encrypt’s own Oak log.
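To make the append-only ledger concrete: a CT log commits to its entries with an RFC 6962 Merkle tree, hashing leaves and interior nodes with distinct prefixes so data can never be reinterpreted as tree structure. Here is a minimal sketch of that hashing scheme in Go (an illustration only, not any log’s production code):
package main

import (
    "crypto/sha256"
    "fmt"
)

// leafHash is the RFC 6962 leaf hash: SHA-256(0x00 || entry).
func leafHash(entry []byte) [32]byte {
    return sha256.Sum256(append([]byte{0x00}, entry...))
}

// nodeHash is the RFC 6962 interior-node hash: SHA-256(0x01 || left || right).
func nodeHash(left, right [32]byte) [32]byte {
    data := append([]byte{0x01}, left[:]...)
    data = append(data, right[:]...)
    return sha256.Sum256(data)
}

// merkleRoot computes the tree head over a non-empty list of entries,
// splitting at the largest power of two smaller than the list length,
// as RFC 6962 specifies. (The empty-tree case is omitted for brevity.)
func merkleRoot(entries [][]byte) [32]byte {
    if len(entries) == 1 {
        return leafHash(entries[0])
    }
    k := 1
    for k*2 < len(entries) {
        k *= 2
    }
    return nodeHash(merkleRoot(entries[:k]), merkleRoot(entries[k:]))
}

func main() {
    entries := [][]byte{[]byte("cert-1"), []byte("cert-2"), []byte("cert-3")}
    fmt.Printf("tree head: %x\n", merkleRoot(entries))
}
Appending entries changes the root only in ways that can be proven consistent with every earlier root, which is what lets monitors verify that nothing has been deleted or modified.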
The Static CT API
The original 2013 CT log design has been used with relatively few technical changes since it was first introduced, but several other transparency logging systems have been created in other areas, such as sumdb for Go, which helps ensure that the contents of Go module updates are publicly recorded. While they were originally inspired by CT, more recently invented transparency logs have improved on its design.
The current major evolution of CT was led by Filippo Valsorda, a cryptographer with an interest in transparency log mechanisms, with help from others in the CT ecosystem. Portions of the new design are directly based on sumdb. In addition to designing the new architecture, Valsorda also wrote the implementation that we’ve been using, called Sunlight, with support from Let’s Encrypt. We’re excited to see that there are now at least three other compatible implementations: Google’s trillian-tessera, Cloudflare’s Azul, and an independent project called Itko.
The biggest change for the Static CT API is that logs are now represented, and downloaded by verifiers, as simple collections of flat files (called “tiles,” so some implementers have also been referring to these as “tiled logs” or “tlogs”). Anyone who wants to download log data can do so just by downloading these files. This is great for log operators because these simple file downloads can be distributed in various ways, including caching by a CDN, which was less practical and efficient for the classic CT API.
The new design is also simpler and more efficient from the log operator’s perspective, making it cheaper to run logs. As we said last year, this may enable us and other operators to increase reliability and availability by running several separate logs, likely with lower overall resource requirements than a single traditional log.
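To make the file-based read side concrete, here is a minimal Go sketch that fetches a Static CT API log’s checkpoint, the small signed file recording the current tree size and root hash. The log URL below is a hypothetical placeholder; the /checkpoint path follows the Static CT API specification:
package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
)

func main() {
    // Hypothetical monitoring prefix; real logs publish their URLs on
    // their operators' pages (for ours, see our CT log page).
    const prefix = "https://static-ct-log.example"

    // Per the Static CT API, the checkpoint is served as a plain file.
    resp, err := http.Get(prefix + "/checkpoint")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        log.Fatal(err)
    }

    // The checkpoint is a short signed note: origin line, tree size,
    // root hash, and signatures. Tiles of hashes and certificate data
    // are fetched the same way, as plain files under the same prefix,
    // which is exactly why ordinary HTTP caching and CDNs work so well.
    fmt.Printf("%s", body)
}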
Our Sunlight experiment

For the past year, we’ve run three Sunlight logs, called Twig, Willow, and Sycamore. We’ve been logging all of our own issued certificates, which represent a majority of the total volume of all publicly-trusted certificates, into our Sunlight logs. Sunlight logged these certificates quickly and correctly on relatively modest server hardware. Notably, each log’s write side was handled comfortably by just a single server. We also achieved high availability for these log services throughout the course of this experiment. (Because our Sunlight logs are not yet trusted by web browsers, we didn’t include the SCT proofs that they returned to us in the actual certificates we gave out to our subscribers; those proofs wouldn’t have been of use to our subscribers yet and would just have taken up space.)
A potential failure mode of traditional CT logs is that they could be unacceptably slow in incorporating newly-submitted certificates (known as missing the maximum merge delay), which can result in a log becoming distrusted. This isn’t a possibility for our new Sunlight-based logs: they always completely incorporate newly-submitted certificates before returning an SCT to the submitter, so the effective merge delay is zero! Of course, any log can suffer outages for a variety of reasons, but this feature of Sunlight makes it less likely that any outages will be fatal to a log’s continued operation.
We’ve demonstrated that Sunlight and the Static CT API work in practice, and this demonstration has helped to confirm the browser developers’ hope that Static CT API logs can become an officially-supported part of CT. As a result, the major browsers that enforce CT have now permitted Static CT API logs to apply for inclusion in browsers as publicly-trusted logs, and we’re preparing to apply for this status for our Willow and Sycamore logs with the Chrome and Safari CT log programs.
Let’s Encrypt will run at least these two logs, and possibly others over time, for the foreseeable future. Once they’re trusted by browsers, we’ll encourage other CAs to submit to them as well, and we’ll begin including SCTs from these logs in our own certificates (alongside SCTs from traditional CT logs).
How to participate
The new Static CT API and the rollout of tile-based logs will bring various changes and opportunities for community members.
New Certificate Transparency log operators
Companies and non-profit organizations could help support the web PKI by running a CT log and applying for it to be publicly trusted. Implementations like Sunlight will have substantially lower resource requirements than first-generation CT logs, particularly when cached behind a CDN. The biggest resource demands for a log operator will be storage and upstream bandwidth. A publicly-trusted log is also expected to maintain relatively high availability, because CAs need logs to be available in order to continue issuing certificates.
We don’t have statistics to share about the exact resource requirements for such a log yet, but after we have practical experience running a fully publicly-trusted Sunlight log, we should be able to make this more concrete. As noted above, the compute side of the log can be handled by a single server. Sunlight author Filippo Valsorda has recently started running a Sunlight log—also on just a single server—and offered more detailed cost breakdowns for that log’s setup, with an estimated total cost around $10,000 per year. The costs for our production Static CT API logs may be higher than those for Filippo’s log, but still far less than the costs for our traditional RFC 6962 logs.
As with trust decisions about CAs, browser developers are the authorities about which CT logs become publicly trusted. Although any person or organization can run a log, browser developers will generally prefer to trust logs whose continued availability they’re confident of—typically those run by stable organizations with experience running some form of public Internet services. Unlike becoming a certificate authority, running a log does not require a formal audit, as the validation of the log’s availability and correctness can be performed purely by observation.
Certificate authorities
Once the Willow and Sycamore logs are trusted by browsers, our fellow certificate authorities can choose to start logging certificates to them as part of their issuance processes. (Initially, you should still include at least one SCT from a traditional CT log in each certificate.) The details, including the log API endpoints and keys, are available at our CT log page. You can start submitting to these logs right away if you prefer; just bear in mind that the SCTs they return aren’t useful to subscribers yet, and won’t be useful until browsers are updated to trust the new logs.
CT data users
You can monitor CT in order to watch for certificate issuances for your own domain names, or as part of monitoring or security products or services, or for Internet security research purposes. Many of our colleagues have been doing this for some time as a part of various tools they maintain. The Static CT API should make this easier, because you’ll be able to download and share log tiles as sets of ordinary files.
If you already run such monitoring tools, please note that you’ll need to update your data pipeline in order to access Static CT API logs; since the read API is not backwards-compatible, CT API clients will need to be modified to support the new API. Without updated tools, your view of the CT system will become partial!
Also note that getting a complete view of all of CT will still require downloading data from traditional logs, which will probably continue to be true for several years.
Software developers
As logs based on the new API enter production use, it will be important to have tools to interact with and search these logs. We can all benefit from more software that understands how to do this. Since file downloads are such a familiar piece of software functionality, it will probably be easier for developers to develop against the new API compared to the original one.
We’ve also continued to see greater integration of transparency logging tools into other kinds of services, such as software updates. There’s a growing transparency log ecosystem that’s always in need of more tools and integrations. As we mentioned above, transparency logs are increasingly learning from one another, and there are also mechanisms for more direct integration between different kinds of transparency logs (known as “witnessing”). Software developers can help improve different aspects of Internet security by contributing to this active and growing area.
Conclusion
The Certificate Transparency community and larger transparency logging community have experienced a virtuous cycle of innovation, sharing ideas and implementation code between different systems and demonstrating the feasibility of new mechanisms and functionality. With the advent of tile-based logging in CT, the state of the art has moved forward in a way that helps log operators like us run logs much more efficiently without compromising security.
We’re proud to have participated in this experiment and the engineering conversation around the evolution of logging architectures. Now that we’ve shown how well the new API really works at scale, we look forward to having publicly-trusted Sunlight logs later this year!
Every night, right around midnight (mainly UTC), a horde of zombies wakes up and clamors for … digital certificates!
The zombies in question are abandoned or misconfigured Internet servers and ACME clients that have been set to request certificates from Let’s Encrypt. As our certificates last for at most 90 days, these zombie clients’ software knows that their certificates are out-of-date and need to be replaced. What they don’t realize is that their quest for new certificates is doomed! These devices are cursed to seek certificates again and again, never receiving them.
But they do use up a lot of certificate authority resources in the process.
The Zombie Client Problem
Unlike a human being, software doesn’t give up in frustration, or try to modify its approach, when it repeatedly fails at the same task. Our emphasis on automation means that the vast majority of Let’s Encrypt certificate renewals are performed by automated software. This is great when those renewals succeed, but it also means that forgotten clients and devices can continue requesting renewals unsuccessfully for months, or even years.
How might that happen? Most often, it happens when a device no longer has a domain name pointed to it. The device itself doesn’t know that this has changed, so it treats renewal failures as transient even though they are actually permanent. For instance:
- An organization may have allowed a domain name registration to lapse because it is no longer needed, but its servers are still configured to request certs for it.
- Or, a home user stopped using a particular dynamic-DNS domain with a network-attached storage device, but is still using that device at home. The device doesn’t realize that the user no longer expects to use the name, so it keeps requesting certs for it.
- Or, a web hosting or CDN customer migrated to a different service provider, but never informed the old service provider. The old service provider’s servers keep requesting certs unsuccessfully. If the customer was in a free service tier, there might not be invoices or charges reminding the customer to cancel the service.
- Or any number of other, subtler changes in a subscriber’s infrastructure, such as changing a firewall rule or some webserver configuration.
At the scale of Let’s Encrypt, which now covers hundreds of millions of names, scenarios like these have become common, and their impact has become substantial. In 2024, we noticed that about half of all certificate requests to the Let’s Encrypt ACME API came from about a million accounts that never successfully completed any validations. Many of these accounts had completed validations and issued certificates sometime in the past, but nowadays every single one of their validation attempts fails, and they show no signs that this will change anytime soon.
Unfortunately, trying to validate those futile requests still uses resources. Our CA software has to generate challenges, reach out and attempt to validate them over the Internet, detect and report failures, and record all of the associated information in our databases and audit logs. And over time, we’ve seen more and more recurring failures: accounts that always fail their issuance requests have been growing at around 18% per year.
In January, we mentioned that we had been addressing the zombie client problem through our rate limit system. This post provides more detail on that progress.
Our Rate Limit Philosophy
If you’ve used Let’s Encrypt as a subscriber, you may have run into one of our rate limits at some point, maybe during your initial setup process. We have eight different kinds of rate limits in place now; as our January post describes, they’ve become more algorithmically sophisticated and grown to address a wider range of problems. A key principle for Let’s Encrypt is that our rate limiting is not a punishment. We don’t think of rate limits as a way of retaliating against a client for misbehavior. Rate limits are simply a tool to maximize the efficient use of our limited resources and prevent people and programs from using up those resources for no constructive purpose.
We’ve consistently tried to design our rate limit mechanisms in line with that philosophy. So if a misconfiguration or misunderstanding has caused excessive requests in the past, we’re still happy to welcome the user in question back and start issuing them certificates again—once the problem has been addressed. We want the rate limits to put a brake on wasteful use of our systems, but not to frustrate users who are actively trying to make Let’s Encrypt work for them.
In addition, we’ve always implemented our rate limits to err on the side of permissiveness. For example, if the Redis instances where rate limits are tracked have an outage or lose data, the system is designed to permit more issuance rather than less issuance as a result.
We wanted to create additional limits that would target zombie clients, but in a correspondingly non-punitive way that would avoid any disruption to valid issuance, and welcome subscribers back quickly if they happened to notice and fix a long-time problem with their setups.
Our Zombie-Related Rate Limits and Their Impact
In planning a new zombie-specific response, we decided on a “pausing” approach, which can temporarily limit an account’s ability to proceed with certificate requests. The core idea is that, if a particular account consistently fails to complete validation for a particular hostname, we’ll pause that account-hostname pair. The pause means that any new order requests from that account for that hostname will be rejected immediately, before we get to the resource-intensive validation phase.
This approach is more finely targeted than pausing an entire account. Pausing account-hostname pairs means that your ability to issue certs for a specific name could be paused due to repeated failures, but you can still get all of your other certs like normal. So a large hosting provider doesn’t have to fear that its certificate issuance on behalf of one customer will be affected by renewal failures related to a problem with a different customer’s domain name. The account-specificity of the pause, in turn, means that validation failures from one subscriber or device won’t prevent a different subscriber or device from attempting to validate the same name, as long as the devices in question don’t share a single Let’s Encrypt account.
In September 2024, we began applying our zombie rate limits manually by pausing about 21,000 of the most recurrently-failing account-hostname pairs, those which were consistently repeating the same failed requests many times per day, every day. After implementing that first round of pauses, we immediately saw a significant impact on our failed request rates. As we announced at that time, we also began using a formula to automatically pause other zombie client account-hostname pairs from December 2024 onward. The associated new rate limit is called “Consecutive Authorization Failures per Hostname Per Account” (and is independent of the existing “Authorization Failures per Hostname Per Account” limit, which resets every hour).
This formula relates to the frequency of successive failed issuance requests for the same domain name by the same Let’s Encrypt account. It applies only to failures that happen again and again, with no successful issuances at all in between: a single successful validation immediately resets the rate limit all the way to zero. Like all of our rate limits, this is not a punitive measure but is simply intended to reduce the waste of resources. So, we decided to set the thresholds rather high in the expectation that we would catch only the most disruptive zombie clients, and ultimately only those clients that were extremely unlikely to succeed in the future based on their substantial history of failed requests. We don’t hurry to block requesters as zombies: according to our current formula, client software following the default established by EFF’s Certbot (two renewal attempts per day) would be paused as a zombie only after about ten years of constant failures. More aggressive failed issuance attempts will get a client paused sooner, but clients will generally have to fail hundreds or thousands of attempts in a row before they are paused.
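In case it helps to see the shape of this logic, here is a deliberately simplified in-memory sketch of the pause bookkeeping. This is a toy model, not Boulder’s actual implementation, which tracks state in our database layer and uses a more gradual threshold formula:
package main

import "fmt"

// pairKey identifies an account-hostname pair.
type pairKey struct {
    accountID int64
    hostname  string
}

// limiter is a toy model of the "Consecutive Authorization Failures per
// Hostname Per Account" limit: consecutive failures accumulate, a single
// success resets the count to zero, and pairs past the threshold are
// paused until the subscriber unpauses them.
type limiter struct {
    threshold int
    failures  map[pairKey]int
    paused    map[pairKey]bool
}

func newLimiter(threshold int) *limiter {
    return &limiter{
        threshold: threshold,
        failures:  map[pairKey]int{},
        paused:    map[pairKey]bool{},
    }
}

// allowed reports whether a new order for this pair may proceed at all.
func (l *limiter) allowed(k pairKey) bool { return !l.paused[k] }

// record updates the counter after a validation attempt.
func (l *limiter) record(k pairKey, success bool) {
    if success {
        l.failures[k] = 0 // one success resets the limit entirely
        return
    }
    l.failures[k]++
    if l.failures[k] >= l.threshold {
        l.paused[k] = true
    }
}

// unpause models the self-service link clearing the pause.
func (l *limiter) unpause(k pairKey) {
    delete(l.paused, k)
    l.failures[k] = 0
}

func main() {
    l := newLimiter(3) // tiny threshold for demonstration only
    k := pairKey{accountID: 42, hostname: "old.example.com"}
    for i := 0; i < 5; i++ {
        if !l.allowed(k) {
            fmt.Println("order rejected: account-hostname pair is paused")
            break
        }
        l.record(k, false) // the validation fails yet again
    }
}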
Most subscribers using mainstream client applications with default configurations will never encounter this rate limit, even if they forget to deactivate renewal attempts for domains that are no longer pointed at their servers. As described below, our current limit is already providing noticeable benefits with minimal disruption, and we’re likely to tighten it a bit in the near future, so it will trigger after somewhat fewer consecutive failures.
Self-Service Unpausing
A key feature of our zombie issuance pausing mechanism is self-service unpausing. Whenever an account-hostname pair is paused, any new certificate requests for that hostname submitted by that account are immediately rejected. But this means that the “one successful validation immediately resets the rate limit counter” feature can no longer come into effect: once a pair is paused, the account can’t even attempt validation for that hostname anymore.
So every rejection comes with an error message explaining what has happened and a custom link that can be used to immediately unpause that account-hostname pair and remove any other pauses on the same account at the same time. The point of this is that subscribers who notice at some point that issuance is failing and want to intervene to get it working again have a straightforward option to let Let’s Encrypt know that they’re aware of the recurring failures and are still planning to use a particular account. As soon as subscribers notify us via the self-service link, they’ll be able to issue certificates again.
Currently, the unpause link is provided via an ACME error message in response to any request that was blocked due to a paused account-hostname pair.
As it’s turned out, the unpause option described above has only been used by about 3% of affected accounts! This goes to show that most of the zombies we’ve paused were, in fact, well and truly forgotten about.
However, the unpause feature is there for whenever it’s needed, and there may be cases when it will become more important. A very large integration could trigger the zombie-related rate limits if a newly-introduced software bug causes what looks like a very high volume of zombie requests in a very short time. In that case, once that bug has been noticed and fixed, an integrator may need to unpause its issuance on behalf of lots of customers at once. Our unpause feature permits unpausing 50,000 domain names on a single account at a time, so even the largest integrators can get themselves unpaused expeditiously in this situation.
Conclusion
We’ve been very happy with the results of our zombie mitigation measures, and, as far as we can tell, there’s been almost no impact for subscribers! Our statistics indicate that we’ve managed to reduce the load on our infrastructure while causing no detectable harm or inconvenience to subscribers’ valid issuance requests.
Since implementing the manual pauses in September and the automated pauses in December, we’ve seen:
- Over 100,000 account-hostname pairs have been paused for excessive failures.
- We have received zero (!) associated complaints or support requests.
- About 3,200 people have manually unpaused issuance.
- Failed certificate orders have fallen by about 30% so far, and should continue to fall over time as we fine-tune the rate limit formula and catch more zombie clients.
The new rate limit and the self-service unpause system are also ready to deal with circumstances that might produce more zombie clients in the future. For instance, we’ve announced that we’re going to be discontinuing renewal reminder emails soon. If some subscribers overlook failed renewals in the future, we might see more paused clients that result from unintentional renewal failures. We think taking advantage of the existing self-service unpause feature will be straightforward in that case. But it’s much better to notice problems and get them fixed up front, so please remember to set up your own monitoring to avoid unnoticed renewal failures in the future.
If you’re a subscriber who’s had occasion to use the self-service unpause feature, we’d love your feedback on the Community Forum about your experience using the feature and the circumstances that surrounded your account’s getting paused.
Also, if you’re a Let’s Encrypt client developer, please remember to make renewal requests at a random time (not precisely at midnight) so that the load on our infrastructure is smoothed out. You can also reduce the impact of zombie renewals by repeating failed requests somewhat less frequently over time (a “back-off” strategy), especially if the failure reason makes it look like a domain name may no longer be in use at all.
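Here is one hedged sketch of what that advice can look like in client code (not any particular client’s implementation): jitter each renewal check away from fixed times like midnight, and stretch the retry interval while renewals keep failing.
package main

import (
    "fmt"
    "math/rand"
    "time"
)

// nextAttempt returns how long to wait before the next renewal attempt,
// given how many attempts in a row have already failed. base is the
// normal interval between checks (for example, 24 hours).
func nextAttempt(base time.Duration, consecutiveFailures int) time.Duration {
    wait := base
    // Back off: double the interval per consecutive failure, capped at a
    // week so a problem that does get fixed is still picked up eventually.
    for i := 0; i < consecutiveFailures && wait < 7*24*time.Hour; i++ {
        wait *= 2
    }
    // Up to an hour of random jitter keeps fleets of clients from all
    // hitting the CA at the same instant (for example, precisely at midnight).
    jitter := time.Duration(rand.Int63n(int64(time.Hour)))
    return wait + jitter
}

func main() {
    for fails := 0; fails <= 5; fails++ {
        fmt.Printf("after %d consecutive failures, wait ~%v\n",
            fails, nextAttempt(24*time.Hour, fails).Round(time.Minute))
    }
}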
Encryption for Everybody
At Let’s Encrypt we know that building a secure Internet isn’t just a technical challenge—it’s a long-term commitment. Over the past decade we’ve made enormous strides: from issuing billions of TLS certificates to continually innovating to keep the web safer and more accessible. But none of this would be possible without recurring donations from individuals and organizations around the world.
Recurring donations are more than just financial support; they allow us to plan, innovate, and keep improving with confidence, knowing that month after month, year after year, our supporters are there. This consistent backing empowers us to maintain a secure, privacy-respecting Internet for all.
Our tenth anniversary tagline, Encryption for Everybody, highlights this vision. It’s both a technical goal and a fundamental belief that secure communication should be available to everyone, everywhere.
When we asked our recurring donors why they give, their responses affirmed how essential this commitment is. One longtime supporter shared:
Supporting Let's Encrypt aligns with my belief in a privacy-conscious world, where encrypted communication is the default.
For some, it’s about paying it forward, helping future users benefit as they once did:
For my 18th birthday, I got my last name as a domain. As a young tech enthusiast with little money, Let's Encrypt made it possible for me to get a TLS certificate and learn about technology. Back then, I was a student using it for free. Now that I have a stable income, donating is my way of giving back and helping others have the same opportunities I did.
The next decade of Let’s Encrypt will likely be about maintaining that commitment to encryption for everybody. It’s about ensuring that our work remains reliable, accessible, and—most importantly—supported by people who believe in what we do. To everyone who’s been part of this journey, thank you. We couldn’t do it without you.
During Let’s Encrypt’s 10th Anniversary Year, we’re celebrating our community and reflecting on our journey. We’d be thrilled to hear from you. Connect with us on LinkedIn, our community forum, or email us at outreach@letsencrypt.org. Let’s keep building a secure Internet together!
Let’s Encrypt is a project of the nonprofit Internet Security Research Group, a 501(c)(3) nonprofit. To support our work, visit letsencrypt.org/donate.
Let’s Encrypt will no longer include the “TLS Client Authentication” Extended Key Usage (EKU) in our certificates beginning in 2026. Most users who use Let’s Encrypt to secure websites won’t be affected and won’t need to take any action. However, if you use Let’s Encrypt certificates as client certificates to authenticate to a server, this change may impact you.
To minimize disruption, Let’s Encrypt will roll this change out in multiple stages, using ACME Profiles:
- Today: Let’s Encrypt already excludes the Client Authentication EKU on our “tlsserver” ACME profile. You can verify compatibility by issuing certificates with this profile now.
- October 1, 2025: Let’s Encrypt will launch a new “tlsclient” ACME profile which will retain the TLS Client Authentication EKU. Users who need additional time to migrate can opt in to this profile.
- February 11, 2026: the default “classic” ACME profile will no longer contain the Client Authentication EKU.
- May 13, 2026: the “tlsclient” ACME profile will no longer be available, and no further certificates with the Client Authentication EKU will be issued.
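For illustration, here is roughly what profile selection looks like on the wire. Under the ACME profiles extension draft that Let’s Encrypt implements, a client names the desired profile in a profile field of its new-order payload; the sketch below just builds that JSON (it is not a complete ACME client, and the exact field name comes from the draft, so check the current documentation):
package main

import (
    "encoding/json"
    "fmt"
    "log"
)

// orderRequest models the JSON payload of an ACME new-order request with
// the draft profiles extension; only the fields needed here are included.
type orderRequest struct {
    Identifiers []identifier `json:"identifiers"`
    Profile     string       `json:"profile,omitempty"`
}

type identifier struct {
    Type  string `json:"type"`
    Value string `json:"value"`
}

func main() {
    // Requesting the "tlsserver" profile, which already omits the
    // TLS Client Authentication EKU.
    order := orderRequest{
        Identifiers: []identifier{{Type: "dns", Value: "www.example.com"}},
        Profile:     "tlsserver",
    }
    payload, err := json.MarshalIndent(order, "", "  ")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(payload))
    // The payload is then signed (JWS) and POSTed to the CA's newOrder
    // URL as usual; the rest of the order flow is unchanged.
}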
Once this change is complete, Let’s Encrypt will switch to issuing from new intermediate certificate authorities whose certificates also omit the TLS Client Authentication EKU.
For some background: all certificates include a list of intended uses, known as Extended Key Usages (EKUs). Let’s Encrypt certificates have included two EKUs: TLS Server Authentication and TLS Client Authentication.
- TLS Server Authentication is used to authenticate connections to TLS Servers, like websites.
- TLS Client Authentication is used by clients to authenticate themselves to a server. This feature is not typically used on the web, and is not required on the certificates used on a website.
After this change is complete, only TLS Server Authentication will be available from Let’s Encrypt.
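If you are unsure whether something in your infrastructure relies on the Client Authentication EKU, you can inspect a certificate directly. Here is a small Go sketch that parses a PEM certificate and prints which of the two EKUs it carries:
package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "log"
    "os"
)

func main() {
    // Usage: go run main.go cert.pem
    data, err := os.ReadFile(os.Args[1])
    if err != nil {
        log.Fatal(err)
    }
    block, _ := pem.Decode(data)
    if block == nil || block.Type != "CERTIFICATE" {
        log.Fatal("no CERTIFICATE block found")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        log.Fatal(err)
    }
    for _, eku := range cert.ExtKeyUsage {
        switch eku {
        case x509.ExtKeyUsageServerAuth:
            fmt.Println("TLS Server Authentication")
        case x509.ExtKeyUsageClientAuth:
            fmt.Println("TLS Client Authentication")
        }
    }
}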
This change is prompted by changes to Google Chrome’s root program requirements, which impose a June 2026 deadline to split TLS Client and Server Authentication into separate PKIs. Many uses of client authentication are better served by a private certificate authority, and so Let’s Encrypt is discontinuing support for TLS Client Authentication ahead of this deadline.
How Pebble Supports ACME Client Developers
Together with the IETF community, we created the ACME standard to support completely automated certificate issuance. This open standard is now supported by dozens of clients. On the server side, did you know that we have not one but two open-source ACME server implementations?
The big implementation, which we use ourselves in production, is called Boulder. Boulder handles all of the facets and details needed for a production certificate authority, including policy compliance, database interfaces, challenge verifications, and logging. You can adapt and use Boulder yourself if you need to run a real certificate authority, including an internal, non-publicly-trusted ACME certificate authority within an organization.
The small implementation is called Pebble. It’s meant entirely for testing, not for use as a real certificate authority, and we and ACME client developers use it for various automated and manual testing purposes. For example, Certbot has used Pebble in its development process for years in order to perform a series of basic but realistic checks of the ability to request and obtain certificates from an ACME server.
Pebble is Easy to Use for ACME Client Testing
For any developer or team creating an ACME client application, Pebble solves a range of problems along the lines of “how do I check whether I’ve implemented ACME correctly, so that I could actually get certificates from a CA, without necessarily using a real domain name, and without running into CA rate limits during my routine testing?” Pebble is quick and easy to set up if you need to test an ACME client’s functionality.
It runs in RAM without dependencies or persistence; you won’t need to set up a database or a configuration for it. You can get Pebble running with a single Go command in just a few seconds, and immediately start making local ACME requests. That’s suitable for inclusion in a client’s integration test suite, making much more realistic integration tests possible without needing to worry about real domains, CA rate limits, or network outages.
We see Pebble getting used in the official test suites for ACME clients including getssl, Lego, Certbot, simp_le, and others. In many cases, every change committed to the ACME client’s code base is automatically tested against Pebble.
Pebble is Intentionally Different From Boulder
Pebble is also deliberately different from Boulder in some places in order to provide clients with an opportunity to interoperate with slightly different ACME implementations. The Pebble code explains that
[I]n places where the ACME specification allows customization/CA choice Pebble aims to make choices different from Boulder. For instance, Pebble changes the path structures for its resources and directory endpoints to differ from Boulder. The goal is to emphasize client specification compatibility and to avoid "over-fitting" on Boulder and the Let's Encrypt production service.
For instance, the Let’s Encrypt service currently offers its newAccount resource at the path /acme/new-acct, whereas Pebble uses a different path, /sign-me-up, so clients will be reminded to check the directory rather than assuming a specific path. Other substantive differences include:
- Pebble rejects 5% of all requests as having an invalid nonce, even if the nonce was otherwise valid, so clients can test how they respond to this error condition
- Pebble only reuses valid authorizations 50% of the time, so clients can check their ability to perform validations when they might not have expected to
- Pebble truncates timestamps to a different degree of precision than Boulder
- Unlike Boulder, Pebble respects the notBefore and notAfter fields of new-order requests
The ability of ACME clients to work with both versions is a good test of their conformance to the ACME specification, rather than making assumptions about the current behavior of the Let’s Encrypt service in particular. This helps ensure that clients will work properly with other ACME CAs, and also with future versions of Let’s Encrypt’s own API.
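To see that directory-driven discovery in action against a local Pebble, here is a short Go sketch that fetches the directory and prints the newAccount URL it advertises. Pebble serves HTTPS with its own test certificate, so the sketch skips TLS verification; that is acceptable only for local testing.
package main

import (
    "crypto/tls"
    "encoding/json"
    "fmt"
    "log"
    "net/http"
)

func main() {
    // Local testing only: Pebble's listener uses a self-signed certificate.
    client := &http.Client{Transport: &http.Transport{
        TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    }}

    resp, err := client.Get("https://localhost:14000/dir")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    // An ACME directory is a JSON object mapping resource names to URLs.
    var dir map[string]interface{}
    if err := json.NewDecoder(resp.Body).Decode(&dir); err != nil {
        log.Fatal(err)
    }

    // A conforming client always looks the URL up rather than assuming a
    // path like /acme/new-acct; against Pebble this prints /sign-me-up.
    fmt.Println("newAccount:", dir["newAccount"])
}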
Pebble is Useful to Both Let’s Encrypt and Client Developers as ACME Evolves
We often test out new ACME features by implementing them, at least in a simplified form, in Pebble before Boulder. This lets us and client developers experiment with support for those features even before they get rolled out in our staging service. We can do this quickly because a Pebble feature implementation doesn’t have to work with a full-scale CA backend.
We continue to encourage ACME client developers to use a copy of Pebble to test their clients’ functionality and ACME interoperability. It’s convenient and it’s likely to increase the correctness and robustness of their client applications.
Try Out Pebble Yourself
Want to try Pebble with your ACME client right now? On a Unix-like system, you can run
git clone https://github.com/letsencrypt/pebble/
cd pebble
go run ./cmd/pebble
Wait a few seconds; now you have a working ACME CA directory available at https://localhost:14000/dir! Your local ACME server can immediately receive requests and issue certificates, though not publicly-trusted ones, of course. (If you prefer, we also offer other options for installing Pebble, like a Docker image.)
We welcome code contributions to Pebble. For example, ACME client developers may want to add simple versions of an ACME feature that’s not currently tested in Pebble in order to make their test suites more comprehensive. Also, if you notice a possibly unintended divergence between Pebble and Boulder or Pebble and the ACME specification, we’d love for you to let us know.
We Issued Our First Six Day Cert
Earlier this year we announced our intention to introduce short-lived certificates with lifetimes of six days as an option for our subscribers. Yesterday we issued our first short-lived certificate. You can see the certificate at the bottom of our post, or here thanks to Certificate Transparency logs. We issued it to ourselves and then immediately revoked it so we can observe the certificate’s whole lifecycle. This is the first step towards making short-lived certificates available to all subscribers.
The next step is for us to make short-lived certificates available to a small set of our subscribers so we can make sure our systems scale as expected prior to general availability. We expect this next phase to begin during Q2 of this year.
We expect short-lived certificates to be generally available by the end of this year.
How To Get Six-Day Certificates
Once short-lived certificates are an option for you, you’ll need to use an ACME client that supports ACME certificate profiles and select the short-lived certificate profile (“shortlived”). The lego client recently added this functionality.
In the meantime, the best way to prepare to take advantage of short-lived certificates is to make sure your ACME client is reliably renewing certificates in an automated fashion. If that’s working well then there should be no costs to switching to short-lived certificates.
You’ll also want to be sure your ACME client is running frequently - both for the sake of renewing short-lived certificates and so as to take advantage of ACME Renewal Information (ARI). ARI allows Let’s Encrypt to notify your client if it should renew early for some reason. ARI checks should happen at least once per day, and short-lived certificates should be renewed every two to three days, so we recommend having your client run at least once per day.
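As a rough illustration of that cadence, the sketch below (an assumption-laden heuristic, not prescribed client behavior) schedules renewal near the halfway point of a certificate’s lifetime plus jitter, so a six-day certificate renews about every three days; an ARI suggested window, when available, should take precedence.
package main

import (
    "fmt"
    "math/rand"
    "time"
)

// renewAt picks a renewal time for a certificate valid from notBefore to
// notAfter: roughly the halfway point of its lifetime, plus up to twelve
// hours of jitter so fleets of clients don't renew in lockstep. When the
// CA provides an ARI suggested window, use that instead.
func renewAt(notBefore, notAfter time.Time) time.Time {
    lifetime := notAfter.Sub(notBefore)
    jitter := time.Duration(rand.Int63n(int64(12 * time.Hour)))
    return notBefore.Add(lifetime / 2).Add(jitter)
}

func main() {
    // A six-day certificate: renewal lands around day three.
    notBefore := time.Now()
    notAfter := notBefore.Add(6 * 24 * time.Hour)
    fmt.Println("renew at:", renewAt(notBefore, notAfter).Format(time.RFC3339))
}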
Shorter Certificate Lifetimes Are Good for Security
When the private key associated with a certificate is compromised, the recommendation has always been to have the certificate revoked so that people will know not to use it. Unfortunately, certificate revocation doesn’t work very well. This means that certificates with compromised keys (or other issues) may continue to be used until they expire. The longer the lifetime of the certificate, the longer the potential for use of a problematic certificate.
The primary advantage of short-lived certificates is that they greatly reduce the potential compromise window because they expire relatively quickly. This reduces the need for certificate revocation, which has historically been unreliable. Our six-day certificates will not include OCSP or CRL URLs. Additionally, short-lived certificates practically require automation, and we believe that automating certificate issuance is important for security.
Questions
If you have questions or comments about our plans, feel free to let us know on our community forums.
We’d like to thank Open Technology Fund for supporting this work.
Our First 6-Day Certificate
PEM format:
-----BEGIN CERTIFICATE-----
MIIDSzCCAtGgAwIBAgISA7CwFcGk4mQWEXMacRtxHeDvMAoGCCqGSM49BAMDMDIx
CzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MQswCQYDVQQDEwJF
NjAeFw0yNTAyMTkxNzMwMDFaFw0yNTAyMjYwOTMwMDBaMAAwWTATBgcqhkjOPQIB
BggqhkjOPQMBBwNCAAQoSItt2V1aocI5dxrKR8iLfmm0KiVvOhiwKByzu2kLeC7C
0BdfAgtwdICdkuEhAXokhXLq6DNZZgmh5T4flVwZo4IB9zCCAfMwDgYDVR0PAQH/
BAQDAgeAMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwHwYDVR0j
BBgwFoAUkydGmAOpUWiOmNbEQkjbI79YlNIwVQYIKwYBBQUHAQEESTBHMCEGCCsG
AQUFBzABhhVodHRwOi8vZTYuby5sZW5jci5vcmcwIgYIKwYBBQUHMAKGFmh0dHA6
Ly9lNi5pLmxlbmNyLm9yZy8wKAYDVR0RAQH/BB4wHIIaaGVsbG93b3JsZC5sZXRz
ZW5jcnlwdC5vcmcwEwYDVR0gBAwwCjAIBgZngQwBAgEwggEFBgorBgEEAdZ5AgQC
BIH2BIHzAPEAdgDM+w9qhXEJZf6Vm1PO6bJ8IumFXA2XjbapflTA/kwNsAAAAZUf
d/zOAAAEAwBHMEUCIFNd51TfSNiJrO+294t49C5ANc4oC7gTUzf7xnlNlhKsAiEA
wi5hfiC9SsKLxlTQ0sctUxhLmdYh40r6ECWQS/yWw2AAdwDgkrP8DB3I52g2H95h
uZZNClJ4GYpy1nLEsE2lbW9UBAAAAZUfd/0TAAAEAwBIMEYCIQCs2NuZIUIloOaH
1t9eXDKb8bjoWESBPsK4i2BxMvEIswIhAOMNaQNyr1YkzrcNUz15qGV0oVLg5BJN
+ikWxXOdcRHFMAoGCCqGSM49BAMDA2gAMGUCMDANqy7G09AIwzXcd7SNl7uFwhC+
xlfduvp1PeEDHc/FA9K3mRYkGXuKtzNdOh7wcAIxALjEMDmBQiwXbB447oGkaZAe
0rqxA3EtNV5wj0obeObluj/NgUsVEG9OqiBIoggFRw==
-----END CERTIFICATE-----
openssl x509 -text output:
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
03:b0:b0:15:c1:a4:e2:64:16:11:73:1a:71:1b:71:1d:e0:ef
Signature Algorithm: ecdsa-with-SHA384
Issuer: C=US, O=Let's Encrypt, CN=E6
Validity
Not Before: Feb 19 17:30:01 2025 GMT
Not After : Feb 26 09:30:00 2025 GMT
Subject:
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
04:28:48:8b:6d:d9:5d:5a:a1:c2:39:77:1a:ca:47:
c8:8b:7e:69:b4:2a:25:6f:3a:18:b0:28:1c:b3:bb:
69:0b:78:2e:c2:d0:17:5f:02:0b:70:74:80:9d:92:
e1:21:01:7a:24:85:72:ea:e8:33:59:66:09:a1:e5:
3e:1f:95:5c:19
ASN1 OID: prime256v1
NIST CURVE: P-256
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature
X509v3 Extended Key Usage:
TLS Web Server Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Authority Key Identifier:
93:27:46:98:03:A9:51:68:8E:98:D6:C4:42:48:DB:23:BF:58:94:D2
Authority Information Access:
OCSP - URI:http://e6.o.lencr.org
CA Issuers - URI:http://e6.i.lencr.org/
X509v3 Subject Alternative Name: critical
DNS:helloworld.letsencrypt.org
X509v3 Certificate Policies:
Policy: 2.23.140.1.2.1
CT Precertificate SCTs:
Signed Certificate Timestamp:
Version : v1 (0x0)
Log ID : CC:FB:0F:6A:85:71:09:65:FE:95:9B:53:CE:E9:B2:7C:
22:E9:85:5C:0D:97:8D:B6:A9:7E:54:C0:FE:4C:0D:B0
Timestamp : Feb 19 18:28:32.078 2025 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:45:02:20:53:5D:E7:54:DF:48:D8:89:AC:EF:B6:F7:
8B:78:F4:2E:40:35:CE:28:0B:B8:13:53:37:FB:C6:79:
4D:96:12:AC:02:21:00:C2:2E:61:7E:20:BD:4A:C2:8B:
C6:54:D0:D2:C7:2D:53:18:4B:99:D6:21:E3:4A:FA:10:
25:90:4B:FC:96:C3:60
Signed Certificate Timestamp:
Version : v1 (0x0)
Log ID : E0:92:B3:FC:0C:1D:C8:E7:68:36:1F:DE:61:B9:96:4D:
0A:52:78:19:8A:72:D6:72:C4:B0:4D:A5:6D:6F:54:04
Timestamp : Feb 19 18:28:32.147 2025 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:46:02:21:00:AC:D8:DB:99:21:42:25:A0:E6:87:D6:
DF:5E:5C:32:9B:F1:B8:E8:58:44:81:3E:C2:B8:8B:60:
71:32:F1:08:B3:02:21:00:E3:0D:69:03:72:AF:56:24:
CE:B7:0D:53:3D:79:A8:65:74:A1:52:E0:E4:12:4D:FA:
29:16:C5:73:9D:71:11:C5
Signature Algorithm: ecdsa-with-SHA384
Signature Value:
30:65:02:30:30:0d:ab:2e:c6:d3:d0:08:c3:35:dc:77:b4:8d:
97:bb:85:c2:10:be:c6:57:dd:ba:fa:75:3d:e1:03:1d:cf:c5:
03:d2:b7:99:16:24:19:7b:8a:b7:33:5d:3a:1e:f0:70:02:31:
00:b8:c4:30:39:81:42:2c:17:6c:1e:38:ee:81:a4:69:90:1e:
d2:ba:b1:03:71:2d:35:5e:70:8f:4a:1b:78:e6:e5:ba:3f:cd:
81:4b:15:10:6f:4e:aa:20:48:a2:08:05:47
Thu, 20 Feb 2025 00:00:00 +0000
The Certificate Transparency ecosystem has been improving transparency for the web PKI since 2013. It helps make clear exactly what certificates each certificate authority has issued and makes sure errors or compromises of certificate authorities are detectable.
Let’s Encrypt participates in CT both as a certificate issuer and as a log operator. For the past year, we’ve also been running an experiment to help validate a next-generation design for Certificate Transparency logs. That experiment is now nearing a successful conclusion. We’ve demonstrated that the new architecture (called the “Static CT API”) works well, providing greater efficiency and making it easier to run huge and reliable CT log services with comparatively modest resources. The Static CT API also makes it easier to download and share data from CT logs.
The Sunlight log implementation, alongside other Static CT API log implementations, is now on a path to production use. Browsers are now officially accepting Static CT API logs into their log programs as a means to help guarantee that the contents of CA-issued certificates are all publicly disclosed and publicly accessible (see Safari’s and Chrome’s recent announcements), although the browsers also require the continued use of a traditional RFC 6962 log alongside the new type.
All of this is good news for everyone who runs, submits certificates to, or monitors a CT log: as the new architecture gets adopted, we can expect to see more organizations running more logs, at lower cost, and with greater overall capacity to keep up with the large volume of publicly-trusted certificates.
Certificate Transparency
Certificate Transparency (CT) was introduced in 2013 in response to concerns about how Internet users could detect misbehavior and compromise of certificate authorities. Prior to CT, it was possible for a CA to issue an inaccurate or malicious certificate that could be used to attack a relatively small number of users, and that might never come to wider attention. A team led by Google responded to this by creating a transparency log mechanism, where certificate authorities (like Let’s Encrypt) must disclose all of the certificates that we issue by submitting them to public log services. Web browsers now generally reject certificates unless the certificates include cryptographic proof (“Signed Certificate Timestamps”, or SCTs) demonstrating that they were submitted to and accepted by such logs.
The CT logs themselves use a cryptographic append-only ledger to prove that they haven’t deleted or modified their records. There are currently over a dozen CT log services, most of them also run by certificate authorities, including Let’s Encrypt’s own Oak log.
The Static CT API
The original 2013 CT log design has been used with relatively few technical changes since it was first introduced, but several other transparency logging systems have been created in other areas, such as sumdb for Golang, which helps ensure that the contents of Golang package updates are publicly recorded. While they were originally inspired by CT, more-recently invented transparency logs have improved on its design.
The current major evolution of CT was led by Filippo Valsorda, a cryptographer with an interest in transparency log mechanisms, with help from others in the CT ecosystem. Portions of the new design are directly based on sumdb. In addition to designing the new architecture, Valsorda also wrote the implementation that we’ve been using, called Sunlight, with support from Let’s Encrypt. We’re excited to see that there are now at least three other compatible implementations: Google’s trillian-tessera, Cloudflare’s Azul, and an independent project called Itko.
The biggest change for the Static CT API is that logs are now represented, and downloaded by verifiers, as simple collections of flat files (called “tiles,” so some implementers have also been referring to these as “tiled logs” or “tlogs”). Anyone who wants to download log data can do so just by downloading these files. This is great for log operators because these simple file downloads can be distributed in various ways, including caching by a CDN, which was less practical and efficient for the classic CT API.
The new design is also simpler and more efficient from the log operator’s perspective, making it cheaper to run logs. As we said last year, this may enable us and other operators to increase reliability and availability by running several separate logs, likely with lower overall resource requirements than a single traditional log.
Our Sunlight experiment

For the past year, we’ve run three Sunlight logs, called Twig, Willow, and Sycamore. We’ve been logging all of our own issued certificates, which represent a majority of the total volume of all publicly-trusted certificates, into our Sunlight logs. Sunlight logged these certificates quickly and correctly on relatively modest server hardware. Notably, each log’s write side was handled comfortably by just a single server. We also achieved high availability for these log services throughout the course of this experiment. (Because our Sunlight logs are not yet trusted by web browsers, we didn’t include the SCT proofs that they returned to us in the actual certificates we gave out to our subscribers; those proofs wouldn’t have been of use to our subscribers yet and would just have taken up space.)
A potential failure mode of traditional CT logs is that they could be unacceptably slow in incorporating newly-submitted certificates (known as missing the maximum merge delay), which can result in a log becoming distrusted. This isn’t a possibility for our new Sunlight-based logs: they always completely incorporate newly-submitted certificates before returning an SCT to the submitter, so the effective merge delay is zero! Of course, any log can suffer outages for a variety of reasons, but this feature of Sunlight makes it less likely that any outages will be fatal to a log’s continued operation.
We’ve demonstrated that Sunlight and the Static CT API work in practice, and this demonstration has helped to confirm the browser developers’ hope that Static CT API logs can become an officially-supported part of CT. As a result, the major browsers that enforce CT have now permitted Static CT API logs to apply for inclusion in browsers as publicly-trusted logs, and we’re preparing to apply for this status for our Willow and Sycamore logs with the Chrome and Safari CT log programs.
Let’s Encrypt will run at least these two logs, and possibly others over time, for the foreseeable future. Once they’re trusted by browsers, we’ll encourage other CAs to submit to them as well, and we’ll begin including SCTs from these logs in our own certificates (alongside SCTs from traditional CT logs).
How to participate
The new Static CT API and the rollout of tile-based logs will bring various changes and opportunities for community members.
New Certificate Transparency log operators
Companies and non-profit organizations could help support the web PKI by running a CT log and applying for it to be publicly trusted. Implementations like Sunlight will have substantially lower resource requirements than first-generation CT logs, particularly when cached behind a CDN. The biggest resource demands for a log operator will be storage and upstream bandwidth. A publicly-trusted log is also expected to maintain relatively high availability, because CAs need logs to be available in order to continue issuing certificates.
We don’t have statistics to share about the exact resource requirements for such a log yet, but after we have practical experience running a fully publicly-trusted Sunlight log, we should be able to make this more concrete. As noted above, the compute side of the log can be handled by a single server. Sunlight author Filippo Valsorda has recently started running a Sunlight log—also on just a single server—and offered more detailed cost breakdowns for that log’s setup, with an estimated total cost around $10,000 per year. The costs for our production Static CT API logs may be higher than those for Filippo’s log, but still far less than the costs for our traditional RFC 6962 logs.
As with trust decisions about CAs, browser developers are the authorities about which CT logs become publicly trusted. Although any person or organization can run a log, browser developers will generally prefer to trust logs whose continued availability they’re confident of—typically those run by stable organizations with experience running some form of public Internet services. Unlike becoming a certificate authority, running a log does not require a formal audit, as the validation of the log’s availability and correctness can be performed purely by observation.
Certificate authorities
Once the Willow and Sycamore logs are trusted by browsers, our fellow certificate authorities can choose to start logging certificates to them as part of their issuance processes. (Initially, you should still include at least one SCT from a traditional CT log in each certificate.) The details, including the log API endpoints and keys, are available at our CT log page. You can start submitting to these logs right away if you prefer; just bear in mind that the SCTs they return aren’t useful to subscribers yet, and won’t be useful until browsers are updated to trust the new logs.
CT data users
You can monitor CT in order to watch for certificate issuances for your own domain names, or as part of monitoring or security products or services, or for Internet security research purposes. Many of our colleagues have been doing this for some time as a part of various tools they maintain. The Static CT API should make this easier, because you’ll be able to download and share log tiles as sets of ordinary files.
If you already run such monitoring tools, please note that you’ll need to update your data pipeline in order to access Static CT API logs; since the read API is not backwards-compatible, CT API clients will need to be modified to support the new API. Without updated tools, your view of the CT system will become partial!
Also note that getting a complete view of all of CT will still require downloading data from traditional logs, which will probably continue to be true for several years.
Software developers
As logs based on the new API enter production use, it will be important to have tools to interact with and search these logs. We can all benefit from more software that understands how to do this. Since file downloads are such a familiar piece of software functionality, it will probably be easier for developers to develop against the new API compared to the original one.
We’ve also continued to see greater integration of transparency logging tools into other kinds of services, such as software updates. There’s a growing transparency log ecosystem that’s always in need of more tools and integrations. As we mentioned above, transparency logs are increasingly learning from one another, and there are also mechanisms for more direct integration between different kinds of transparency logs (known as “witnessing”). Software developers can help improve different aspects of Internet security by contributing to this active and growing area.
Conclusion
The Certificate Transparency community and larger transparency logging community have experienced a virtuous cycle of innovation, sharing ideas and implementation code between different systems and demonstrating the feasibility of new mechanisms and functionality. With the advent of tile-based logging in CT, the state of the art has moved forward in a way that helps log operators run our logs much more efficiently without compromising security.
We’re proud to have participated in this experiment and the engineering conversation around the evolution of logging architectures. Now that we’ve shown how well the new API really works at scale, we look forward to having publicly-trusted Sunlight logs later this year!
Every night, right around midnight (mainly UTC), a horde of zombies wakes up and clamors for … digital certificates!
The zombies in question are abandoned or misconfigured Internet servers and ACME clients that have been set to request certificates from Let’s Encrypt. As our certificates last for at most 90 days, these zombie clients’ software knows that their certificates are out-of-date and need to be replaced. What they don’t realize is that their quest for new certificates is doomed! These devices are cursed to seek certificates again and again, never receiving them.
But they do use up a lot of certificate authority resources in the process.
The Zombie Client Problem
Unlike a human being, software doesn’t give up in frustration, or try to modify its approach, when it repeatedly fails at the same task. Our emphasis on automation means that the vast majority of Let’s Encrypt certificate renewals are performed by automated software. This is great when those renewals succeed, but it also means that forgotten clients and devices can continue requesting renewals unsuccessfully for months, or even years.
How might that happen? Most often, it happens when a device no longer has a domain name pointed to it. The device itself doesn’t know that this has changed, so it treats renewal failures as transient even though they are actually permanent. For instance:
- An organization may have allowed a domain name registration to lapse because it is no longer needed, but its servers are still configured to request certs for it.
- Or, a home user stopped using a particular dynamic-DNS domain with a network-attached storage device, but is still using that device at home. The device doesn’t realize that the user no longer expects to use the name, so it keeps requesting certs for it.
- Or, a web hosting or CDN customer migrated to a different service provider, but never informed the old service provider. The old service provider’s servers keep requesting certs unsuccessfully. If the customer was in a free service tier, there might not be invoices or charges reminding the customer to cancel the service.
- Or any number of other, subtler changes in a subscriber’s infrastructure, such as changing a firewall rule or some webserver configuration.
At the scale of Let’s Encrypt, which now covers hundreds of millions of names, scenarios like these have become common, and their impact has become substantial. In 2024, we noticed that about half of all certificate requests to the Let’s Encrypt ACME API came from about a million accounts that never successfully complete any validations. Many of these had completed validations and issued certificates sometime in the past, but nowadays every single one of their validation attempts fails, and they show no signs that this will change anytime soon.
Unfortunately, trying to validate those futile requests still uses resources. Our CA software has to generate challenges, reach out and attempt to validate them over the Internet, detect and report failures, and record all of the associated information in our databases and audit logs. And over time, we’ve seen more and more recurring failures: accounts that always fail their issuance requests have been growing at around 18% per year.
In January, we mentioned that we had been addressing the zombie client problem through our rate limit system. This post provides more detail on that progress.
Our Rate Limit Philosophy
If you’ve used Let’s Encrypt as a subscriber, you may have run into one of our rate limits at some point, maybe during your initial setup process. We have eight different kinds of rate limits in place now; as our January post describes, they’ve become more algorithmically sophisticated and grown to address a wider range of problems. A key principle for Let’s Encrypt is that our rate limiting is not a punishment. We don’t think of rate limits as a way of retaliating against a client for misbehavior. Rate limits are simply a tool to maximize the efficient use of our limited resources and prevent people and programs from using up those resources for no constructive purpose.
We’ve consistently tried to design our rate limit mechanisms in line with that philosophy. So if a misconfiguration or misunderstanding has caused excessive requests in the past, we’re still happy to welcome the user in question back and start issuing them certificates again—once the problem has been addressed. We want the rate limits to put a brake on wasteful use of our systems, but not to frustrate users who are actively trying to make Let’s Encrypt work for them.
In addition, we’ve always implemented our rate limits to err on the side of permissiveness. For example, if the Redis instances where rate limits are tracked have an outage or lose data, the system is designed to permit more issuance rather than less issuance as a result.
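As an illustrative sketch of that fail-open design (the names and interfaces here are hypothetical, not Boulder’s actual code), a rate limit check might swallow backend errors and allow the request:

package main

import (
	"context"
	"errors"
	"fmt"
)

// limiter is a stand-in for a Redis-backed rate limit store; this
// interface is illustrative only.
type limiter interface {
	// Spend reports whether the request fits within the limit.
	Spend(ctx context.Context, key string) (allowed bool, err error)
}

// checkLimit errs on the side of permissiveness: if the backing store
// is down or returns an error, we allow the request rather than risk
// blocking valid issuance.
func checkLimit(ctx context.Context, l limiter, key string) bool {
	allowed, err := l.Spend(ctx, key)
	if err != nil {
		return true // fail open
	}
	return allowed
}

// downStore simulates an unreachable rate limit backend.
type downStore struct{}

func (downStore) Spend(context.Context, string) (bool, error) {
	return false, errors.New("redis: connection refused")
}

func main() {
	// Prints "true": the outage does not block issuance.
	fmt.Println(checkLimit(context.Background(), downStore{}, "acct:123"))
}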
We wanted to create additional limits that would target zombie clients, but in a correspondingly non-punitive way that would avoid any disruption to valid issuance, and welcome subscribers back quickly if they happened to notice and fix a long-time problem with their setups.
Our Zombie-Related Rate Limits and Their Impact
In planning a new zombie-specific response, we decided on a “pausing” approach, which can temporarily limit an account’s ability to proceed with certificate requests. The core idea is that, if a particular account consistently fails to complete validation for a particular hostname, we’ll pause that account-hostname pair. The pause means that any new order requests from that account for that hostname will be rejected immediately, before we get to the resource-intensive validation phase.
This approach is more finely targeted than pausing an entire account. Pausing account-hostname pairs means that your ability to issue certs for a specific name could be paused due to repeated failures, but you can still get all of your other certs like normal. So a large hosting provider doesn’t have to fear that its certificate issuance on behalf of one customer will be affected by renewal failures related to a problem with a different customer’s domain name. The account-specificity of the pause, in turn, means that validation failures from one subscriber or device won’t prevent a different subscriber or device from attempting to validate the same name, as long as the devices in question don’t share a single Let’s Encrypt account.
In September 2024, we began applying our zombie rate limits manually by pausing about 21,000 of the most recurrently-failing account-hostname pairs, those which were consistently repeating the same failed requests many times per day, every day. After implementing that first round of pauses, we immediately saw a significant impact on our failed request rates. As we announced at that time, we also began using a formula to automatically pause other zombie client account-hostname pairs from December 2024 onward. The associated new rate limit is called “Consecutive Authorization Failures per Hostname Per Account” (and is independent of the existing “Authorization Failures per Hostname Per Account” limit, which resets every hour).
This formula relates to the frequency of successive failed issuance requests for the same domain name by the same Let’s Encrypt account. It applies only to failures that happen again and again, with no successful issuances at all in between: a single successful validation immediately resets the rate limit all the way to zero. Like all of our rate limits, this is not a punitive measure but is simply intended to reduce the waste of resources. So, we decided to set the thresholds rather high in the expectation that we would catch only the most disruptive zombie clients, and ultimately only those clients that were extremely unlikely to succeed in the future based on their substantial history of failed requests. We don’t hurry to block requesters as zombies: according to our current formula, client software following the default established by EFF’s Certbot (two renewal attempts per day) would be paused as a zombie only after about ten years of constant failures. More aggressive failed issuance attempts will get a client paused sooner, but clients will generally have to fail hundreds or thousands of attempts in a row before they are paused.
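To make those semantics concrete, here’s a simplified model of the consecutive-failure counter (the threshold and data structures are illustrative; the production limit lives in our rate limit system, not in code like this):

package main

import "fmt"

// pauser models the "Consecutive Authorization Failures per Hostname
// Per Account" limit in miniature; the threshold here is illustrative,
// far below the real one.
type pauser struct {
	threshold int
	failures  map[string]int // keyed by account + hostname pair
	paused    map[string]bool
}

func (p *pauser) recordFailure(account, hostname string) {
	key := account + "|" + hostname
	p.failures[key]++
	if p.failures[key] >= p.threshold {
		p.paused[key] = true // only this pair is paused, never the whole account
	}
}

func (p *pauser) recordSuccess(account, hostname string) {
	// A single successful validation resets the counter all the way to zero.
	delete(p.failures, account+"|"+hostname)
}

func main() {
	p := &pauser{threshold: 3, failures: map[string]int{}, paused: map[string]bool{}}
	p.recordFailure("acct1", "old.example.com")
	p.recordFailure("acct1", "old.example.com")
	p.recordSuccess("acct1", "old.example.com") // counter back to zero
	p.recordFailure("acct1", "old.example.com")
	fmt.Println(p.paused["acct1|old.example.com"]) // false: no pause
}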
Most subscribers using mainstream client applications with default configurations will never encounter this rate limit, even if they forget to deactivate renewal attempts for domains that are no longer pointed at their servers. As described below, our current limit is already providing noticeable benefits with minimal disruption, and we’re likely to tighten it a bit in the near future, so it will trigger after somewhat fewer consecutive failures.
Self-Service Unpausing
A key feature in our zombie issuance pausing mechanism is self-service unpausing. Whenever an account-hostname pair is paused, any new certificate requests for that hostname submitted by that account are immediately rejected. But this means that the “one successful validation immediately resets the rate limit counter” feature can no longer come into effect: once a pair is paused, the client can’t even attempt validation anymore.
So every rejection comes with an error message explaining what has happened and a custom link that can be used to immediately unpause that account-hostname pair and remove any other pauses on the same account at the same time. The point of this is that subscribers who notice at some point that issuance is failing and want to intervene to get it working again have a straightforward option to let Let’s Encrypt know that they’re aware of the recurring failures and are still planning to use a particular account. As soon as subscribers notify us via the self-service link, they’ll be able to issue certificates again.
Currently, the user interface for an affected subscriber looks like this:

[Screenshot: the self-service unpause page]

This link is provided via an ACME error message in response to any request blocked due to a paused account-hostname pair.
As it’s turned out, the unpause option shown above has only been used by about 3% of affected accounts! This goes to show that most of the zombies we’ve paused were, in fact, well and truly forgotten about.
However, the unpause feature is there for whenever it’s needed, and there may be cases when it will become more important. A very large integration could trigger the zombie-related rate limits if a newly-introduced software bug causes what looks like a very high volume of zombie requests in a very short time. In that case, once that bug has been noticed and fixed, an integrator may need to unpause its issuance on behalf of lots of customers at once. Our unpause feature permits unpausing 50,000 domain names on a single account at a time, so even the largest integrators can get themselves unpaused expeditiously in this situation.
Conclusion
We’ve been very happy with the results of our zombie mitigation measures, and, as far as we can tell, there’s been almost no impact for subscribers! Our statistics indicate that we’ve managed to reduce the load on our infrastructure while causing no detectable harm or inconvenience to subscribers’ valid issuance requests.
Since implementing the manual pauses in September and the automated pauses in December, we’ve seen:
- Over 100,000 account-hostname pairs paused for excessive failures.
- Zero (!) associated complaints or support requests.
- About 3,200 people manually unpausing issuance.
- Failed certificate orders down by about 30% so far, with further declines expected as we fine-tune the rate limit formula and catch more zombie clients.
The new rate limit and the self-service unpause system are also ready to deal with circumstances that might produce more zombie clients in the future. For instance, we’ve announced that we’re going to be discontinuing renewal reminder emails soon. If some subscribers overlook failed renewals in the future, we might see more paused clients that result from unintentional renewal failures. We think taking advantage of the existing self-service unpause feature will be straightforward in that case. But it’s much better to notice problems and get them fixed up front, so please remember to set up your own monitoring to avoid unnoticed renewal failures in the future.
If you’re a subscriber who’s had occasion to use the self-service unpause feature, we’d love your feedback on the Community Forum about your experience using the feature and the circumstances that surrounded your account’s getting paused.
Also, if you’re a Let’s Encrypt client developer, please remember to make renewal requests at a random time (not precisely at midnight) so that the load on our infrastructure is smoothed out. You can also reduce the impact of zombie renewals by repeating failed requests somewhat less frequently over time (a “back-off” strategy), especially if the failure reason makes it look like a domain name may no longer be in use at all.
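Here’s a minimal sketch of both suggestions, with illustrative constants: pick a persistent random offset for the daily renewal run, and wait longer between attempts after consecutive failures.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// randomDailyOffset picks a random time of day for renewal runs, to be
// chosen once and persisted, so clients don't all wake at midnight UTC.
func randomDailyOffset() time.Duration {
	return time.Duration(rand.Int63n(int64(24 * time.Hour)))
}

// backoff returns how long to wait before retrying after `failures`
// consecutive failed renewal attempts; the constants are illustrative.
func backoff(failures int) time.Duration {
	d := 12 * time.Hour
	for i := 0; i < failures && d < 7*24*time.Hour; i++ {
		d *= 2 // double the wait after each consecutive failure
	}
	return d
}

func main() {
	fmt.Println("first run at midnight +", randomDailyOffset().Round(time.Minute))
	for f := 0; f <= 4; f++ {
		fmt.Printf("after %d failures, wait %v\n", f, backoff(f))
	}
}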
At Let’s Encrypt we know that building a secure Internet isn’t just a technical challenge—it’s a long-term commitment. Over the past decade we’ve made enormous strides: from issuing billions of TLS certificates to continually innovating to keep the web safer and more accessible. But none of this would be possible without recurring donations from individuals and organizations around the world.
Recurring donations are more than just financial support; they allow us to plan, innovate, and keep improving with confidence, knowing that month after month, year after year, our supporters are there. This consistent backing empowers us to maintain a secure, privacy-respecting Internet for all.
Our tenth anniversary tagline, Encryption for Everybody, highlights this vision. It’s both a technical goal and a fundamental belief that secure communication should be available to everyone, everywhere.
When we asked our recurring donors why they give, their responses affirmed how essential this commitment is. One longtime supporter shared:
Supporting Let's Encrypt aligns with my belief in a privacy-conscious world, where encrypted communication is the default.
For some, it’s about paying it forward, helping future users benefit as they once did:
For my 18th birthday, I got my last name as a domain. As a young tech enthusiast with little money, Let's Encrypt made it possible for me to get a TLS certificate and learn about technology. Back then, I was a student using it for free. Now that I have a stable income, donating is my way of giving back and helping others have the same opportunities I did.
The next decade of Let’s Encrypt will likely be about maintaining that commitment to encryption for everybody. It’s about ensuring that our work remains reliable, accessible, and—most importantly—supported by people who believe in what we do. To everyone who’s been part of this journey, thank you. We couldn’t do it without you.
During Let’s Encrypt’s 10th Anniversary Year, we’re celebrating our community and reflecting on our journey. We’d be thrilled to hear from you. Connect with us on LinkedIn, our community forum, or email us at outreach@letsencrypt.org. Let’s keep building a secure Internet together!
Let’s Encrypt is a project of the nonprofit Internet Security Research Group, a 501(c)(3) nonprofit. To support our work, visit letsencrypt.org/donate.
Let’s Encrypt will no longer include the “TLS Client Authentication” Extended Key Usage (EKU) in our certificates beginning in 2026. Most users who use Let’s Encrypt to secure websites won’t be affected and won’t need to take any action. However, if you use Let’s Encrypt certificates as client certificates to authenticate to a server, this change may impact you.
To minimize disruption, Let’s Encrypt will roll this change out in multiple stages, using ACME Profiles:
- Today: Let’s Encrypt already excludes the Client Authentication EKU on our tlsserver ACME profile. You can verify compatibility by issuing certificates with this profile now.
- October 1, 2025: Let’s Encrypt will launch a new tlsclient ACME profile which will retain the TLS Client Authentication EKU. Users who need additional time to migrate can opt in to this profile (see the sketch after this list).
- February 11, 2026: the default classic ACME profile will no longer contain the Client Authentication EKU.
- May 13, 2026: the tlsclient ACME profile will no longer be available, and no further certificates with the Client Authentication EKU will be issued.
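Mechanically, a client opts in to a profile per order. Under the draft ACME profiles extension, the new-order request carries a profile name; the sketch below shows the shape of such a payload (the field name follows the draft, and your ACME client may expose this as a flag or config option instead):

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

func main() {
	// Shape of an ACME new-order payload that selects a profile by name,
	// per the draft ACME profiles extension. In practice, an ACME client
	// library constructs, signs, and POSTs this for you.
	order := map[string]any{
		"identifiers": []map[string]string{
			{"type": "dns", "value": "device.example.com"},
		},
		// "tlsclient" is the transitional profile retaining the
		// Client Authentication EKU until May 13, 2026.
		"profile": "tlsclient",
	}
	body, err := json.MarshalIndent(order, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body))
}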
Once this is completed, Let’s Encrypt will switch to issuing with new intermediate Certificate Authorities which also do not contain the TLS Client Authentication EKU.
For some background: all certificates include a list of intended uses, known as Extended Key Usages (EKUs). Let’s Encrypt certificates have historically included two EKUs: TLS Server Authentication and TLS Client Authentication.
- TLS Server Authentication is used to authenticate connections to TLS Servers, like websites.
- TLS Client Authentication is used by clients to authenticate themselves to a server. This feature is not typically used on the web, and is not required on the certificates used on a website.
After this change is complete, only TLS Server Authentication will be available from Let’s Encrypt.
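If you’re unsure whether a certificate you rely on carries the Client Authentication EKU, you can check directly. The sketch below uses Go’s standard crypto/x509 package to list a PEM certificate’s EKUs (pass the certificate path as the first argument):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Read a PEM-encoded certificate from the path on the command line.
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Report which Extended Key Usages the certificate carries.
	for _, eku := range cert.ExtKeyUsage {
		switch eku {
		case x509.ExtKeyUsageServerAuth:
			fmt.Println("TLS Server Authentication")
		case x509.ExtKeyUsageClientAuth:
			fmt.Println("TLS Client Authentication")
		default:
			fmt.Println("other EKU:", eku)
		}
	}
}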
This change is prompted by changes to Google Chrome’s root program requirements, which impose a June 2026 deadline to split TLS Client and Server Authentication into separate PKIs. Many uses of client authentication are better served by a private certificate authority, and so Let’s Encrypt is discontinuing support for TLS Client Authentication ahead of this deadline.
How Pebble Supports ACME Client Developers
Together with the IETF community, we created the ACME standard to support completely automated certificate issuance. This open standard is now supported by dozens of clients. On the server side, did you know that we have not one but two open-source ACME server implementations?
The big implementation, which we use ourselves in production, is called Boulder. Boulder handles all of the facets and details needed for a production certificate authority, including policy compliance, database interfaces, challenge verifications, and logging. You can adapt and use Boulder yourself if you need to run a real certificate authority, including an internal, non-publicly-trusted ACME certificate authority within an organization.
The small implementation is called Pebble. It’s meant entirely for testing, not for use as a real certificate authority, and we and ACME client developers use it for various automated and manual testing purposes. For example, Certbot has used Pebble in its development process for years in order to perform a series of basic but realistic checks of the ability to request and obtain certificates from an ACME server.
Pebble is Easy to Use for ACME Client Testing
For any developer or team creating an ACME client application, Pebble solves a range of problems along the lines of “how do I check whether I’ve implemented ACME correctly, so that I could actually get certificates from a CA, without necessarily using a real domain name, and without running into CA rate limits during my routine testing?” Pebble is quick and easy to set up if you need to test an ACME client’s functionality.
It runs in RAM without dependencies or persistence; you won’t need to set up a database or a configuration for it. You can get Pebble running with a single Go command in just a few seconds, and immediately start making local ACME requests. That makes it suitable for inclusion in a client’s integration test suite, enabling much more realistic integration tests without needing to worry about real domains, CA rate limits, or network outages.
We see Pebble getting used in the official test suites for ACME clients including getssl, Lego, Certbot, simp_le, and others. In many cases, every change committed to the ACME client’s code base is automatically tested against Pebble.
Pebble is Intentionally Different From Boulder
Pebble is also deliberately different from Boulder in some places in order to provide clients with an opportunity to interoperate with slightly different ACME implementations. The Pebble code explains that
[I]n places where the ACME specification allows customization/CA choice Pebble aims to make choices different from Boulder. For instance, Pebble changes the path structures for its resources and directory endpoints to differ from Boulder. The goal is to emphasize client specification compatibility and to avoid "over-fitting" on Boulder and the Let's Encrypt production service.
For instance, the Let’s Encrypt service currently offers its newAccount resource at the path /acme/new-acct, whereas Pebble uses a different name, /sign-me-up, so clients will be reminded to check the directory rather than assuming a specific path. Other substantive differences include:
- Pebble rejects 5% of all requests as having an invalid nonce, even if the nonce was otherwise valid, so clients can test how they respond to this error condition
- Pebble only reuses valid authorizations 50% of the time, so clients can check their ability to perform validations when they might not have expected to
- Pebble truncates timestamps to a different degree of precision than Boulder
- Unlike Boulder, Pebble respects the notBefore and notAfter fields of new-order requests
The ability of ACME clients to work with both implementations is a good test of their conformance to the ACME specification, showing they avoid baking in assumptions about the current behavior of the Let’s Encrypt service in particular. This helps ensure that clients will work properly with other ACME CAs, and also with future versions of Let’s Encrypt’s own API.
Pebble is Useful to Both Let’s Encrypt and Client Developers as ACME Evolves
We often test out new ACME features by implementing them, at least in a simplified form, in Pebble before Boulder. This lets us and client developers experiment with support for those features even before they get rolled out in our staging service. We can do this quickly because a Pebble feature implementation doesn’t have to work with a full-scale CA backend.
We continue to encourage ACME client developers to use a copy of Pebble to test their clients’ functionality and ACME interoperability. It’s convenient and it’s likely to increase the correctness and robustness of their client applications.
Try Out Pebble Yourself
Want to try Pebble with your ACME client right now? On a Unix-like system, you can run
git clone https://github.com/letsencrypt/pebble/
cd pebble
go run ./cmd/pebble
Wait a few seconds; now you have a working ACME CA directory available at https://localhost:14000/dir! Your local ACME server can immediately receive requests and issue certificates, though not publicly-trusted ones, of course. (If you prefer, we also offer other options for installing Pebble, like a Docker image.)
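For example, here’s a minimal sketch that asks that directory which endpoints the server offers, per RFC 8555. Pebble serves HTTPS with its own test certificate, so this test-only client skips verification (never do this in production):

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Pebble's HTTPS certificate isn't publicly trusted; skipping
	// verification is acceptable only for local testing like this.
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	resp, err := client.Get("https://localhost:14000/dir")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// RFC 8555 directory: discover endpoint URLs rather than assuming paths.
	var dir map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&dir); err != nil {
		log.Fatal(err)
	}
	for _, key := range []string{"newNonce", "newAccount", "newOrder"} {
		fmt.Printf("%s: %v\n", key, dir[key])
	}
}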
We welcome code contributions to Pebble. For example, ACME client developers may want to add simple versions of an ACME feature that’s not currently tested in Pebble in order to make their test suites more comprehensive. Also, if you notice a possibly unintended divergence between Pebble and Boulder or Pebble and the ACME specification, we’d love for you to let us know.
As we touched on in our first blog post highlighting ten years of Let’s Encrypt: just as remarkable to us as the technical innovations behind proliferating TLS at scale is the sustained generosity we have benefited from throughout our first decade.
With that sense of gratitude top of mind, we are proud to announce a contribution of $1,000,000 from Jeff Atwood. Jeff has been a longtime supporter of our work, beginning many years ago with Discourse providing our community forum pro bono, something Discourse still provides to this day. As best we can tell, our forum has helped hundreds of thousands of people get up and running with Let’s Encrypt—an impact that has helped billions of people use an Internet that’s more secure and privacy-respecting thanks to widely adopted TLS.
When we first spoke with Jeff about the road ahead for Let’s Encrypt back in 2023, we knew a few things wouldn’t change no matter how the Internet changes over the next decade:
- Free TLS is the only way to ensure it is and remains accessible to as many people as possible.
- Let’s Encrypt is here to provide a reliable, trusted, and sound service no matter the scale.
- Generosity from our global community of supporters will be how we sustain our work.
We’re proud that Jeff not only agrees, but has chosen to support us in such a meaningful way. In discussing how Jeff might want us to best celebrate his generosity and recognize his commitment to our work, he shared:
Let's Encrypt is a golden example of how creating inalienable good is possible with the right approach and the right values. And while I'm excited about the work Let's Encrypt has done, I am eager to see their work continue to keep up with the growing Web; to sustain encryption for everybody at Internet scale. To do so is going to take more than me—it's going to take a community of people committed to this work. I am confident Let's Encrypt is a project that deserves all of our support, in ways both large and small.
Indeed, this contribution is significant because of its scale, but more importantly because of its signal: a signal that supporting the not-so-glamorous but oh-so-nerdy work of encryption at scale matters to the lives of billions of people every day; a signal that supporting free privacy and security afforded by TLS for all of the Internet’s five billion users just makes sense.
Ten years ago we set out to build a better Internet through easy to use TLS. If you or your organization have supported us throughout the years, thank you for joining Jeff in believing in the work of Let’s Encrypt. For a deeper dive into the impact of Let’s Encrypt and ISRG’s other projects, take a look at our most recent annual report.
Let’s Encrypt is a project of the nonprofit Internet Security Research Group, a 501(c)(3) nonprofit committed to protecting Internet users by lowering monetary, technological, and informational barriers to a more secure and privacy-respecting Internet. For more, visit abetterinternet.org. Press inquiries can be sent to press@abetterinternet.org
Earlier this year we announced our intention to introduce short-lived certificates with lifetimes of six days as an option for our subscribers. Yesterday we issued our first short-lived certificate. You can see the certificate at the bottom of our post, or look it up in Certificate Transparency logs. We issued it to ourselves and then immediately revoked it so we can observe the certificate’s whole lifecycle. This is the first step towards making short-lived certificates available to all subscribers.
The next step is for us to make short-lived certificates available to a small set of our subscribers so we can make sure our systems scale as expected prior to general availability. We expect this next phase to begin during Q2 of this year.
We expect short-lived certificates to be generally available by the end of this year.
How To Get Six-Day Certificates
Once short-lived certificates are an option for you, you’ll need to use an ACME client that supports ACME certificate profiles and select the short-lived certificate profile (“shortlived”). The lego client recently added this functionality.
In the meantime, the best way to prepare to take advantage of short-lived certificates is to make sure your ACME client is reliably renewing certificates in an automated fashion. If that’s working well then there should be no costs to switching to short-lived certificates.
You’ll also want to be sure your ACME client is running frequently - both for the sake of renewing short-lived certificates and so as to take advantage of ACME Renewal Information (ARI). ARI allows Let’s Encrypt to notify your client if it should renew early for some reason. ARI checks should happen at least once per day, and short-lived certificates should be renewed every two to three days, so we recommend having your client run at least once per day.
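As a rough illustration of that cadence (a rule-of-thumb sketch, not a substitute for honoring ARI’s suggested renewal window): renewing once half of a certificate’s lifetime has elapsed yields the two-to-three-day rhythm for six-day certificates.

package main

import (
	"fmt"
	"time"
)

// shouldRenew reports whether a certificate is at least halfway through
// its lifetime — a simple heuristic that renews six-day certificates
// every three days or so. Prefer ARI's suggested window when your
// client supports it.
func shouldRenew(notBefore, notAfter, now time.Time) bool {
	lifetime := notAfter.Sub(notBefore)
	return now.Sub(notBefore) >= lifetime/2
}

func main() {
	issued := time.Now().Add(-3 * 24 * time.Hour) // issued three days ago
	expires := issued.Add(6 * 24 * time.Hour)     // six-day certificate
	fmt.Println(shouldRenew(issued, expires, time.Now())) // true: time to renew
}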
Shorter Certificate Lifetimes Are Good for Security
When the private key associated with a certificate is compromised, the recommendation has always been to have the certificate revoked so that people will know not to use it. Unfortunately, certificate revocation doesn’t work very well. This means that certificates with compromised keys (or other issues) may continue to be used until they expire. The longer the lifetime of the certificate, the longer the potential for use of a problematic certificate.
The primary advantage of short-lived certificates is that they greatly reduce the potential compromise window because they expire relatively quickly. This reduces the need for certificate revocation, which has historically been unreliable. Our six-day certificates will not include OCSP or CRL URLs. Additionally, short-lived certificates practically require automation, and we believe that automating certificate issuance is important for security.
Questions
If you have questions or comments about our plans, feel free to let us know on our community forums.
We’d like to thank Open Technology Fund for supporting this work.
Our First 6-Day Certificate
PEM format:
-----BEGIN CERTIFICATE-----
MIIDSzCCAtGgAwIBAgISA7CwFcGk4mQWEXMacRtxHeDvMAoGCCqGSM49BAMDMDIx
CzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MQswCQYDVQQDEwJF
NjAeFw0yNTAyMTkxNzMwMDFaFw0yNTAyMjYwOTMwMDBaMAAwWTATBgcqhkjOPQIB
BggqhkjOPQMBBwNCAAQoSItt2V1aocI5dxrKR8iLfmm0KiVvOhiwKByzu2kLeC7C
0BdfAgtwdICdkuEhAXokhXLq6DNZZgmh5T4flVwZo4IB9zCCAfMwDgYDVR0PAQH/
BAQDAgeAMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwHwYDVR0j
BBgwFoAUkydGmAOpUWiOmNbEQkjbI79YlNIwVQYIKwYBBQUHAQEESTBHMCEGCCsG
AQUFBzABhhVodHRwOi8vZTYuby5sZW5jci5vcmcwIgYIKwYBBQUHMAKGFmh0dHA6
Ly9lNi5pLmxlbmNyLm9yZy8wKAYDVR0RAQH/BB4wHIIaaGVsbG93b3JsZC5sZXRz
ZW5jcnlwdC5vcmcwEwYDVR0gBAwwCjAIBgZngQwBAgEwggEFBgorBgEEAdZ5AgQC
BIH2BIHzAPEAdgDM+w9qhXEJZf6Vm1PO6bJ8IumFXA2XjbapflTA/kwNsAAAAZUf
d/zOAAAEAwBHMEUCIFNd51TfSNiJrO+294t49C5ANc4oC7gTUzf7xnlNlhKsAiEA
wi5hfiC9SsKLxlTQ0sctUxhLmdYh40r6ECWQS/yWw2AAdwDgkrP8DB3I52g2H95h
uZZNClJ4GYpy1nLEsE2lbW9UBAAAAZUfd/0TAAAEAwBIMEYCIQCs2NuZIUIloOaH
1t9eXDKb8bjoWESBPsK4i2BxMvEIswIhAOMNaQNyr1YkzrcNUz15qGV0oVLg5BJN
+ikWxXOdcRHFMAoGCCqGSM49BAMDA2gAMGUCMDANqy7G09AIwzXcd7SNl7uFwhC+
xlfduvp1PeEDHc/FA9K3mRYkGXuKtzNdOh7wcAIxALjEMDmBQiwXbB447oGkaZAe
0rqxA3EtNV5wj0obeObluj/NgUsVEG9OqiBIoggFRw==
-----END CERTIFICATE-----
openssl x509 -text output:
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
03:b0:b0:15:c1:a4:e2:64:16:11:73:1a:71:1b:71:1d:e0:ef
Signature Algorithm: ecdsa-with-SHA384
Issuer: C=US, O=Let's Encrypt, CN=E6
Validity
Not Before: Feb 19 17:30:01 2025 GMT
Not After : Feb 26 09:30:00 2025 GMT
Subject:
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
04:28:48:8b:6d:d9:5d:5a:a1:c2:39:77:1a:ca:47:
c8:8b:7e:69:b4:2a:25:6f:3a:18:b0:28:1c:b3:bb:
69:0b:78:2e:c2:d0:17:5f:02:0b:70:74:80:9d:92:
e1:21:01:7a:24:85:72:ea:e8:33:59:66:09:a1:e5:
3e:1f:95:5c:19
ASN1 OID: prime256v1
NIST CURVE: P-256
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature
X509v3 Extended Key Usage:
TLS Web Server Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Authority Key Identifier:
93:27:46:98:03:A9:51:68:8E:98:D6:C4:42:48:DB:23:BF:58:94:D2
Authority Information Access:
OCSP - URI:http://e6.o.lencr.org
CA Issuers - URI:http://e6.i.lencr.org/
X509v3 Subject Alternative Name: critical
DNS:helloworld.letsencrypt.org
X509v3 Certificate Policies:
Policy: 2.23.140.1.2.1
CT Precertificate SCTs:
Signed Certificate Timestamp:
Version : v1 (0x0)
Log ID : CC:FB:0F:6A:85:71:09:65:FE:95:9B:53:CE:E9:B2:7C:
22:E9:85:5C:0D:97:8D:B6:A9:7E:54:C0:FE:4C:0D:B0
Timestamp : Feb 19 18:28:32.078 2025 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:45:02:20:53:5D:E7:54:DF:48:D8:89:AC:EF:B6:F7:
8B:78:F4:2E:40:35:CE:28:0B:B8:13:53:37:FB:C6:79:
4D:96:12:AC:02:21:00:C2:2E:61:7E:20:BD:4A:C2:8B:
C6:54:D0:D2:C7:2D:53:18:4B:99:D6:21:E3:4A:FA:10:
25:90:4B:FC:96:C3:60
Signed Certificate Timestamp:
Version : v1 (0x0)
Log ID : E0:92:B3:FC:0C:1D:C8:E7:68:36:1F:DE:61:B9:96:4D:
0A:52:78:19:8A:72:D6:72:C4:B0:4D:A5:6D:6F:54:04
Timestamp : Feb 19 18:28:32.147 2025 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:46:02:21:00:AC:D8:DB:99:21:42:25:A0:E6:87:D6:
DF:5E:5C:32:9B:F1:B8:E8:58:44:81:3E:C2:B8:8B:60:
71:32:F1:08:B3:02:21:00:E3:0D:69:03:72:AF:56:24:
CE:B7:0D:53:3D:79:A8:65:74:A1:52:E0:E4:12:4D:FA:
29:16:C5:73:9D:71:11:C5
Signature Algorithm: ecdsa-with-SHA384
Signature Value:
30:65:02:30:30:0d:ab:2e:c6:d3:d0:08:c3:35:dc:77:b4:8d:
97:bb:85:c2:10:be:c6:57:dd:ba:fa:75:3d:e1:03:1d:cf:c5:
03:d2:b7:99:16:24:19:7b:8a:b7:33:5d:3a:1e:f0:70:02:31:
00:b8:c4:30:39:81:42:2c:17:6c:1e:38:ee:81:a4:69:90:1e:
d2:ba:b1:03:71:2d:35:5e:70:8f:4a:1b:78:e6:e5:ba:3f:cd:
81:4b:15:10:6f:4e:aa:20:48:a2:08:05:47

2025 marks ten years of Let’s Encrypt. Already this year we’ve taken steps to continue to deliver on our values of user privacy, efficiency, and innovation, all with the intent of continuing to deliver free TLS certificates to as many people as possible; to deliver encryption for everybody.
And while we’re excited about the technical progress we’ll make this year, we’re also going to celebrate this tenth anniversary by highlighting the people around the world who make our impact possible. It’s no small village.
From a community forum that has provided free technical support, to our roster of sponsors who provide vital funding, to the thousands of individual supporters who contribute financially to Let’s Encrypt each year, free TLS at Internet scale works because people have supported it year in, year out, for ten years.
Each month we’ll highlight a different set of people behind our “everybody.” Who do you want to see us highlight? What use cases of Let’s Encrypt have you seen that amazed you? What about our work do you hope we’ll continue or improve as we go forward? Let us know on LinkedIn, or drop a note to outreach@letsencrypt.org.
Encryption for Everybody is our unofficial tagline for this tenth anniversary year. What we love about it is that, yes, it captures our commitment to ensuring anyone around the world can easily get a cert for free. But more importantly, it captures the reality that technical innovation won’t work without people believing in it and supporting it. We’re grateful that, for ten years (and counting!), our community of supporters has made an impact on the lives of billions of Internet users—an impact that’s made the Web more secure and privacy-respecting for everybody, everywhere.
Internet Security Research Group (ISRG) is the parent organization of Let’s Encrypt, Prossimo, and Divvi Up. ISRG is a 501(c)(3) nonprofit. If you’d like to support our work, please consider getting involved, donating, or encouraging your company to become a sponsor.
Fri, 14 Feb 2025 00:00:00 +0000
Scaling Our Rate Limits to Prepare for a Billion Active Certificates
The Certificate Transparency ecosystem has been improving transparency for the web PKI since 2013. It helps make clear exactly what certificates each certificate authority has issued and makes sure errors or compromises of certificate authorities are detectable.
Let’s Encrypt participates in CT both as a certificate issuer and as a log operator. For the past year, we’ve also been running an experiment to help validate a next-generation design for Certificate Transparency logs. That experiment is now nearing a successful conclusion. We’ve demonstrated that the new architecture (called the “Static CT API”) works well, providing greater efficiency and making it easier to run huge and reliable CT log services with comparatively modest resources. The Static CT API also makes it easier to download and share data from CT logs.
The Sunlight log implementation, alongside other Static CT API log implementations, is now on a path to production use. Browsers are now officially accepting Static CT API logs into their log programs as a means to help guarantee that the contents of CA-issued certificates are all publicly disclosed and publicly accessible (see Safari’s and Chrome’s recent announcements), although the browsers also require the continued use of a traditional RFC 6962 log alongside the new type.
All of this is good news for everyone who runs, submits certificates to, or monitors a CT log: as the new architecture gets adopted, we can expect to see more organizations running more logs, at lower cost, and with greater overall capacity to keep up with the large volume of publicly-trusted certificates.
Certificate Transparency
Certificate Transparency (CT) was introduced in 2013 in response to concerns about how Internet users could detect misbehavior and compromise of certificate authorities. Prior to CT, it was possible for a CA to issue an inaccurate or malicious certificate that could be used to attack a relatively small number of users, and that might never come to wider attention. A team led by Google responded to this by creating a transparency log mechanism, where certificate authorities (like Let’s Encrypt) must disclose all of the certificates that we issue by submitting them to public log services. Web browsers now generally reject certificates unless the certificates include cryptographic proof (“Signed Certificate Timestamps”, or SCTs) demonstrating that they were submitted to and accepted by such logs.
The CT logs themselves use a cryptographic append-only ledger to prove that they haven’t deleted or modified their records. There are currently over a dozen CT log services, most of them also run by certificate authorities, including Let’s Encrypt’s own Oak log.
The Static CT API
The original 2013 CT log design has been used with relatively few technical changes since it was first introduced, but several other transparency logging systems have been created in other areas, such as sumdb for Golang, which helps ensure that the contents of Golang package updates are publicly recorded. While they were originally inspired by CT, more-recently invented transparency logs have improved on its design.
The current major evolution of CT was led by Filippo Valsorda, a cryptographer with an interest in transparency log mechanisms, with help from others in the CT ecosystem. Portions of the new design are directly based on sumdb. In addition to designing the new architecture, Valsorda also wrote the implementation that we’ve been using, called Sunlight, with support from Let’s Encrypt. We’re excited to see that there are now at least three other compatible implementations: Google’s trillian-tessera, Cloudflare’s Azul, and an independent project called Itko.
The biggest change for the Static CT API is that logs are now represented, and downloaded by verifiers, as simple collections of flat files (called “tiles,” so some implementers have also been referring to these as “tiled logs” or “tlogs”). Anyone who wants to download log data can do so just by downloading these files. This is great for log operators because these simple file downloads can be distributed in various ways, including caching by a CDN, which was less practical and efficient for the classic CT API.
The new design is also simpler and more efficient from the log operator’s perspective, making it cheaper to run logs. As we said last year, this may enable us and other operators to increase reliability and availability by running several separate logs, likely with lower overall resource requirements than a single traditional log.
Our Sunlight experiment

For the past year, we’ve run three Sunlight logs, called Twig, Willow, and Sycamore. We’ve been logging all of our own issued certificates, which represent a majority of the total volume of all publicly-trusted certificates, into our Sunlight logs. Sunlight logged these certificates quickly and correctly on relatively modest server hardware. Notably, each log’s write side was handled comfortably by just a single server. We also achieved high availability for these log services throughout the course of this experiment. (Because our Sunlight logs are not yet trusted by web browsers, we didn’t include the SCT proofs that they returned to us in the actual certificates we gave out to our subscribers; those proofs wouldn’t have been of use to our subscribers yet and would just have taken up space.)
A potential failure mode of traditional CT logs is that they could be unacceptably slow in incorporating newly-submitted certificates (known as missing the maximum merge delay), which can result in a log becoming distrusted. This isn’t a possibility for our new Sunlight-based logs: they always completely incorporate newly-submitted certificates before returning an SCT to the submitter, so the effective merge delay is zero! Of course, any log can suffer outages for a variety of reasons, but this feature of Sunlight makes it less likely that any outages will be fatal to a log’s continued operation.
We’ve demonstrated that Sunlight and the Static CT API work in practice, and this demonstration has helped to confirm the browser developers’ hope that Static CT API logs can become an officially-supported part of CT. As a result, the major browsers that enforce CT have now permitted Static CT API logs to apply for inclusion in browsers as publicly-trusted logs, and we’re preparing to apply for this status for our Willow and Sycamore logs with the Chrome and Safari CT log programs.
Let’s Encrypt will run at least these two logs, and possibly others over time, for the foreseeable future. Once they’re trusted by browsers, we’ll encourage other CAs to submit to them as well, and we’ll begin including SCTs from these logs in our own certificates (alongside SCTs from traditional CT logs).
How to participate
The new Static CT API and the rollout of tile-based logs will bring various changes and opportunities for community members.
New Certificate Transparency log operators
Companies and non-profit organizations could help support the web PKI by running a CT log and applying for it to be publicly trusted. Implementations like Sunlight will have substantially lower resource requirements than first-generation CT logs, particularly when cached behind a CDN. The biggest resource demands for a log operator will be storage and upstream bandwidth. A publicly-trusted log is also expected to maintain relatively high availability, because CAs need logs to be available in order to continue issuing certificates.
We don’t have statistics to share about the exact resource requirements for such a log yet, but after we have practical experience running a fully publicly-trusted Sunlight log, we should be able to make this more concrete. As noted above, the compute side of the log can be handled by a single server. Sunlight author Filippo Valsorda has recently started running a Sunlight log—also on just a single server—and offered more detailed cost breakdowns for that log’s setup, with an estimated total cost around $10,000 per year. The costs for our production Static CT API logs may be higher than those for Filippo’s log, but still far less than the costs for our traditional RFC 6962 logs.
As with trust decisions about CAs, browser developers are the authorities about which CT logs become publicly trusted. Although any person or organization can run a log, browser developers will generally prefer to trust logs whose continued availability they’re confident of—typically those run by stable organizations with experience running some form of public Internet services. Unlike becoming a certificate authority, running a log does not require a formal audit, as the validation of the log’s availability and correctness can be performed purely by observation.
Certificate authorities
Once the Willow and Sycamore logs are trusted by browsers, our fellow certificate authorities can choose to start logging certificates to them as part of their issuance processes. (Initially, you should still include at least one SCT from a traditional CT log in each certificate.) The details, including the log API endpoints and keys, are available at our CT log page. You can start submitting to these logs right away if you prefer; just bear in mind that the SCTs they return aren’t useful to subscribers yet, and won’t be useful until browsers are updated to trust the new logs.
CT data users
You can monitor CT in order to watch for certificate issuances for your own domain names, or as part of monitoring or security products or services, or for Internet security research purposes. Many of our colleagues have been doing this for some time as a part of various tools they maintain. The Static CT API should make this easier, because you’ll be able to download and share log tiles as sets of ordinary files.
If you already run such monitoring tools, please note that you’ll need to update your data pipeline in order to access Static CT API logs; since the read API is not backwards-compatible, CT API clients will need to be modified to support the new API. Without updated tools, your view of the CT system will become partial!
Also note that getting a complete view of all of CT will still require downloading data from traditional logs, which will probably continue to be true for several years.
Software developers
As logs based on the new API enter production use, it will be important to have tools to interact with and search these logs. We can all benefit from more software that understands how to do this. Since file downloads are such a familiar piece of software functionality, it will probably be easier for developers to develop against the new API compared to the original one.
We’ve also continued to see greater integration of transparency logging tools into other kinds of services, such as software updates. There’s a growing transparency log ecosystem that’s always in need of more tools and integrations. As we mentioned above, transparency logs are increasingly learning from one another, and there are also mechanisms for more direct integration between different kinds of transparency logs (known as “witnessing”). Software developers can help improve different aspects of Internet security by contributing to this active and growing area.
Conclusion
The Certificate Transparency community and larger transparency logging community have experienced a virtuous cycle of innovation, sharing ideas and implementation code between different systems and demonstrating the feasibility of new mechanisms and functionality. With the advent of tile-based logging in CT, the state of the art has moved forward in a way that helps log operators run our logs much more efficiently without compromising security.
We’re proud to have participated in this experiment and the engineering conversation around the evolution of logging architectures. Now that we’ve shown how well the new API really works at scale, we look forward to having publicly-trusted Sunlight logs later this year!
Every night, right around midnight (mainly UTC), a horde of zombies wakes up and clamors for … digital certificates!
The zombies in question are abandoned or misconfigured Internet servers and ACME clients that have been set to request certificates from Let’s Encrypt. As our certificates last for at most 90 days, these zombie clients’ software knows that their certificates are out-of-date and need to be replaced. What they don’t realize is that their quest for new certificates is doomed! These devices are cursed to seek certificates again and again, never receiving them.
But they do use up a lot of certificate authority resources in the process.
The Zombie Client Problem
Unlike a human being, software doesn’t give up in frustration, or try to modify its approach, when it repeatedly fails at the same task. Our emphasis on automation means that the vast majority of Let’s Encrypt certificate renewals are performed by automated software. This is great when those renewals succeed, but it also means that forgotten clients and devices can continue requesting renewals unsuccessfully for months, or even years.
How might that happen? Most often, it happens when a device no longer has a domain name pointed to it. The device itself doesn’t know that this has changed, so it treats renewal failures as transient even though they are actually permanent. For instance:
- An organization may have allowed a domain name registration to lapse because it is no longer needed, but its servers are still configured to request certs for it.
- Or, a home user stopped using a particular dynamic-DNS domain with a network-attached storage device, but is still using that device at home. The device doesn’t realize that the user no longer expects to use the name, so it keeps requesting certs for it.
- Or, a web hosting or CDN customer migrated to a different service provider, but never informed the old service provider. The old service provider’s servers keep requesting certs unsuccessfully. If the customer was in a free service tier, there might not be invoices or charges reminding the customer to cancel the service.
- Or any number of other, subtler changes in a subscriber’s infrastructure, such as changing a firewall rule or some webserver configuration.
At the scale of Let’s Encrypt, which now covers hundreds of millions of names, scenarios like these have become common, and their impact has become substantial. In 2024, we noticed that about half of all certificate requests to the Let’s Encrypt ACME API came from about a million accounts that never successfully complete any validations. Many of these had completed validations and issued certificates sometime in the past, but nowadays every single one of their validation attempts fails, and they show no signs that this will change anytime soon.
Unfortunately, trying to validate those futile requests still uses resources. Our CA software has to generate challenges, reach out and attempt to validate them over the Internet, detect and report failures, and record all of the associated information in our databases and audit logs. And over time, we’ve seen more and more recurring failures: accounts that always fail their issuance requests have been growing at around 18% per year.
In January, we mentioned that we had been addressing the zombie client problem through our rate limit system. This post provides more detail on that progress.
Our Rate Limit Philosophy
If you’ve used Let’s Encrypt as a subscriber, you may have run into one of our rate limits at some point, maybe during your initial setup process. We have eight different kinds of rate limits in place now; as our January post describes, they’ve become more algorithmically sophisticated and grown to address a wider range of problems. A key principle for Let’s Encrypt is that our rate limiting is not a punishment. We don’t think of rate limits as a way of retaliating against a client for misbehavior. Rate limits are simply a tool to maximize the efficient use of our limited resources and prevent people and programs from using up those resources for no constructive purpose.
We’ve consistently tried to design our rate limit mechanisms in line with that philosophy. So if a misconfiguration or misunderstanding has caused excessive requests in the past, we’re still happy to welcome the user in question back and start issuing them certificates again—once the problem has been addressed. We want the rate limits to put a brake on wasteful use of our systems, but not to frustrate users who are actively trying to make Let’s Encrypt work for them.
In addition, we’ve always implemented our rate limits to err on the side of permissiveness. For example, if the Redis instances where rate limits are tracked have an outage or lose data, the system is designed to permit more issuance rather than less issuance as a result.
We wanted to create additional limits that would target zombie clients, but in a correspondingly non-punitive way that would avoid any disruption to valid issuance, and welcome subscribers back quickly if they happened to notice and fix a long-time problem with their setups.
Our Zombie-Related Rate Limits and Their Impact
In planning a new zombie-specific response, we decided on a “pausing” approach, which can temporarily limit an account’s ability to proceed with certificate requests. The core idea is that, if a particular account consistently fails to complete validation for a particular hostname, we’ll pause that account-hostname pair. The pause means that any new order requests from that account for that hostname will be rejected immediately, before we get to the resource-intensive validation phase.
This approach is more finely targeted than pausing an entire account. Pausing account-hostname pairs means that your ability to issue certs for a specific name could be paused due to repeated failures, but you can still get all of your other certs like normal. So a large hosting provider doesn’t have to fear that its certificate issuance on behalf of one customer will be affected by renewal failures related to a problem with a different customer’s domain name. The account-specificity of the pause, in turn, means that validation failures from one subscriber or device won’t prevent a different subscriber or device from attempting to validate the same name, as long as the devices in question don’t share a single Let’s Encrypt account.
In September 2024, we began applying our zombie rate limits manually by pausing about 21,000 of the most recurrently-failing account-hostname pairs, those which were consistently repeating the same failed requests many times per day, every day. After implementing that first round of pauses, we immediately saw a significant impact on our failed request rates. As we announced at that time, we also began using a formula to automatically pause other zombie client account-hostname pairs from December 2024 onward. The associated new rate limit is called “Consecutive Authorization Failures per Hostname Per Account” (and is independent of the existing “Authorization Failures per Hostname Per Account” limit, which resets every hour).
This formula relates to the frequency of successive failed issuance requests for the same domain name by the same Let’s Encrypt account. It applies only to failures that happen again and again, with no successful issuances at all in between: a single successful validation immediately resets the rate limit all the way to zero. Like all of our rate limits, this is not a punitive measure but is simply intended to reduce the waste of resources. So, we decided to set the thresholds rather high in the expectation that we would catch only the most disruptive zombie clients, and ultimately only those clients that were extremely unlikely to succeed in the future based on their substantial history of failed requests. We don’t hurry to block requesters as zombies: according to our current formula, client software following the default established by EFF’s Certbot (two renewal attempts per day) would be paused as a zombie only after about ten years of constant failures. More aggressive failed issuance attempts will get a client paused sooner, but clients will generally have to fail hundreds or thousands of attempts in a row before they are paused.
Most subscribers using mainstream client applications with default configurations will never encounter this rate limit, even if they forget to deactivate renewal attempts for domains that are no longer pointed at their servers. As described below, our current limit is already providing noticeable benefits with minimal disruption, and we’re likely to tighten it a bit in the near future, so it will trigger after somewhat fewer consecutive failures.
Self-Service Unpausing
A key feature in our zombie issuance pausing mechanism is self-service unpausing. Whenever an account-hostname pair is paused, any new certificate requests for that hostname submitted by that account are immediately rejected. But this means that the “one successful validation immediately resets the rate limit counter” feature can no longer come into effect: once they’re paused, they can’t even attempt validation anymore.
So every rejection comes with an error message explaining what has happened and a custom link that can be used to immediately unpause that account-hostname pair and remove any other pauses on the same account at the same time. The point of this is that subscribers who notice at some point that issuance is failing and want to intervene to get it working again have a straightforward option to let Let’s Encrypt know that they’re aware of the recurring failures and are still planning to use a particular account. As soon as subscribers notify us via the self-service link, they’ll be able to issue certificates again.
Currently, the user interface for an affected subscriber looks like this:

This link would be provided via an ACME error message in response to any request that was blocked due to a pause account-hostname pair.
As it’s turned out, the unpause option shown above has only been used by about 3% of affected accounts! This goes to show that most of the zombies we’ve paused were, in fact, well and truly forgotten about.
However, the unpause feature is there for whenever it’s needed, and there may be cases when it will become more important. A very large integration could trigger the zombie-related rate limits if a newly-introduced software bug causes what looks like a very high volume of zombie requests in a very short time. In that case, once that bug has been noticed and fixed, an integrator may need to unpause its issuance on behalf of lots of customers at once. Our unpause feature permits unpausing 50,000 domain names on a single account at a time, so even the largest integrators can get themselves unpaused expeditiously in this situation.
Conclusion
We’ve been very happy with the results of our zombie mitigation measures, and, as far as we can tell, there’s been almost no impact for subscribers! Our statistics indicate that we’ve managed to reduce the load on our infrastructure while causing no detectable harm or inconvenience to subscribers’ valid issuance requests.
Since implementing the manual pauses in September and the automated pauses in December, we’ve seen:
- Over 100,000 account-hostname pairs paused for excessive failures.
- Zero (!) associated complaints or support requests.
- About 3,200 accounts using the self-service link to unpause issuance.
- A roughly 30% drop in failed certificate orders so far, with further declines expected as we fine-tune the rate limit formula and catch more zombie clients.
The new rate limit and the self-service unpause system are also ready to deal with circumstances that might produce more zombie clients in the future. For instance, we’ve announced that we’re going to be discontinuing renewal reminder emails soon. If some subscribers overlook failed renewals in the future, we might see more paused clients that result from unintentional renewal failures. We think taking advantage of the existing self-service unpause feature will be straightforward in that case. But it’s much better to notice problems and get them fixed up front, so please remember to set up your own monitoring to avoid unnoticed renewal failures in the future.
If you’re a subscriber who’s had occasion to use the self-service unpause feature, we’d love your feedback on the Community Forum about your experience using the feature and the circumstances that surrounded your account’s getting paused.
Also, if you’re a Let’s Encrypt client developer, please remember to make renewal requests at a random time (not precisely at midnight) so that the load on our infrastructure is smoothed out. You can also reduce the impact of zombie renewals by repeating failed requests somewhat less frequently over time (a “back-off” strategy), especially if the failure reason makes it look like a domain name may no longer be in use at all.
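For illustration, here is a minimal sketch of such a back-off in Go; the base interval, ceiling, and jitter are assumptions for the example, not official guidance:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRetry suggests how long to wait before the next renewal attempt
// after n consecutive failures: start from a base interval, double it
// after each failure up to a ceiling, and add random jitter so retries
// don't all cluster at midnight. All values here are illustrative.
func nextRetry(n int) time.Duration {
	base := 12 * time.Hour
	ceiling := 7 * 24 * time.Hour
	delay := base
	for i := 0; i < n && delay < ceiling; i++ {
		delay *= 2
	}
	if delay > ceiling {
		delay = ceiling
	}
	jitter := time.Duration(rand.Int63n(int64(time.Hour)))
	return delay + jitter
}

func main() {
	for n := 0; n <= 4; n++ {
		fmt.Printf("after %d consecutive failures, wait about %s\n", n, nextRetry(n))
	}
}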
At Let’s Encrypt we know that building a secure Internet isn’t just a technical challenge—it’s a long-term commitment. Over the past decade we’ve made enormous strides: from issuing billions of TLS certificates to continually innovating to keep the web safer and more accessible. But none of this would be possible without recurring donations from individuals and organizations around the world.
Recurring donations are more than just financial support; they allow us to plan, innovate, and keep improving with confidence, knowing that month after month, year after year, our supporters are there. This consistent backing empowers us to maintain a secure, privacy-respecting Internet for all.
Our tenth anniversary tagline, Encryption for Everybody, highlights this vision. It’s both a technical goal and a fundamental belief that secure communication should be available to everyone, everywhere.
When we asked our recurring donors why they give, their responses affirmed how essential this commitment is. One longtime supporter shared:
Supporting Let's Encrypt aligns with my belief in a privacy-conscious world, where encrypted communication is the default.
For some, it’s about paying it forward, helping future users benefit as they once did:
For my 18th birthday, I got my last name as a domain. As a young tech enthusiast with little money, Let's Encrypt made it possible for me to get a TLS certificate and learn about technology. Back then, I was a student using it for free. Now that I have a stable income, donating is my way of giving back and helping others have the same opportunities I did.
The next decade of Let’s Encrypt will likely be about maintaining that commitment to encryption for everybody. It’s about ensuring that our work remains reliable, accessible, and—most importantly—supported by people who believe in what we do. To everyone who’s been part of this journey, thank you. We couldn’t do it without you.
During Let’s Encrypt’s 10th Anniversary Year, we’re celebrating our community and reflecting on our journey. We’d be thrilled to hear from you. Connect with us on LinkedIn, our community forum, or email us at outreach@letsencrypt.org. Let’s keep building a secure Internet together!
Let’s Encrypt is a project of the nonprofit Internet Security Research Group, a 501(c)(3) nonprofit. To support our work, visit letsencrypt.org/donate.
Let’s Encrypt will no longer include the “TLS Client Authentication” Extended Key Usage (EKU) in our certificates beginning in 2026. Most users who use Let’s Encrypt to secure websites won’t be affected and won’t need to take any action. However, if you use Let’s Encrypt certificates as client certificates to authenticate to a server, this change may impact you.
To minimize disruption, Let’s Encrypt will roll this change out in multiple stages, using ACME Profiles:
- Today: Let’s Encrypt already excludes the Client Authentication EKU on our tlsserver ACME profile. You can verify compatibility by issuing certificates with this profile now.
- October 1, 2025: Let’s Encrypt will launch a new tlsclient ACME profile which will retain the TLS Client Authentication EKU. Users who need additional time to migrate can opt in to this profile.
- February 11, 2026: the default classic ACME profile will no longer contain the Client Authentication EKU.
- May 13, 2026: the tlsclient ACME profile will no longer be available, and no further certificates with the Client Authentication EKU will be issued.
Once this is completed, Let’s Encrypt will switch to issuing with new intermediate Certificate Authorities which also do not contain the TLS Client Authentication EKU.
For some background: all certificates include a list of intended uses, known as Extended Key Usages (EKUs). Let’s Encrypt certificates have included two EKUs: TLS Server Authentication and TLS Client Authentication.
- TLS Server Authentication is used to authenticate connections to TLS Servers, like websites.
- TLS Client Authentication is used by clients to authenticate themselves to a server. This feature is not typically used on the web, and is not required on the certificates used on a website.
After this change is complete, only TLS Server Authentication will be available from Let’s Encrypt.
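If you want to check which EKUs a particular certificate carries, one option is OpenSSL’s -ext flag (available in OpenSSL 1.1.1 and later); a certificate that still carries the client EKU will list TLS Web Client Authentication in the output:

openssl x509 -in cert.pem -noout -ext extendedKeyUsage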
This change is prompted by changes to Google Chrome’s root program requirements, which impose a June 2026 deadline to split TLS Client and Server Authentication into separate PKIs. Many uses of client authentication are better served by a private certificate authority, and so Let’s Encrypt is discontinuing support for TLS Client Authentication ahead of this deadline.
How Pebble Supports ACME Client Developers
Together with the IETF community, we created the ACME standard to support completely automated certificate issuance. This open standard is now supported by dozens of clients. On the server side, did you know that we have not one but two open-source ACME server implementations?
The big implementation, which we use ourselves in production, is called Boulder. Boulder handles all of the facets and details needed for a production certificate authority, including policy compliance, database interfaces, challenge verifications, and logging. You can adapt and use Boulder yourself if you need to run a real certificate authority, including an internal, non-publicly-trusted ACME certificate authority within an organization.
The small implementation is called Pebble. It’s meant entirely for testing, not for use as a real certificate authority, and we and ACME client developers use it for various automated and manual testing purposes. For example, Certbot has used Pebble in its development process for years in order to perform a series of basic but realistic checks of the ability to request and obtain certificates from an ACME server.
Pebble is Easy to Use for ACME Client Testing
For any developer or team creating an ACME client application, Pebble solves a range of problems along the lines of “how do I check whether I’ve implemented ACME correctly, so that I could actually get certificates from a CA, without necessarily using a real domain name, and without running into CA rate limits during my routine testing?” Pebble is quick and easy to set up if you need to test an ACME client’s functionality.
It runs in RAM without dependencies or persistence; you won’t need to set up a database or a configuration for it. You can get Pebble running with a single golang command in just a few seconds, and immediately start making local ACME requests. That’s suitable for inclusion in a client’s integration test suite, making much more realistic integration tests possible without needing to worry about real domains, CA rate limits, or network outages.
We see Pebble getting used in the official test suites for ACME clients including getssl, Lego, Certbot, simp_le, and others. In many cases, every change committed to the ACME client’s code base is automatically tested against Pebble.
Pebble is Intentionally Different From Boulder
Pebble is also deliberately different from Boulder in some places in order to provide clients with an opportunity to interoperate with slightly different ACME implementations. The Pebble code explains that
[I]n places where the ACME specification allows customization/CA choice Pebble aims to make choices different from Boulder. For instance, Pebble changes the path structures for its resources and directory endpoints to differ from Boulder. The goal is to emphasize client specification compatibility and to avoid "over-fitting" on Boulder and the Let's Encrypt production service.
For instance, the Let’s Encrypt service currently offers its newAccount resource at the path /acme/new-acct, whereas Pebble uses the different name /sign-me-up, so clients will be reminded to check the directory rather than assuming a specific path. Other substantive differences include:
- Pebble rejects 5% of all requests as having an invalid nonce, even if the nonce was otherwise valid, so clients can test how they respond to this error condition
- Pebble only reuses valid authorizations 50% of the time, so clients can check their ability to perform validations when they might not have expected to
- Pebble truncates timestamps to a different degree of precision than Boulder
- Unlike Boulder, Pebble respects the notBefore and notAfter fields of new-order requests
The ability of ACME clients to work with both versions is a good test of their conformance to the ACME specification, rather than making assumptions about the current behavior of the Let’s Encrypt service in particular. This helps ensure that clients will work properly with other ACME CAs, and also with future versions of Let’s Encrypt’s own API.
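To illustrate what directory-checking looks like in practice, here is a minimal Go sketch that fetches the directory from a local Pebble instance and reads the newAccount URL at runtime; the field names follow RFC 8555, and certificate verification is skipped only because Pebble serves HTTPS with a throwaway test certificate:

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

// Directory mirrors the resource URLs returned by an ACME server (RFC 8555, §7.1.1).
type Directory struct {
	NewNonce   string `json:"newNonce"`
	NewAccount string `json:"newAccount"`
	NewOrder   string `json:"newOrder"`
	RevokeCert string `json:"revokeCert"`
	KeyChange  string `json:"keyChange"`
}

func main() {
	// Pebble uses a self-signed test certificate, so skip verification here.
	// Never do this when talking to a real CA.
	c := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := c.Get("https://localhost:14000/dir")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var dir Directory
	if err := json.NewDecoder(resp.Body).Decode(&dir); err != nil {
		panic(err)
	}
	fmt.Println("newAccount is at:", dir.NewAccount) // discovered, not hardcoded
}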
Pebble is Useful to Both Let’s Encrypt and Client Developers as ACME Evolves
We often test out new ACME features by implementing them, at least in a simplified form, in Pebble before Boulder. This lets us and client developers experiment with support for those features even before they get rolled out in our staging service. We can do this quickly because a Pebble feature implementation doesn’t have to work with a full-scale CA backend.
We continue to encourage ACME client developers to use a copy of Pebble to test their clients’ functionality and ACME interoperability. It’s convenient and it’s likely to increase the correctness and robustness of their client applications.
Try Out Pebble Yourself
Want to try Pebble with your ACME client right now? On a Unix-like system, you can run
git clone https://github.com/letsencrypt/pebble/
cd pebble
go run ./cmd/pebble
Wait a few seconds; now you have a working ACME CA directory available at https://localhost:14000/dir! Your local ACME server can immediately receive requests and issue certificates, though not publicly-trusted ones, of course. (If you prefer, we also offer other options for installing Pebble, like a Docker image.)
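You can peek at the directory with curl; the -k flag is needed only because Pebble’s HTTPS certificate is a self-signed test certificate (never use -k against a real CA):

curl -sk https://localhost:14000/dir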
We welcome code contributions to Pebble. For example, ACME client developers may want to add simple versions of an ACME feature that’s not currently tested in Pebble in order to make their test suites more comprehensive. Also, if you notice a possibly unintended divergence between Pebble and Boulder or Pebble and the ACME specification, we’d love for you to let us know.
As we touched on in our first blog post highlighting ten years of Let’s Encrypt: just as remarkable to us as the technical innovations behind proliferating TLS at scale is the sustained generosity we have benefited from throughout our first decade.
With that sense of gratitude top of mind, we are proud to announce a contribution of $1,000,000 from Jeff Atwood. Jeff has been a longtime supporter of our work, beginning many years ago with Discourse providing our community forum pro bono; something Discourse still provides to this day. As best we can tell, our forum has helped hundreds of thousands of people get up and running with Let’s Encrypt—an impact that has helped billions of people use an Internet that’s more secure and privacy-respecting thanks to widely adopted TLS.
When we first spoke with Jeff about the road ahead for Let’s Encrypt back in 2023, we knew a few things wouldn’t change no matter how the Internet changes over the next decade:
- Free TLS is the only way to ensure it is and remains accessible to as many people as possible.
- Let’s Encrypt is here to provide a reliable, trusted, and sound service no matter the scale.
- Generosity from our global community of supporters will be how we sustain our work.
We’re proud that Jeff not only agrees, but has chosen to support us in such a meaningful way. In discussing how Jeff might want us to best celebrate his generosity and recognize his commitment to our work, he shared:
Let's Encrypt is a golden example of how creating inalienable good is possible with the right approach and the right values. And while I'm excited about the work Let's Encrypt has done, I am eager to see their work continue to keep up with the growing Web; to sustain encryption for everybody at Internet scale. To do so is going to take more than me—it's going to take a community of people committed to this work. I am confident Let's Encrypt is a project that deserves all of our support, in ways both large and small.
Indeed, this contribution is significant because of its scale, but more importantly because of its signal: a signal that supporting the not-so-glamorous but oh-so-nerdy work of encryption at scale matters to the lives of billions of people every day; a signal that supporting free privacy and security afforded by TLS for all of the Internet’s five billion users just makes sense.
Ten years ago we set out to build a better Internet through easy to use TLS. If you or your organization have supported us throughout the years, thank you for joining Jeff in believing in the work of Let’s Encrypt. For a deeper dive into the impact of Let’s Encrypt and ISRG’s other projects, take a look at our most recent annual report.
Let’s Encrypt is a project of the nonprofit Internet Security Research Group, a 501(c)(3) nonprofit committed to protecting Internet users by lowering monetary, technological, and informational barriers to a more secure and privacy-respecting Internet. For more, visit abetterinternet.org. Press inquiries can be sent to press@abetterinternet.org
Earlier this year we announced our intention to introduce short-lived certificates with lifetimes of six days as an option for our subscribers. Yesterday we issued our first short-lived certificate. You can see the certificate at the bottom of our post, or here thanks to Certificate Transparency logs. We issued it to ourselves and then immediately revoked it so we can observe the certificate’s whole lifecycle. This is the first step towards making short-lived certificates available to all subscribers.
The next step is for us to make short-lived certificates available to a small set of our subscribers so we can make sure our systems scale as expected prior to general availability. We expect this next phase to begin during Q2 of this year.
We expect short-lived certificates to be generally available by the end of this year.
How To Get Six-Day Certificates
Once short-lived certificates are an option for you, you’ll need to use an ACME client that supports ACME certificate profiles and select the short-lived certificate profile (“shortlived”). The lego client recently added this functionality.
In the meantime, the best way to prepare to take advantage of short-lived certificates is to make sure your ACME client is reliably renewing certificates in an automated fashion. If that’s working well then there should be no costs to switching to short-lived certificates.
You’ll also want to be sure your ACME client is running frequently - both for the sake of renewing short-lived certificates and so as to take advantage of ACME Renewal Information (ARI). ARI allows Let’s Encrypt to notify your client if it should renew early for some reason. ARI checks should happen at least once per day, and short-lived certificates should be renewed every two to three days, so we recommend having your client run at least once per day.
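For example, a crontab entry along these lines (assuming Certbot as the client; adapt the command to your own setup) runs twice a day at a randomized offset:

0 0,12 * * * perl -e 'sleep int(rand(3600))' && certbot renew -q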
Shorter Certificate Lifetimes Are Good for Security
When the private key associated with a certificate is compromised, the recommendation has always been to have the certificate revoked so that people will know not to use it. Unfortunately, certificate revocation doesn’t work very well. This means that certificates with compromised keys (or other issues) may continue to be used until they expire. The longer the lifetime of the certificate, the longer the potential for use of a problematic certificate.
The primary advantage of short-lived certificates is that they greatly reduce the potential compromise window because they expire relatively quickly. This reduces the need for certificate revocation, which has historically been unreliable. Our six-day certificates will not include OCSP or CRL URLs. Additionally, short-lived certificates practically require automation, and we believe that automating certificate issuance is important for security.
Questions
If you have questions or comments about our plans, feel free to let us know on our community forums.
We’d like to thank Open Technology Fund for supporting this work.
Our First 6-Day Certificate
PEM format:
-----BEGIN CERTIFICATE-----
MIIDSzCCAtGgAwIBAgISA7CwFcGk4mQWEXMacRtxHeDvMAoGCCqGSM49BAMDMDIx
CzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MQswCQYDVQQDEwJF
NjAeFw0yNTAyMTkxNzMwMDFaFw0yNTAyMjYwOTMwMDBaMAAwWTATBgcqhkjOPQIB
BggqhkjOPQMBBwNCAAQoSItt2V1aocI5dxrKR8iLfmm0KiVvOhiwKByzu2kLeC7C
0BdfAgtwdICdkuEhAXokhXLq6DNZZgmh5T4flVwZo4IB9zCCAfMwDgYDVR0PAQH/
BAQDAgeAMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwHwYDVR0j
BBgwFoAUkydGmAOpUWiOmNbEQkjbI79YlNIwVQYIKwYBBQUHAQEESTBHMCEGCCsG
AQUFBzABhhVodHRwOi8vZTYuby5sZW5jci5vcmcwIgYIKwYBBQUHMAKGFmh0dHA6
Ly9lNi5pLmxlbmNyLm9yZy8wKAYDVR0RAQH/BB4wHIIaaGVsbG93b3JsZC5sZXRz
ZW5jcnlwdC5vcmcwEwYDVR0gBAwwCjAIBgZngQwBAgEwggEFBgorBgEEAdZ5AgQC
BIH2BIHzAPEAdgDM+w9qhXEJZf6Vm1PO6bJ8IumFXA2XjbapflTA/kwNsAAAAZUf
d/zOAAAEAwBHMEUCIFNd51TfSNiJrO+294t49C5ANc4oC7gTUzf7xnlNlhKsAiEA
wi5hfiC9SsKLxlTQ0sctUxhLmdYh40r6ECWQS/yWw2AAdwDgkrP8DB3I52g2H95h
uZZNClJ4GYpy1nLEsE2lbW9UBAAAAZUfd/0TAAAEAwBIMEYCIQCs2NuZIUIloOaH
1t9eXDKb8bjoWESBPsK4i2BxMvEIswIhAOMNaQNyr1YkzrcNUz15qGV0oVLg5BJN
+ikWxXOdcRHFMAoGCCqGSM49BAMDA2gAMGUCMDANqy7G09AIwzXcd7SNl7uFwhC+
xlfduvp1PeEDHc/FA9K3mRYkGXuKtzNdOh7wcAIxALjEMDmBQiwXbB447oGkaZAe
0rqxA3EtNV5wj0obeObluj/NgUsVEG9OqiBIoggFRw==
-----END CERTIFICATE-----
openssl x509 -text output:
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
03:b0:b0:15:c1:a4:e2:64:16:11:73:1a:71:1b:71:1d:e0:ef
Signature Algorithm: ecdsa-with-SHA384
Issuer: C=US, O=Let's Encrypt, CN=E6
Validity
Not Before: Feb 19 17:30:01 2025 GMT
Not After : Feb 26 09:30:00 2025 GMT
Subject:
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
04:28:48:8b:6d:d9:5d:5a:a1:c2:39:77:1a:ca:47:
c8:8b:7e:69:b4:2a:25:6f:3a:18:b0:28:1c:b3:bb:
69:0b:78:2e:c2:d0:17:5f:02:0b:70:74:80:9d:92:
e1:21:01:7a:24:85:72:ea:e8:33:59:66:09:a1:e5:
3e:1f:95:5c:19
ASN1 OID: prime256v1
NIST CURVE: P-256
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature
X509v3 Extended Key Usage:
TLS Web Server Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Authority Key Identifier:
93:27:46:98:03:A9:51:68:8E:98:D6:C4:42:48:DB:23:BF:58:94:D2
Authority Information Access:
OCSP - URI:http://e6.o.lencr.org
CA Issuers - URI:http://e6.i.lencr.org/
X509v3 Subject Alternative Name: critical
DNS:helloworld.letsencrypt.org
X509v3 Certificate Policies:
Policy: 2.23.140.1.2.1
CT Precertificate SCTs:
Signed Certificate Timestamp:
Version : v1 (0x0)
Log ID : CC:FB:0F:6A:85:71:09:65:FE:95:9B:53:CE:E9:B2:7C:
22:E9:85:5C:0D:97:8D:B6:A9:7E:54:C0:FE:4C:0D:B0
Timestamp : Feb 19 18:28:32.078 2025 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:45:02:20:53:5D:E7:54:DF:48:D8:89:AC:EF:B6:F7:
8B:78:F4:2E:40:35:CE:28:0B:B8:13:53:37:FB:C6:79:
4D:96:12:AC:02:21:00:C2:2E:61:7E:20:BD:4A:C2:8B:
C6:54:D0:D2:C7:2D:53:18:4B:99:D6:21:E3:4A:FA:10:
25:90:4B:FC:96:C3:60
Signed Certificate Timestamp:
Version : v1 (0x0)
Log ID : E0:92:B3:FC:0C:1D:C8:E7:68:36:1F:DE:61:B9:96:4D:
0A:52:78:19:8A:72:D6:72:C4:B0:4D:A5:6D:6F:54:04
Timestamp : Feb 19 18:28:32.147 2025 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:46:02:21:00:AC:D8:DB:99:21:42:25:A0:E6:87:D6:
DF:5E:5C:32:9B:F1:B8:E8:58:44:81:3E:C2:B8:8B:60:
71:32:F1:08:B3:02:21:00:E3:0D:69:03:72:AF:56:24:
CE:B7:0D:53:3D:79:A8:65:74:A1:52:E0:E4:12:4D:FA:
29:16:C5:73:9D:71:11:C5
Signature Algorithm: ecdsa-with-SHA384
Signature Value:
30:65:02:30:30:0d:ab:2e:c6:d3:d0:08:c3:35:dc:77:b4:8d:
97:bb:85:c2:10:be:c6:57:dd:ba:fa:75:3d:e1:03:1d:cf:c5:
03:d2:b7:99:16:24:19:7b:8a:b7:33:5d:3a:1e:f0:70:02:31:
00:b8:c4:30:39:81:42:2c:17:6c:1e:38:ee:81:a4:69:90:1e:
d2:ba:b1:03:71:2d:35:5e:70:8f:4a:1b:78:e6:e5:ba:3f:cd:
81:4b:15:10:6f:4e:aa:20:48:a2:08:05:47

2025 marks ten years of Let’s Encrypt. Already this year we’ve taken steps to continue to deliver on our values of user privacy, efficiency, and innovation, all with the intent of continuing to deliver free TLS certificates to as many people as possible; to deliver encryption for everybody.
And while we’re excited about the technical progress we’ll make this year, we’re also going to celebrate this tenth anniversary by highlighting the people around the world who make our impact possible. It’s no small village.
From a community forum that has provided free technical support, to our roster of sponsors who provide vital funding, to the thousands of individual supporters who contribute financially to Let’s Encrypt each year, free TLS at Internet scale works because people have supported it year in, year out, for ten years.
Each month we’ll highlight a different set of people behind our “everybody.” Who do you want to see us highlight? What use cases of Let’s Encrypt have you seen that amazed you? What about our work do you hope we’ll continue or improve as we go forward? Let us know on LinkedIn, or drop a note to outreach@letsencrypt.org.
Encryption for Everybody is our unofficial tagline for this tenth anniversary year. What we love about it is that, yes, it captures our commitment to ensuring anyone around the world can easily get a cert for free. But more importantly, it captures the reality that technical innovation won’t work without people believing in it and supporting it. We’re grateful that, for ten years (and counting!), our community of supporters has made an impact on the lives of billions of Internet users—an impact that’s made the Web more secure and privacy-respecting for everybody, everywhere.
Internet Security Research Group (ISRG) is the parent organization of Let’s Encrypt, Prossimo, and Divvi Up. ISRG is a 501(c)(3) nonprofit. If you’d like to support our work, please consider getting involved, donating, or encouraging your company to become a sponsor.
Let’s Encrypt protects a vast portion of the Web by providing TLS certificates to over 550 million websites—a figure that has grown by 42% in the last year alone. We currently issue over 340,000 certificates per hour. To manage this immense traffic and maintain responsiveness under high demand, our infrastructure relies on rate limiting. In 2015, we introduced our first rate limiting system, built on MariaDB. It evolved alongside our rapidly growing service but eventually revealed its limits: straining database servers, forcing long reset times on subscribers, and slowing down every request.
We needed a solution built for the future—one that could scale with demand, reduce the load on MariaDB, and adapt to real-world subscriber request patterns. The result was a new rate limiting system powered by Redis and a proven virtual scheduling algorithm from the mid-90s: efficient, scalable, and capable of handling over a billion active certificates.
Rate Limiting a Free Service is Hard
In 2015, Let’s Encrypt was in early preview, and we faced a unique challenge. We were poised to become incredibly popular, offering certificates freely and without requiring contact information or email verification. Ensuring fair usage and preventing abuse without traditional safeguards demanded an atypical approach to rate limiting.
We decided to limit the number of certificates issued—per week—for each registered domain. Registered domains are a limited resource with real costs, making them a natural and effective basis for rate limiting—one that mirrors the structure of the Web itself. Specifically, this approach targets the effective Top-Level Domain (eTLD), as defined by the Public Suffix List (PSL), plus one additional label to the left. For example, in new.blog.example.co.uk, the eTLD is .co.uk, making example.co.uk the eTLD+1.
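As a quick illustration, Go’s publicsuffix package (golang.org/x/net/publicsuffix), which embeds the PSL, computes the eTLD+1 directly:

package main

import (
	"fmt"

	"golang.org/x/net/publicsuffix"
)

func main() {
	// EffectiveTLDPlusOne applies the PSL rules described above.
	etldPlusOne, err := publicsuffix.EffectiveTLDPlusOne("new.blog.example.co.uk")
	if err != nil {
		panic(err)
	}
	fmt.Println(etldPlusOne) // prints: example.co.uk
}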
Counting Events Was Easy
For each successfully issued certificate, we logged an entry in a table that recorded the registered domain, the issuance date, and other relevant details. To enforce rate limits, the system scanned this table, counted the rows matching a given registered domain within a specific time window, and compared the total to a configured threshold. This simple design formed the basis for all future rate limits.
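In outline, the check was a windowed COUNT compared against a threshold. Here is a sketch in Go using database/sql; the table and column names are hypothetical, not Boulder’s actual schema:

package ratelimit

import (
	"database/sql"
	"time"
)

// countRecentIssuances counts certificates issued for a registered domain
// within the given window; the caller compares the count to a configured
// threshold. Table and column names are illustrative only.
func countRecentIssuances(db *sql.DB, regDomain string, window time.Duration) (int, error) {
	var n int
	err := db.QueryRow(
		"SELECT COUNT(*) FROM certificates WHERE registered_domain = ? AND issued_at > ?",
		regDomain, time.Now().Add(-window),
	).Scan(&n)
	return n, err
}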
Counting a Lot of Events Got Expensive
By 2019, we had added six new rate limits to protect our infrastructure as demand for certificates surged. Enforcing these limits required frequent scans of database tables to count recent matching events. These operations, especially on our heavily-used authorizations table, caused significant overhead, with reads outpacing all other tables—often by an order of magnitude.
Rate limit calculations were performed early in request processing and often. Counting rows in MariaDB, particularly for accounts with rate limit overrides, was inherently expensive and quickly became a scaling bottleneck.
Adding new limits required careful trade-offs. Decisions about whether to reuse existing schema, optimize indexes, or design purpose-built tables helped balance performance, complexity, and long-term maintainability.
Buying Runway — Offloading Reads
In late 2021, we updated our control plane and Boulder—our in-house CA software—to route most API reads, including rate limit checks, to database replicas. This reduced the load on the primary database and improved its overall health. At the same time, however, latency of rate limit checks during peak hours continued to rise, highlighting the limitations of scaling reads alone.
Sliding Windows Got Frustrating
Subscribers were frequently hitting rate limits unexpectedly, leaving them unable to request certificates for days. This issue stemmed from our use of relatively large rate limiting windows—most spanning a week. Subscribers could deplete their entire limit in just a few moments by repeating the same request, and find themselves locked out for the remainder of the week. This approach was inflexible and disruptive, causing unnecessary frustration and delays.
In early 2022, we patched the Duplicate Certificate limit to address this rigidity. Using a naive token-bucket approach, we allowed users to “earn back” requests incrementally, cutting the wait time—once rate limited—to about 1.4 days. The patch worked by fetching recent issuance timestamps and calculating the time between them to grant requests based on the time waited. This change also allowed us to include a Retry-After timestamp in rate limited responses. While this improved the user experience for this one limit, we understood it to be a temporary fix for a system in need of a larger overhaul.
When a Problem Grows Large Enough, It Finds the Time for You
Setting aside time for a complete overhaul of our rate-limiting system wasn’t easy. Our development team, composed of just three permanent engineers, typically juggles several competing priorities. Yet by 2023, our flagging rate limits code had begun to endanger the reliability of our MariaDB databases.
Our authorizations table was now regularly read an order of magnitude more than any other. Individually identifying and deleting unnecessary rows—or specific values—had proved unworkable due to poor MariaDB delete performance. Storage engines like InnoDB must maintain indexes, foreign key constraints, and transaction logs for every deletion, which significantly increases overhead for concurrent transactions and leads to gruelingly slow deletes.
Our SRE team automated the cleanup of old rows for many tables using the PARTITION command, which worked well for bookkeeping and compliance data. Unfortunately, we couldn’t apply it to most of our purpose-built rate limit tables. These tables depend on ON DUPLICATE KEY UPDATE, a mechanism that requires the targeted column to be a unique index or primary key, while partitioning demands that the primary key be included in the partitioning key.
Indexes on these tables—such as those tracking requested hostnames—often grew larger than the tables themselves and, in some cases, exceeded the memory of our smaller staging environment databases, eventually forcing us to periodically wipe them entirely.
By late 2023, this cascading confluence of complexities required a reckoning. We set out to design a rate limiting system built for the future.
The Solution: Redis + GCRA
We designed a system from the ground up that combines Redis for storage and the Generic Cell Rate Algorithm (GCRA) for managing request flow.
Why Redis?
Our engineers were already familiar with Redis, having recently deployed it to cache and serve OCSP responses. Its high throughput and low latency made it a candidate for tracking rate limit state as well.
By moving this data from MariaDB to Redis, we could eliminate the need for ever-expanding, purpose-built tables and indexes, significantly reducing read and write pressure. Redis’s feature set made it a perfect fit for the task. Most rate limit data is ephemeral—after a few days, or sometimes just minutes, it becomes irrelevant unless the subscriber calls us again. Redis’s per-key Time-To-Live would allow us to expire this data the moment it was no longer needed.
Redis also supports atomic integer operations, enabling fast, reliable counter updates, even when increments occur concurrently. Its “set if not exist” functionality ensures efficient initialization of keys, while pipeline support allows us to get and set multiple keys in bulk. This combination of familiarity, speed, simplicity, and flexibility made Redis the natural choice.
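Expressed as raw Redis commands (the key names are hypothetical), the primitives described above look like this:

# create a key only if it doesn't exist yet, with a time-to-live
SET tat:42:example.com 1735689600 NX EX 604800
# atomically increment a counter, safe under concurrent writers
INCR failures:42:example.com
# fetch several keys in one round trip
MGET tat:42:example.com tat:42:example.net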
Why GCRA?
The Generic Cell Rate Algorithm (GCRA) is a virtual scheduling algorithm originally designed for telecommunication networks to regulate traffic and prevent congestion. Unlike traditional sliding window approaches that work in fixed time blocks, GCRA enforces rate limits continuously, making it well-suited to our goals.
A rate limit in GCRA is defined by two parameters: the emission interval and the burst tolerance. The emission interval specifies the minimum time that must pass between consecutive requests to maintain a steady rate. For example, an emission interval of one second allows one request per second on average. The burst tolerance determines how much unused capacity can be drawn on to allow short bursts of requests beyond the steady rate.
When a request is received, GCRA compares the current time to the Theoretical Arrival Time (TAT), which indicates when the next request is allowed under the steady rate. If the current time is greater than or equal to the TAT, the request is permitted, and the TAT is updated by adding the emission interval. If the current time plus the burst tolerance is greater than or equal to the TAT, the request is also permitted. In this case, the TAT is updated by adding the emission interval, reducing the remaining burst capacity.
However, if the current time plus the burst tolerance is less than the TAT, the request exceeds the rate limit and is denied. Conveniently, the difference between the TAT and the current time can then be returned to the subscriber in a Retry-After header, informing their client exactly how long to wait before trying again.
To illustrate, consider a rate limit of one request per second (emission interval = 1s) with a burst tolerance of three requests. Up to three requests can arrive back-to-back, but subsequent requests will be delayed until “now” catches up to the TAT, ensuring that the average rate over time remains one request per second.
What sets GCRA apart is its ability to automatically refill capacity gradually and continuously. Unlike sliding windows, where users must wait for an entire time block to reset, GCRA allows users to retry as soon as enough time has passed to maintain the steady rate. This dynamic pacing reduces frustration and provides a smoother, more predictable experience for subscribers.
GCRA is also storage and computationally efficient. It requires tracking only the TAT—stored as a single Unix timestamp—and performing simple arithmetic to enforce limits. This lightweight design allows it to scale to handle billions of requests, with minimal computational and memory overhead.
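Here is a compact Go sketch of the decision logic just described. The parameters mirror the example above, and in production the TAT would live in Redis rather than in a local variable:

package main

import (
	"fmt"
	"time"
)

// gcra applies the check described above for a single rate limit key.
// tat is the stored Theoretical Arrival Time (the zero value means the
// key has never been seen). It returns whether the request is allowed,
// the new TAT to store, and how long to wait when denied.
func gcra(now, tat time.Time, emission, burst time.Duration) (allowed bool, newTAT time.Time, retryAfter time.Duration) {
	if tat.Before(now) {
		tat = now // idle keys start fresh; unused capacity refills continuously
	}
	if now.Add(burst).Before(tat) {
		// Denied: report when this request would first be admitted.
		return false, tat, tat.Sub(now.Add(burst))
	}
	return true, tat.Add(emission), 0
}

func main() {
	emission := time.Second  // steady rate: one request per second
	burst := 2 * time.Second // allows three back-to-back requests, as above

	var tat time.Time
	now := time.Now()
	for i := 1; i <= 5; i++ {
		ok, next, wait := gcra(now, tat, emission, burst)
		if ok {
			tat = next
		}
		fmt.Printf("request %d: allowed=%t retryAfter=%s\n", i, ok, wait)
	}
}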
The Results: Faster, Smoother, and More Scalable
The transition to Redis and GCRA brought immediate, measurable improvements. We cut database load, improved response times, and delivered consistent performance even during periods of peak traffic. Subscribers now experience smoother, more predictable behavior, while the system’s increased permissiveness allows for certificates that the previous approach would have delayed—all achieved without sacrificing scalability or fairness.
Rate Limit Check Latency
Check latency is the extra time added to each request while verifying rate limit compliance. Under the old MariaDB-based system, these checks slowed noticeably during peak traffic, when database contention caused significant delays. Our new Redis-based system dramatically reduced this overhead. The high-traffic “new-order” endpoint saw the greatest improvement, while the “new-account” endpoint—though considerably lighter in traffic—also benefited, especially callers with IPv6 addresses. These results show that our subscribers now experience consistent response times, even under peak load.
Database Health
Our once strained database servers are now operating with ample headroom. In total, MariaDB operations have dropped by 80%, improving responsiveness, reducing contention, and freeing up resources for mission-critical issuance workflows.
Buffer pool requests have decreased by more than 50%, improving caching efficiency and reducing overall memory pressure.
Reads of the authorizations table—a notorious bottleneck—have dropped by over 99%. Previously, this table outpaced all others by more than two orders of magnitude; now it ranks second, just narrowly surpassing our third most-read table.
Tracking Zombie Clients
In late 2024, we turned our new rate limiting system toward a longstanding challenge: “zombie clients.” These requesters repeatedly attempt to issue certificates but fail, often because of expired domains or misconfigured DNS records. Together, they generate nearly half of all order attempts yet almost never succeed. We were able to build on this new infrastructure to record consecutive ACME challenge failures by account/domain pair and automatically “pause” this problematic issuance. The result has been a considerable reduction in resource consumption, freeing database and network capacity without disrupting legitimate traffic.
Scalability on Redis
Before deploying the limits to track zombie clients, we maintained just over 12.6 million unique TATs across several Redis databases. Within 24 hours, that number more than doubled to 26 million, and by the end of the week, it peaked at over 30 million. Yet, even with this sharp increase, there was no noticeable impact on rate limit responsiveness. That’s all we’ll share for now about zombie clients—there’s plenty more to unpack, but we’ll save those insights and figures for a future blog post.
What’s Next?
Scaling our rate limits to keep pace with the growth of the Web is a huge achievement, but there’s still more to do. In the near term, many of our other ACME endpoints rely on load balancers to enforce per-IP limits, which works but gives us little control over the feedback provided to subscribers. We’re looking to deploy this new infrastructure across those endpoints as well. Looking further ahead, we’re exploring how we might redefine our rate limits now that we’re no longer constrained by a system that simply counts events between two points in time.
By adopting Redis and GCRA, we’ve built a flexible, efficient rate limit system that promotes fair usage and enables our infrastructure to handle ever-growing demand. We’ll keep adapting to the ever-evolving Web while honoring our primary goal: giving people the certificates they need, for free, in the most user-friendly way we can.
The Certificate Transparency ecosystem has been improving transparency for the web PKI since 2013. It helps make clear exactly what certificates each certificate authority has issued and makes sure errors or compromises of certificate authorities are detectable.
Let’s Encrypt participates in CT both as a certificate issuer and as a log operator. For the past year, we’ve also been running an experiment to help validate a next-generation design for Certificate Transparency logs. That experiment is now nearing a successful conclusion. We’ve demonstrated that the new architecture (called the “Static CT API”) works well, providing greater efficiency and making it easier to run huge and reliable CT log services with comparatively modest resources. The Static CT API also makes it easier to download and share data from CT logs.
The Sunlight log implementation, alongside other Static CT API log implementations, is now on a path to production use. Browsers are now officially accepting Static CT API logs into their log programs as a means to help guarantee that the contents of CA-issued certificates are all publicly disclosed and publicly accessible (see Safari’s and Chrome’s recent announcements), although the browsers also require the continued use of a traditional RFC 6962 log alongside the new type.
All of this is good news for everyone who runs, submits certificates to, or monitors a CT log: as the new architecture gets adopted, we can expect to see more organizations running more logs, at lower cost, and with greater overall capacity to keep up with the large volume of publicly-trusted certificates.
Certificate Transparency
Certificate Transparency (CT) was introduced in 2013 in response to concerns about how Internet users could detect misbehavior and compromise of certificate authorities. Prior to CT, it was possible for a CA to issue an inaccurate or malicious certificate that could be used to attack a relatively small number of users, and that might never come to wider attention. A team led by Google responded to this by creating a transparency log mechanism, where certificate authorities (like Let’s Encrypt) must disclose all of the certificates they issue by submitting them to public log services. Web browsers now generally reject certificates unless the certificates include cryptographic proof (“Signed Certificate Timestamps”, or SCTs) demonstrating that they were submitted to and accepted by such logs.
The CT logs themselves use a cryptographic append-only ledger to prove that they haven’t deleted or modified their records. There are currently over a dozen CT log services, most of them also run by certificate authorities, including Let’s Encrypt’s own Oak log.
The Static CT API
The original 2013 CT log design has been used with relatively few technical changes since it was first introduced, but several other transparency logging systems have been created in other areas, such as sumdb for Golang, which helps ensure that the contents of Golang package updates are publicly recorded. While they were originally inspired by CT, more-recently invented transparency logs have improved on its design.
The current major evolution of CT was led by Filippo Valsorda, a cryptographer with an interest in transparency log mechanisms, with help from others in the CT ecosystem. Portions of the new design are directly based on sumdb. In addition to designing the new architecture, Valsorda also wrote the implementation that we’ve been using, called Sunlight, with support from Let’s Encrypt. We’re excited to see that there are now at least three other compatible implementations: Google’s trillian-tessera, Cloudflare’s Azul, and an independent project called Itko.
The biggest change for the Static CT API is that logs are now represented, and downloaded by verifiers, as simple collections of flat files (called “tiles,” so some implementers have also been referring to these as “tiled logs” or “tlogs”). Anyone who wants to download log data can do so just by downloading these files. This is great for log operators because these simple file downloads can be distributed in various ways, including caching by a CDN, which was less practical and efficient for the classic CT API.
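For example, a monitor could poll a log’s signed checkpoint with a plain HTTP GET (the endpoint shape comes from the Static CT API specification; log.example is a placeholder):

curl -s https://log.example/checkpoint

The tiles themselves are fetched the same way, as ordinary files under the log’s tile path, which is what makes CDN caching straightforward.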
The new design is also simpler and more efficient from the log operator’s perspective, making it cheaper to run logs. As we said last year, this may enable us and other operators to increase reliability and availability by running several separate logs, likely with lower overall resource requirements than a single traditional log.
Our Sunlight experiment

For the past year, we’ve run three Sunlight logs, called Twig, Willow, and Sycamore. We’ve been logging all of our own issued certificates, which represent a majority of the total volume of all publicly-trusted certificates, into our Sunlight logs. Sunlight logged these certificates quickly and correctly on relatively modest server hardware. Notably, each log’s write side was handled comfortably by just a single server. We also achieved high availability for these log services throughout the course of this experiment. (Because our Sunlight logs are not yet trusted by web browsers, we didn’t include the SCT proofs that they returned to us in the actual certificates we gave out to our subscribers; those proofs wouldn’t have been of use to our subscribers yet and would just have taken up space.)
A potential failure mode of traditional CT logs is that they could be unacceptably slow in incorporating newly-submitted certificates (known as missing the maximum merge delay), which can result in a log becoming distrusted. This isn’t a possibility for our new Sunlight-based logs: they always completely incorporate newly-submitted certificates before returning an SCT to the submitter, so the effective merge delay is zero! Of course, any log can suffer outages for a variety of reasons, but this feature of Sunlight makes it less likely that any outages will be fatal to a log’s continued operation.
We’ve demonstrated that Sunlight and the Static CT API work in practice, and this demonstration has helped to confirm the browser developers’ hope that Static CT API logs can become an officially-supported part of CT. As a result, the major browsers that enforce CT have now permitted Static CT API logs to apply for inclusion in browsers as publicly-trusted logs, and we’re preparing to apply for this status for our Willow and Sycamore logs with the Chrome and Safari CT log programs.
Let’s Encrypt will run at least these two logs, and possibly others over time, for the foreseeable future. Once they’re trusted by browsers, we’ll encourage other CAs to submit to them as well, and we’ll begin including SCTs from these logs in our own certificates (alongside SCTs from traditional CT logs).
How to participate
The new Static CT API and the rollout of tile-based logs will bring various changes and opportunities for community members.
New Certificate Transparency log operators
Companies and non-profit organizations could help support the web PKI by running a CT log and applying for it to be publicly trusted. Implementations like Sunlight will have substantially lower resource requirements than first-generation CT logs, particularly when cached behind a CDN. The biggest resource demands for a log operator will be storage and upstream bandwidth. A publicly-trusted log is also expected to maintain relatively high availability, because CAs need logs to be available in order to continue issuing certificates.
We don’t have statistics to share about the exact resource requirements for such a log yet, but after we have practical experience running a fully publicly-trusted Sunlight log, we should be able to make this more concrete. As noted above, the compute side of the log can be handled by a single server. Sunlight author Filippo Valsorda has recently started running a Sunlight log—also on just a single server—and offered more detailed cost breakdowns for that log’s setup, with an estimated total cost around $10,000 per year. The costs for our production Static CT API logs may be higher than those for Filippo’s log, but still far less than the costs for our traditional RFC 6962 logs.
As with trust decisions about CAs, browser developers are the authorities about which CT logs become publicly trusted. Although any person or organization can run a log, browser developers will generally prefer to trust logs whose continued availability they’re confident of—typically those run by stable organizations with experience running some form of public Internet services. Unlike becoming a certificate authority, running a log does not require a formal audit, as the validation of the log’s availability and correctness can be performed purely by observation.
Certificate authorities
Once the Willow and Sycamore logs are trusted by browsers, our fellow certificate authorities can choose to start logging certificates to them as part of their issuance processes. (Initially, you should still include at least one SCT from a traditional CT log in each certificate.) The details, including the log API endpoints and keys, are available at our CT log page. You can start submitting to these logs right away if you prefer; just bear in mind that the SCTs they return aren’t useful to subscribers yet, and won’t be useful until browsers are updated to trust the new logs.
CT data users
You can monitor CT in order to watch for certificate issuances for your own domain names, or as part of monitoring or security products or services, or for Internet security research purposes. Many of our colleagues have been doing this for some time as a part of various tools they maintain. The Static CT API should make this easier, because you’ll be able to download and share log tiles as sets of ordinary files.
If you already run such monitoring tools, please note that you’ll need to update your data pipeline in order to access Static CT API logs; since the read API is not backwards-compatible, CT API clients will need to be modified to support the new API. Without updated tools, your view of the CT system will become partial!
Also note that getting a complete view of all of CT will still require downloading data from traditional logs, which will probably continue to be true for several years.
Software developers
As logs based on the new API enter production use, it will be important to have tools to interact with and search these logs. We can all benefit from more software that understands how to do this. Since file downloads are such a familiar piece of software functionality, it will probably be easier for developers to develop against the new API compared to the original one.
We’ve also continued to see greater integration of transparency logging tools into other kinds of services, such as software updates. There’s a growing transparency log ecosystem that’s always in need of more tools and integrations. As we mentioned above, transparency logs are increasingly learning from one another, and there are also mechanisms for more direct integration between different kinds of transparency logs (known as “witnessing”). Software developers can help improve different aspects of Internet security by contributing to this active and growing area.
Conclusion
The Certificate Transparency community and larger transparency logging community have experienced a virtuous cycle of innovation, sharing ideas and implementation code between different systems and demonstrating the feasibility of new mechanisms and functionality. With the advent of tile-based logging in CT, the state of the art has moved forward in a way that helps log operators run our logs much more efficiently without compromising security.
We’re proud to have participated in this experiment and the engineering conversation around the evolution of logging architectures. Now that we’ve shown how well the new API really works at scale, we look forward to having publicly-trusted Sunlight logs later this year!
Every night, right around midnight (mainly UTC), a horde of zombies wakes up and clamors for … digital certificates!
The zombies in question are abandoned or misconfigured Internet servers and ACME clients that have been set to request certificates from Let’s Encrypt. As our certificates last for at most 90 days, these zombie clients’ software knows that their certificates are out-of-date and need to be replaced. What they don’t realize is that their quest for new certificates is doomed! These devices are cursed to seek certificates again and again, never receiving them.
But they do use up a lot of certificate authority resources in the process.
The Zombie Client Problem
Unlike a human being, software doesn’t give up in frustration, or try to modify its approach, when it repeatedly fails at the same task. Our emphasis on automation means that the vast majority of Let’s Encrypt certificate renewals are performed by automated software. This is great when those renewals succeed, but it also means that forgotten clients and devices can continue requesting renewals unsuccessfully for months, or even years.
How might that happen? Most often, it happens when a device no longer has a domain name pointed to it. The device itself doesn’t know that this has changed, so it treats renewal failures as transient even though they are actually permanent. For instance:
- An organization may have allowed a domain name registration to lapse because it is no longer needed, but its servers are still configured to request certs for it.
- Or, a home user stopped using a particular dynamic-DNS domain with a network-attached storage device, but is still using that device at home. The device doesn’t realize that the user no longer expects to use the name, so it keeps requesting certs for it.
- Or, a web hosting or CDN customer migrated to a different service provider, but never informed the old service provider. The old service provider’s servers keep requesting certs unsuccessfully. If the customer was in a free service tier, there might not be invoices or charges reminding the customer to cancel the service.
- Or any number of other, subtler changes in a subscriber’s infrastructure, such as changing a firewall rule or some webserver configuration.
At the scale of Let’s Encrypt, which now covers hundreds of millions of names, scenarios like these have become common, and their impact has become substantial. In 2024, we noticed that about half of all certificate requests to the Let’s Encrypt ACME API came from about a million accounts that never successfully complete any validations. Many of these had completed validations and issued certificates sometime in the past, but nowadays every single one of their validation attempts fails, and they show no signs that this will change anytime soon.
Unfortunately, trying to validate those futile requests still uses resources. Our CA software has to generate challenges, reach out and attempt to validate them over the Internet, detect and report failures, and record all of the associated information in our databases and audit logs. And over time, we’ve seen more and more recurring failures: accounts that always fail their issuance requests have been growing at around 18% per year.
In January, we mentioned that we had been addressing the zombie client problem through our rate limit system. This post provides more detail on that progress.
Our Rate Limit Philosophy
If you’ve used Let’s Encrypt as a subscriber, you may have run into one of our rate limits at some point, maybe during your initial setup process. We have eight different kinds of rate limits in place now; as our January post describes, they’ve become more algorithmically sophisticated and grown to address a wider range of problems. A key principle for Let’s Encrypt is that our rate limiting is not a punishment. We don’t think of rate limits as a way of retaliating against a client for misbehavior. Rate limits are simply a tool to maximize the efficient use of our limited resources and prevent people and programs from using up those resources for no constructive purpose.
We’ve consistently tried to design our rate limit mechanisms in line with that philosophy. So if a misconfiguration or misunderstanding has caused excessive requests in the past, we’re still happy to welcome the user in question back and start issuing them certificates again—once the problem has been addressed. We want the rate limits to put a brake on wasteful use of our systems, but not to frustrate users who are actively trying to make Let’s Encrypt work for them.
In addition, we’ve always implemented our rate limits to err on the side of permissiveness. For example, if the Redis instances where rate limits are tracked have an outage or lose data, the system is designed to permit more issuance rather than less issuance as a result.
We wanted to create additional limits that would target zombie clients, but in a correspondingly non-punitive way that would avoid any disruption to valid issuance, and welcome subscribers back quickly if they happened to notice and fix a long-time problem with their setups.
Our Zombie-Related Rate Limits and Their Impact
In planning a new zombie-specific response, we decided on a “pausing” approach, which can temporarily limit an account’s ability to proceed with certificate requests. The core idea is that, if a particular account consistently fails to complete validation for a particular hostname, we’ll pause that account-hostname pair. The pause means that any new order requests from that account for that hostname will be rejected immediately, before we get to the resource-intensive validation phase.
This approach is more finely targeted than pausing an entire account. Pausing account-hostname pairs means that your ability to issue certs for a specific name could be paused due to repeated failures, but you can still get all of your other certs like normal. So a large hosting provider doesn’t have to fear that its certificate issuance on behalf of one customer will be affected by renewal failures related to a problem with a different customer’s domain name. The account-specificity of the pause, in turn, means that validation failures from one subscriber or device won’t prevent a different subscriber or device from attempting to validate the same name, as long as the devices in question don’t share a single Let’s Encrypt account.
In September 2024, we began applying our zombie rate limits manually by pausing about 21,000 of the most recurrently-failing account-hostname pairs, those which were consistently repeating the same failed requests many times per day, every day. After implementing that first round of pauses, we immediately saw a significant impact on our failed request rates. As we announced at that time, we also began using a formula to automatically pause other zombie client account-hostname pairs from December 2024 onward. The associated new rate limit is called “Consecutive Authorization Failures per Hostname Per Account” (and is independent of the existing “Authorization Failures per Hostname Per Account” limit, which resets every hour).
This formula relates to the frequency of successive failed issuance requests for the same domain name by the same Let’s Encrypt account. It applies only to failures that happen again and again, with no successful issuances at all in between: a single successful validation immediately resets the rate limit all the way to zero. Like all of our rate limits, this is not a punitive measure but is simply intended to reduce the waste of resources. So, we decided to set the thresholds rather high in the expectation that we would catch only the most disruptive zombie clients, and ultimately only those clients that were extremely unlikely to succeed in the future based on their substantial history of failed requests. We don’t hurry to block requesters as zombies: according to our current formula, client software following the default established by EFF’s Certbot (two renewal attempts per day) would be paused as a zombie only after about ten years of constant failures. More aggressive failed issuance attempts will get a client paused sooner, but clients will generally have to fail hundreds or thousands of attempts in a row before they are paused.
Most subscribers using mainstream client applications with default configurations will never encounter this rate limit, even if they forget to deactivate renewal attempts for domains that are no longer pointed at their servers. As described below, our current limit is already providing noticeable benefits with minimal disruption, and we’re likely to tighten it a bit in the near future, so it will trigger after somewhat fewer consecutive failures.
Self-Service Unpausing
A key feature of our zombie issuance pausing mechanism is self-service unpausing. Whenever an account-hostname pair is paused, any new certificate requests for that hostname submitted by that account are immediately rejected. But this means the “one successful validation immediately resets the rate limit counter” feature can no longer come into effect: once a pair is paused, the account can't even attempt validation for that hostname anymore.
So every rejection comes with an error message explaining what has happened and a custom link that can be used to immediately unpause that account-hostname pair and remove any other pauses on the same account at the same time. The point of this is that subscribers who notice at some point that issuance is failing and want to intervene to get it working again have a straightforward option to let Let’s Encrypt know that they’re aware of the recurring failures and are still planning to use a particular account. As soon as subscribers notify us via the self-service link, they’ll be able to issue certificates again.
Currently, the user interface for an affected subscriber looks like this:
[Screenshot: the self-service unpause page shown to affected subscribers]
The link is provided via an ACME error message in response to any request that is blocked due to a paused account-hostname pair.
As it’s turned out, the unpause option shown above has only been used by about 3% of affected accounts! This goes to show that most of the zombies we’ve paused were, in fact, well and truly forgotten about.
However, the unpause feature is there for whenever it’s needed, and there may be cases when it will become more important. A very large integration could trigger the zombie-related rate limits if a newly-introduced software bug causes what looks like a very high volume of zombie requests in a very short time. In that case, once that bug has been noticed and fixed, an integrator may need to unpause its issuance on behalf of lots of customers at once. Our unpause feature permits unpausing 50,000 domain names on a single account at a time, so even the largest integrators can get themselves unpaused expeditiously in this situation.
Conclusion
We’ve been very happy with the results of our zombie mitigation measures, and, as far as we can tell, there’s been almost no impact for subscribers! Our statistics indicate that we’ve managed to reduce the load on our infrastructure while causing no detectable harm or inconvenience to subscribers’ valid issuance requests.
Since implementing the manual pauses in September and the automated pauses in December, we’ve seen:
- Over 100,000 account-hostname pairs paused for excessive failures.
- Zero (!) associated complaints or support requests.
- About 3,200 accounts manually unpaused their issuance.
- A roughly 30% drop in failed certificate orders so far, with further reductions expected as we fine-tune the rate limit formula and catch more zombie clients.
The new rate limit and the self-service unpause system are also ready to deal with circumstances that might produce more zombie clients in the future. For instance, we’ve announced that we’re going to be discontinuing renewal reminder emails soon. If some subscribers overlook failed renewals in the future, we might see more paused clients that result from unintentional renewal failures. We think taking advantage of the existing self-service unpause feature will be straightforward in that case. But it’s much better to notice problems and get them fixed up front, so please remember to set up your own monitoring to avoid unnoticed renewal failures in the future.
If you’re a subscriber who’s had occasion to use the self-service unpause feature, we’d love your feedback on the Community Forum about your experience using the feature and the circumstances that surrounded your account’s getting paused.
Also, if you’re a Let’s Encrypt client developer, please remember to make renewal requests at a random time (not precisely at midnight) so that the load on our infrastructure is smoothed out. You can also reduce the impact of zombie renewals by repeating failed requests somewhat less frequently over time (a “back-off” strategy), especially if the failure reason makes it look like a domain name may no longer be in use at all.
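To illustrate both suggestions, here's a rough, hypothetical sketch in Go of a renewal loop with jitter and back-off; none of this is taken from a real client, and a production client should also honor ARI and any Retry-After headers it receives:

package main

import (
	"errors"
	"math/rand"
	"time"
)

// nextWait doubles the delay after each consecutive failure, up to a weekly
// cap, and adds up to an hour of random jitter so that renewal attempts
// don't cluster at exactly midnight.
func nextWait(base time.Duration, consecutiveFailures int) time.Duration {
	wait := base
	for i := 0; i < consecutiveFailures && wait < 7*24*time.Hour; i++ {
		wait *= 2
	}
	return wait + time.Duration(rand.Int63n(int64(time.Hour)))
}

func renew() error {
	// Placeholder for a real ACME renewal attempt.
	return errors.New("not implemented")
}

func main() {
	failures := 0
	for {
		if err := renew(); err != nil {
			failures++ // back off: this domain may no longer be in use
		} else {
			failures = 0
		}
		time.Sleep(nextWait(12*time.Hour, failures))
	}
}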
At Let’s Encrypt we know that building a secure Internet isn’t just a technical challenge—it’s a long-term commitment. Over the past decade we’ve made enormous strides: from issuing billions of TLS certificates to continually innovating to keep the web safer and more accessible. But none of this would be possible without recurring donations from individuals and organizations around the world.
Recurring donations are more than just financial support; they allow us to plan, innovate, and keep improving with confidence, knowing that month after month, year after year, our supporters are there. This consistent backing empowers us to maintain a secure, privacy-respecting Internet for all.
Our tenth anniversary tagline, Encryption for Everybody, highlights this vision. It’s both a technical goal and a fundamental belief that secure communication should be available to everyone, everywhere.
When we asked our recurring donors why they give, their responses affirmed how essential this commitment is. One longtime supporter shared:
Supporting Let's Encrypt aligns with my belief in a privacy-conscious world, where encrypted communication is the default.
For some, it’s about paying it forward, helping future users benefit as they once did:
For my 18th birthday, I got my last name as a domain. As a young tech enthusiast with little money, Let's Encrypt made it possible for me to get a TLS certificate and learn about technology. Back then, I was a student using it for free. Now that I have a stable income, donating is my way of giving back and helping others have the same opportunities I did.
The next decade of Let’s Encrypt will likely be about maintaining that commitment to encryption for everybody. It’s about ensuring that our work remains reliable, accessible, and—most importantly—supported by people who believe in what we do. To everyone who’s been part of this journey, thank you. We couldn’t do it without you.
During Let’s Encrypt’s 10th Anniversary Year, we’re celebrating our community and reflecting on our journey. We’d be thrilled to hear from you. Connect with us on LinkedIn, our community forum, or email us at outreach@letsencrypt.org. Let’s keep building a secure Internet together!
Let’s Encrypt is a project of the nonprofit Internet Security Research Group, a 501(c)(3) nonprofit. To support our work, visit letsencrypt.org/donate.
Let’s Encrypt will no longer include the “TLS Client Authentication” Extended Key Usage (EKU) in our certificates beginning in 2026. Most users who use Let’s Encrypt to secure websites won’t be affected and won’t need to take any action. However, if you use Let’s Encrypt certificates as client certificates to authenticate to a server, this change may impact you.
To minimize disruption, Let’s Encrypt will roll this change out in multiple stages, using ACME Profiles:
- Today: Let’s Encrypt already excludes the Client Authentication EKU on our tlsserver ACME profile. You can verify compatibility by issuing certificates with this profile now.
- October 1, 2025: Let’s Encrypt will launch a new tlsclient ACME profile which will retain the TLS Client Authentication EKU. Users who need additional time to migrate can opt in to this profile.
- February 11, 2026: the default classic ACME profile will no longer contain the Client Authentication EKU.
- May 13, 2026: the tlsclient ACME profile will no longer be available, and no further certificates with the Client Authentication EKU will be issued.
Once this is completed, Let’s Encrypt will switch to issuing with new intermediate Certificate Authorities which also do not contain the TLS Client Authentication EKU.
By way of background, all certificates include a list of intended uses, known as Extended Key Usages (EKUs). Let’s Encrypt certificates have included two EKUs: TLS Server Authentication and TLS Client Authentication.
- TLS Server Authentication is used to authenticate connections to TLS Servers, like websites.
- TLS Client Authentication is used by clients to authenticate themselves to a server. This feature is not typically used on the web, and is not required on the certificates used on a website.
After this change is complete, only TLS Server Authentication will be available from Let’s Encrypt.
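If you're not sure whether anything you operate relies on the client EKU, one option is to inspect the certificates you already have. Here's a small sketch using Go's standard library (the command-line handling is ours; run it with a PEM certificate path as the argument):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

// Prints the Extended Key Usages of a PEM-encoded certificate, so you can
// check whether anything you operate relies on TLS Client Authentication.
func main() {
	pemBytes, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	for _, eku := range cert.ExtKeyUsage {
		switch eku {
		case x509.ExtKeyUsageServerAuth:
			fmt.Println("TLS Web Server Authentication")
		case x509.ExtKeyUsageClientAuth:
			fmt.Println("TLS Web Client Authentication")
		default:
			fmt.Printf("other EKU: %d\n", eku)
		}
	}
}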
This change is prompted by changes to Google Chrome’s root program requirements, which impose a June 2026 deadline to split TLS Client and Server Authentication into separate PKIs. Many uses of client authentication are better served by a private certificate authority, and so Let’s Encrypt is discontinuing support for TLS Client Authentication ahead of this deadline.
How Pebble Supports ACME Client Developers
Together with the IETF community, we created the ACME standard to support completely automated certificate issuance. This open standard is now supported by dozens of clients. On the server side, did you know that we have not one but two open-source ACME server implementations?
The big implementation, which we use ourselves in production, is called Boulder. Boulder handles all of the facets and details needed for a production certificate authority, including policy compliance, database interfaces, challenge verifications, and logging. You can adapt and use Boulder yourself if you need to run a real certificate authority, including an internal, non-publicly-trusted ACME certificate authority within an organization.
The small implementation is called Pebble. It’s meant entirely for testing, not for use as a real certificate authority, and we and ACME client developers use it for various automated and manual testing purposes. For example, Certbot has used Pebble in its development process for years in order to perform a series of basic but realistic checks of the ability to request and obtain certificates from an ACME server.
Pebble is Easy to Use for ACME Client Testing
For any developer or team creating an ACME client application, Pebble solves a range of problems along the lines of “how do I check whether I’ve implemented ACME correctly, so that I could actually get certificates from a CA, without necessarily using a real domain name, and without running into CA rate limits during my routine testing?” Pebble is quick and easy to set up if you need to test an ACME client’s functionality.
It runs in RAM without dependencies or persistence; you won’t need to set up a database or a configuration for it. You can get Pebble running with a single Go command in just a few seconds, and immediately start making local ACME requests. That makes it suitable for inclusion in a client’s integration test suite, enabling much more realistic integration tests without needing to worry about real domains, CA rate limits, or network outages.
We see Pebble getting used in the official test suites for ACME clients including getssl, Lego, Certbot, simp_le, and others. In many cases, every change committed to the ACME client’s code base is automatically tested against Pebble.
Pebble is Intentionally Different From Boulder
Pebble is also deliberately different from Boulder in some places in order to provide clients with an opportunity to interoperate with slightly different ACME implementations. The Pebble code explains that
[I]n places where the ACME specification allows customization/CA choice Pebble aims to make choices different from Boulder. For instance, Pebble changes the path structures for its resources and directory endpoints to differ from Boulder. The goal is to emphasize client specification compatibility and to avoid "over-fitting" on Boulder and the Let's Encrypt production service.
For instance, the Let’s Encrypt service currently offers its newAccount resource at the path /acme/new-acct, whereas Pebble uses a different path, /sign-me-up, so clients will be reminded to check the directory rather than assuming a specific path. Other substantive differences include:
- Pebble rejects 5% of all requests as having an invalid nonce, even if the nonce was otherwise valid, so clients can test how they respond to this error condition
- Pebble only reuses valid authorizations 50% of the time, so clients can check their ability to perform validations when they might not have expected to
- Pebble truncates timestamps to a different degree of precision than Boulder
- Unlike Boulder, Pebble respects the notBefore and notAfter fields of new-order requests
An ACME client’s ability to work with both implementations is a good test of its conformance to the ACME specification, rather than of assumptions about the current behavior of the Let’s Encrypt service in particular. This helps ensure that clients will work properly with other ACME CAs, and also with future versions of Let’s Encrypt’s own API.
Pebble is Useful to Both Let’s Encrypt and Client Developers as ACME Evolves
We often test out new ACME features by implementing them, at least in a simplified form, in Pebble before Boulder. This lets us and client developers experiment with support for those features even before they get rolled out in our staging service. We can do this quickly because a Pebble feature implementation doesn’t have to work with a full-scale CA backend.
We continue to encourage ACME client developers to use a copy of Pebble to test their clients’ functionality and ACME interoperability. It’s convenient and it’s likely to increase the correctness and robustness of their client applications.
Try Out Pebble Yourself
Want to try Pebble with your ACME client right now? On a Unix-like system, you can run
git clone https://github.com/letsencrypt/pebble/
cd pebble
go run ./cmd/pebble
Wait a few seconds; now you have a working ACME CA directory available at https://localhost:14000/dir! Your local ACME server can immediately receive requests and issue certificates, though not publicly-trusted ones, of course. (If you prefer, we also offer other options for installing Pebble, like a Docker image.)
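One quick way to confirm the directory is reachable is to fetch it programmatically. A minimal Go sketch follows; because Pebble serves a locally generated test certificate, it skips TLS verification, which you should only ever do against a local test server:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Pebble's listener uses a locally generated test certificate, so we
	// disable verification here. Never do this against a real ACME CA.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://localhost:14000/dir")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body)) // the ACME directory as JSON
}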
We welcome code contributions to Pebble. For example, ACME client developers may want to add simple versions of an ACME feature that’s not currently tested in Pebble in order to make their test suites more comprehensive. Also, if you notice a possibly unintended divergence between Pebble and Boulder or Pebble and the ACME specification, we’d love for you to let us know.
As we touched on in our first blog post highlighting ten years of Let’s Encrypt: just as remarkable to us as the technical innovations behind proliferating TLS at scale is the sustained generosity we have benefited from throughout our first decade.
With that sense of gratitude top of mind, we are proud to announce a contribution of $1,000,000 from Jeff Atwood. Jeff has been a longtime supporter of our work, beginning many years ago with Discourse providing our community forum pro bono; something Discourse still provides to this day. As best we can tell, our forum has helped hundreds of thousands of people get up and running with Let’s Encrypt—an impact that has helped billions of people use an Internet that’s more secure and privacy-respecting thanks to widely adopted TLS.
When we first spoke with Jeff about the road ahead for Let’s Encrypt back in 2023, we knew a few things wouldn’t change no matter how the Internet changes over the next decade:
- Free TLS is the only way to ensure it is and remains accessible to as many people as possible.
- Let’s Encrypt is here to provide a reliable, trusted, and sound service no matter the scale.
- Generosity from our global community of supporters will be how we sustain our work.
We’re proud that Jeff not only agrees, but has chosen to support us in such a meaningful way. In discussing how Jeff might want us to best celebrate his generosity and recognize his commitment to our work, he shared:
Let's Encrypt is a golden example of how creating inalienable good is possible with the right approach and the right values. And while I'm excited about the work Let's Encrypt has done, I am eager to see their work continue to keep up with the growing Web; to sustain encryption for everybody at Internet scale. To do so is going to take more than me—it's going to take a community of people committed to this work. I am confident Let's Encrypt is a project that deserves all of our support, in ways both large and small.
Indeed, this contribution is significant because of its scale, but more importantly because of its signal: a signal that supporting the not-so-glamorous but oh-so-nerdy work of encryption at scale matters to the lives of billions of people every day; a signal that supporting free privacy and security afforded by TLS for all of the Internet’s five billion users just makes sense.
Ten years ago we set out to build a better Internet through easy-to-use TLS. If you or your organization have supported us throughout the years, thank you for joining Jeff in believing in the work of Let’s Encrypt. For a deeper dive into the impact of Let’s Encrypt and ISRG’s other projects, take a look at our most recent annual report.
Let’s Encrypt is a project of the nonprofit Internet Security Research Group, a 501(c)(3) nonprofit committed to protecting Internet users by lowering monetary, technological, and informational barriers to a more secure and privacy-respecting Internet. For more, visit abetterinternet.org. Press inquiries can be sent to press@abetterinternet.org
Earlier this year we announced our intention to introduce short-lived certificates with lifetimes of six days as an option for our subscribers. Yesterday we issued our first short-lived certificate. You can see the certificate at the bottom of our post, or here thanks to Certificate Transparency logs. We issued it to ourselves and then immediately revoked it so we can observe the certificate’s whole lifecycle. This is the first step towards making short-lived certificates available to all subscribers.
The next step is for us to make short-lived certificates available to a small set of our subscribers so we can make sure our systems scale as expected prior to general availability. We expect this next phase to begin during Q2 of this year.
We expect short-lived certificates to be generally available by the end of this year.
How To Get Six-Day Certificates
Once short-lived certificates are an option for you, you’ll need to use an ACME client that supports ACME certificate profiles and select the short-lived certificate profile (“shortlived”). The lego client recently added this functionality.
In the meantime, the best way to prepare to take advantage of short-lived certificates is to make sure your ACME client is reliably renewing certificates in an automated fashion. If that’s working well, then there should be no cost to switching to short-lived certificates.
You’ll also want to be sure your ACME client is running frequently - both for the sake of renewing short-lived certificates and so as to take advantage of ACME Renewal Information (ARI). ARI allows Let’s Encrypt to notify your client if it should renew early for some reason. ARI checks should happen at least once per day, and short-lived certificates should be renewed every two to three days, so we recommend having your client run at least once per day.
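As a rough illustration of that pacing, a client might schedule the renewal of a short-lived certificate about halfway through its lifetime when no ARI guidance is available. This sketch is our own, using the validity dates from the certificate shown at the bottom of this post:

package main

import (
	"crypto/x509"
	"fmt"
	"time"
)

// renewAt picks a renewal time at roughly half of the certificate's
// lifetime, so a certificate valid for about six and a half days gets
// renewed every ~3 days. When ARI is available, its suggested window
// should take precedence over this fallback.
func renewAt(cert *x509.Certificate) time.Time {
	lifetime := cert.NotAfter.Sub(cert.NotBefore)
	return cert.NotBefore.Add(lifetime / 2)
}

func main() {
	// Validity dates from the short-lived certificate below.
	cert := &x509.Certificate{
		NotBefore: time.Date(2025, 2, 19, 17, 30, 1, 0, time.UTC),
		NotAfter:  time.Date(2025, 2, 26, 9, 30, 0, 0, time.UTC),
	}
	fmt.Println("renew at:", renewAt(cert))
}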
Shorter Certificate Lifetimes Are Good for Security
When the private key associated with a certificate is compromised, the recommendation has always been to have the certificate revoked so that people will know not to use it. Unfortunately, certificate revocation doesn’t work very well. This means that certificates with compromised keys (or other issues) may continue to be used until they expire. The longer the lifetime of the certificate, the longer the potential for use of a problematic certificate.
The primary advantage of short-lived certificates is that they greatly reduce the potential compromise window because they expire relatively quickly. This reduces the need for certificate revocation, which has historically been unreliable. Our six-day certificates will not include OCSP or CRL URLs. Additionally, short-lived certificates practically require automation, and we believe that automating certificate issuance is important for security.
Questions
If you have questions or comments about our plans, feel free to let us know on our community forums.
We’d like to thank Open Technology Fund for supporting this work.
Our First 6-Day Certificate
PEM format:
-----BEGIN CERTIFICATE-----
MIIDSzCCAtGgAwIBAgISA7CwFcGk4mQWEXMacRtxHeDvMAoGCCqGSM49BAMDMDIx
CzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MQswCQYDVQQDEwJF
NjAeFw0yNTAyMTkxNzMwMDFaFw0yNTAyMjYwOTMwMDBaMAAwWTATBgcqhkjOPQIB
BggqhkjOPQMBBwNCAAQoSItt2V1aocI5dxrKR8iLfmm0KiVvOhiwKByzu2kLeC7C
0BdfAgtwdICdkuEhAXokhXLq6DNZZgmh5T4flVwZo4IB9zCCAfMwDgYDVR0PAQH/
BAQDAgeAMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwHwYDVR0j
BBgwFoAUkydGmAOpUWiOmNbEQkjbI79YlNIwVQYIKwYBBQUHAQEESTBHMCEGCCsG
AQUFBzABhhVodHRwOi8vZTYuby5sZW5jci5vcmcwIgYIKwYBBQUHMAKGFmh0dHA6
Ly9lNi5pLmxlbmNyLm9yZy8wKAYDVR0RAQH/BB4wHIIaaGVsbG93b3JsZC5sZXRz
ZW5jcnlwdC5vcmcwEwYDVR0gBAwwCjAIBgZngQwBAgEwggEFBgorBgEEAdZ5AgQC
BIH2BIHzAPEAdgDM+w9qhXEJZf6Vm1PO6bJ8IumFXA2XjbapflTA/kwNsAAAAZUf
d/zOAAAEAwBHMEUCIFNd51TfSNiJrO+294t49C5ANc4oC7gTUzf7xnlNlhKsAiEA
wi5hfiC9SsKLxlTQ0sctUxhLmdYh40r6ECWQS/yWw2AAdwDgkrP8DB3I52g2H95h
uZZNClJ4GYpy1nLEsE2lbW9UBAAAAZUfd/0TAAAEAwBIMEYCIQCs2NuZIUIloOaH
1t9eXDKb8bjoWESBPsK4i2BxMvEIswIhAOMNaQNyr1YkzrcNUz15qGV0oVLg5BJN
+ikWxXOdcRHFMAoGCCqGSM49BAMDA2gAMGUCMDANqy7G09AIwzXcd7SNl7uFwhC+
xlfduvp1PeEDHc/FA9K3mRYkGXuKtzNdOh7wcAIxALjEMDmBQiwXbB447oGkaZAe
0rqxA3EtNV5wj0obeObluj/NgUsVEG9OqiBIoggFRw==
-----END CERTIFICATE-----
openssl x509 -text output:
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
03:b0:b0:15:c1:a4:e2:64:16:11:73:1a:71:1b:71:1d:e0:ef
Signature Algorithm: ecdsa-with-SHA384
Issuer: C=US, O=Let's Encrypt, CN=E6
Validity
Not Before: Feb 19 17:30:01 2025 GMT
Not After : Feb 26 09:30:00 2025 GMT
Subject:
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
04:28:48:8b:6d:d9:5d:5a:a1:c2:39:77:1a:ca:47:
c8:8b:7e:69:b4:2a:25:6f:3a:18:b0:28:1c:b3:bb:
69:0b:78:2e:c2:d0:17:5f:02:0b:70:74:80:9d:92:
e1:21:01:7a:24:85:72:ea:e8:33:59:66:09:a1:e5:
3e:1f:95:5c:19
ASN1 OID: prime256v1
NIST CURVE: P-256
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature
X509v3 Extended Key Usage:
TLS Web Server Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Authority Key Identifier:
93:27:46:98:03:A9:51:68:8E:98:D6:C4:42:48:DB:23:BF:58:94:D2
Authority Information Access:
OCSP - URI:http://e6.o.lencr.org
CA Issuers - URI:http://e6.i.lencr.org/
X509v3 Subject Alternative Name: critical
DNS:helloworld.letsencrypt.org
X509v3 Certificate Policies:
Policy: 2.23.140.1.2.1
CT Precertificate SCTs:
Signed Certificate Timestamp:
Version : v1 (0x0)
Log ID : CC:FB:0F:6A:85:71:09:65:FE:95:9B:53:CE:E9:B2:7C:
22:E9:85:5C:0D:97:8D:B6:A9:7E:54:C0:FE:4C:0D:B0
Timestamp : Feb 19 18:28:32.078 2025 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:45:02:20:53:5D:E7:54:DF:48:D8:89:AC:EF:B6:F7:
8B:78:F4:2E:40:35:CE:28:0B:B8:13:53:37:FB:C6:79:
4D:96:12:AC:02:21:00:C2:2E:61:7E:20:BD:4A:C2:8B:
C6:54:D0:D2:C7:2D:53:18:4B:99:D6:21:E3:4A:FA:10:
25:90:4B:FC:96:C3:60
Signed Certificate Timestamp:
Version : v1 (0x0)
Log ID : E0:92:B3:FC:0C:1D:C8:E7:68:36:1F:DE:61:B9:96:4D:
0A:52:78:19:8A:72:D6:72:C4:B0:4D:A5:6D:6F:54:04
Timestamp : Feb 19 18:28:32.147 2025 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:46:02:21:00:AC:D8:DB:99:21:42:25:A0:E6:87:D6:
DF:5E:5C:32:9B:F1:B8:E8:58:44:81:3E:C2:B8:8B:60:
71:32:F1:08:B3:02:21:00:E3:0D:69:03:72:AF:56:24:
CE:B7:0D:53:3D:79:A8:65:74:A1:52:E0:E4:12:4D:FA:
29:16:C5:73:9D:71:11:C5
Signature Algorithm: ecdsa-with-SHA384
Signature Value:
30:65:02:30:30:0d:ab:2e:c6:d3:d0:08:c3:35:dc:77:b4:8d:
97:bb:85:c2:10:be:c6:57:dd:ba:fa:75:3d:e1:03:1d:cf:c5:
03:d2:b7:99:16:24:19:7b:8a:b7:33:5d:3a:1e:f0:70:02:31:
00:b8:c4:30:39:81:42:2c:17:6c:1e:38:ee:81:a4:69:90:1e:
d2:ba:b1:03:71:2d:35:5e:70:8f:4a:1b:78:e6:e5:ba:3f:cd:
81:4b:15:10:6f:4e:aa:20:48:a2:08:05:47

2025 marks ten years of Let’s Encrypt. Already this year we’ve taken steps to continue to deliver on our values of user privacy, efficiency, and innovation, all with the intent of continuing to deliver free TLS certificates to as many people as possible; to deliver encryption for everybody.
And while we’re excited about the technical progress we’ll make this year, we’re also going to celebrate this tenth anniversary by highlighting the people around the world who make our impact possible. It’s no small village.
From a community forum that has provided free technical support, to our roster of sponsors who provide vital funding, to the thousands of individual supporters who contribute financially to Let’s Encrypt each year, free TLS at Internet scale works because people have supported it year in, year out, for ten years.
Each month we’ll highlight a different set of people behind our “everybody.” Who do you want to see us highlight? What use cases of Let’s Encrypt have you seen that amazed you? What about our work do you hope we’ll continue or improve as we go forward? Let us know on LinkedIn, or drop a note to outreach@letsencrypt.org.
Encryption for Everybody is our unofficial tagline for this tenth anniversary year. What we love about it is that, yes, it captures our commitment to ensuring anyone around the world can easily get a cert for free. But more importantly, it captures the reality that technical innovation won’t work without people believing in it and supporting it. We’re grateful that, for ten years (and counting!), our community of supporters has made an impact on the lives of billions of Internet users—an impact that’s made the Web more secure and privacy-respecting for everybody, everywhere.
Internet Security Research Group (ISRG) is the parent organization of Let’s Encrypt, Prossimo, and Divvi Up. ISRG is a 501(c)(3) nonprofit. If you’d like to support our work, please consider getting involved, donating, or encouraging your company to become a sponsor.
Let’s Encrypt protects a vast portion of the Web by providing TLS certificates to over 550 million websites—a figure that has grown by 42% in the last year alone. We currently issue over 340,000 certificates per hour. To manage this immense traffic and maintain responsiveness under high demand, our infrastructure relies on rate limiting. In 2015, we introduced our first rate limiting system, built on MariaDB. It evolved alongside our rapidly growing service but eventually revealed its limits: straining database servers, forcing long reset times on subscribers, and slowing down every request.
We needed a solution built for the future—one that could scale with demand, reduce the load on MariaDB, and adapt to real-world subscriber request patterns. The result was a new rate limiting system powered by Redis and a proven virtual scheduling algorithm from the mid-90s: efficient, scalable, and capable of handling over a billion active certificates.
Rate Limiting a Free Service is Hard
In 2015, Let’s Encrypt was in early preview, and we faced a unique challenge. We were poised to become incredibly popular, offering certificates freely and without requiring contact information or email verification. Ensuring fair usage and preventing abuse without traditional safeguards demanded an atypical approach to rate limiting.
We decided to limit the number of certificates issued—per week—for each registered domain. Registered domains are a limited resource with real costs, making them a natural and effective basis for rate limiting—one that mirrors the structure of the Web itself. Specifically, this approach targets the effective Top-Level Domain (eTLD), as defined by the Public Suffix List (PSL), plus one additional label to the left. For example, in new.blog.example.co.uk, the eTLD is .co.uk, making example.co.uk the eTLD+1.
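If you want to compute the eTLD+1 yourself, the golang.org/x/net/publicsuffix package (which bundles the PSL) exposes this directly; a quick sketch:

package main

import (
	"fmt"
	"log"

	"golang.org/x/net/publicsuffix"
)

func main() {
	// EffectiveTLDPlusOne derives the registered domain from any hostname
	// using the Public Suffix List bundled with the package.
	etldPlusOne, err := publicsuffix.EffectiveTLDPlusOne("new.blog.example.co.uk")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(etldPlusOne) // example.co.uk

	suffix, icann := publicsuffix.PublicSuffix("new.blog.example.co.uk")
	fmt.Println(suffix, icann) // co.uk true
}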
Counting Events Was Easy
For each successfully issued certificate, we logged an entry in a table that recorded the registered domain, the issuance date, and other relevant details. To enforce rate limits, the system scanned this table, counted the rows matching a given registered domain within a specific time window, and compared the total to a configured threshold. This simple design formed the basis for all future rate limits.
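In sketch form, the check amounted to a single aggregate query followed by a comparison. The table and column names below are illustrative, not Boulder's actual schema:

package ratelimit

import (
	"context"
	"database/sql"
	"time"
)

// underLimit reports whether a registered domain may be issued another
// certificate: count recent issuances, then compare against a threshold.
// Illustrative only; the real schema and limits differ.
func underLimit(ctx context.Context, db *sql.DB, domain string, window time.Duration, threshold int) (bool, error) {
	var n int
	err := db.QueryRowContext(ctx,
		`SELECT COUNT(*) FROM certificates
		 WHERE registered_domain = ? AND issued_at > ?`,
		domain, time.Now().Add(-window),
	).Scan(&n)
	if err != nil {
		return false, err
	}
	return n < threshold, nil
}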
Counting a Lot of Events Got Expensive
By 2019, we had added six new rate limits to protect our infrastructure as demand for certificates surged. Enforcing these limits required frequent scans of database tables to count recent matching events. These operations, especially on our heavily-used authorizations table, caused significant overhead, with reads outpacing all other tables—often by an order of magnitude.
Rate limit calculations were performed early in request processing and often. Counting rows in MariaDB, particularly for accounts with rate limit overrides, was inherently expensive and quickly became a scaling bottleneck.
Adding new limits required careful trade-offs. Decisions about whether to reuse existing schema, optimize indexes, or design purpose-built tables helped balance performance, complexity, and long-term maintainability.
Buying Runway — Offloading Reads
In late 2021, we updated our control plane and Boulder—our in-house CA software—to route most API reads, including rate limit checks, to database replicas. This reduced the load on the primary database and improved its overall health. At the same time, however, latency of rate limit checks during peak hours continued to rise, highlighting the limitations of scaling reads alone.
Sliding Windows Got Frustrating
Subscribers were frequently hitting rate limits unexpectedly, leaving them unable to request certificates for days. This issue stemmed from our use of relatively large rate limiting windows—most spanning a week. Subscribers could deplete their entire limit in just a few moments by repeating the same request, and find themselves locked out for the remainder of the week. This approach was inflexible and disruptive, causing unnecessary frustration and delays.
In early 2022, we patched the Duplicate Certificate limit to address this rigidity. Using a naive token-bucket approach, we allowed users to “earn back” requests incrementally, cutting the wait time—once rate limited—to about 1.4 days. The patch worked by fetching recent issuance timestamps and calculating the time between them to grant requests based on the time waited. This change also allowed us to include a Retry-After timestamp in rate limited responses. While this improved the user experience for this one limit, we understood it to be a temporary fix for a system in need of a larger overhaul.
When a Problem Grows Large Enough, It Finds the Time for You
Setting aside time for a complete overhaul of our rate-limiting system wasn’t easy. Our development team, composed of just three permanent engineers, typically juggles several competing priorities. Yet by 2023, our flagging rate limits code had begun to endanger the reliability of our MariaDB databases.
Our authorizations table was now regularly read an order of magnitude more than any other. Individually identifying and deleting unnecessary rows—or specific values—had proved unworkable due to poor MariaDB delete performance. Storage engines like InnoDB must maintain indexes, foreign key constraints, and transaction logs for every deletion, which significantly increases overhead for concurrent transactions and leads to gruelingly slow deletes.
Our SRE team automated the cleanup of old rows for many tables using the PARTITION command, which worked well for bookkeeping and compliance data. Unfortunately, we couldn’t apply it to most of our purpose-built rate limit tables. These tables depend on ON DUPLICATE KEY UPDATE, a mechanism that requires the targeted column to be a unique index or primary key, while partitioning demands that the primary key be included in the partitioning key.
Indexes on these tables—such as those tracking requested hostnames—often grew larger than the tables themselves and, in some cases, exceeded the memory of our smaller staging environment databases, eventually forcing us to periodically wipe them entirely.
By late 2023, this cascading confluence of complexities required a reckoning. We set out to design a rate limiting system built for the future.
The Solution: Redis + GCRA
We designed a system from the ground up that combines Redis for storage and the Generic Cell Rate Algorithm (GCRA) for managing request flow.
Why Redis?
Our engineers were already familiar with Redis, having recently deployed it to cache and serve OCSP responses. Its high throughput and low latency made it a candidate for tracking rate limit state as well.
By moving this data from MariaDB to Redis, we could eliminate the need for ever-expanding, purpose-built tables and indexes, significantly reducing read and write pressure. Redis’s feature set made it a perfect fit for the task. Most rate limit data is ephemeral—after a few days, or sometimes just minutes, it becomes irrelevant unless the subscriber calls us again. Redis’s per-key Time-To-Live would allow us to expire this data the moment it was no longer needed.
Redis also supports atomic integer operations, enabling fast, reliable counter updates, even when increments occur concurrently. Its “set if not exist” functionality ensures efficient initialization of keys, while pipeline support allows us to get and set multiple keys in bulk. This combination of familiarity, speed, simplicity, and flexibility made Redis the natural choice.
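For a sense of how these primitives look in practice, here's a brief sketch using the go-redis client (our choice purely for illustration; the key names are made up):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// "Set if not exists" with a TTL: initialize a key that simply
	// disappears once it's no longer relevant.
	created, err := rdb.SetNX(ctx, "limit:example.co.uk", 0, 7*24*time.Hour).Result()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("initialized:", created)

	// Atomic increment, safe under concurrent writers.
	count, err := rdb.Incr(ctx, "limit:example.co.uk").Result()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("count:", count)

	// Pipelining batches several reads into one round trip.
	var a, b *redis.StringCmd
	_, err = rdb.Pipelined(ctx, func(pipe redis.Pipeliner) error {
		a = pipe.Get(ctx, "limit:example.co.uk")
		b = pipe.Get(ctx, "limit:example.org")
		return nil
	})
	if err != nil && err != redis.Nil {
		log.Fatal(err)
	}
	fmt.Println(a.Val(), b.Val())
}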
Why GCRA?
The Generic Cell Rate Algorithm (GCRA) is a virtual scheduling algorithm originally designed for telecommunication networks to regulate traffic and prevent congestion. Unlike traditional sliding window approaches that work in fixed time blocks, GCRA enforces rate limits continuously, making it well-suited to our goals.
A rate limit in GCRA is defined by two parameters: the emission interval and the burst tolerance. The emission interval specifies the minimum time that must pass between consecutive requests to maintain a steady rate. For example, an emission interval of one second allows one request per second on average. The burst tolerance determines how much unused capacity can be drawn on to allow short bursts of requests beyond the steady rate.
When a request is received, GCRA compares the current time to the Theoretical Arrival Time (TAT), which indicates when the next request is allowed under the steady rate. If the current time is greater than or equal to the TAT, the request is permitted, and the TAT is updated by adding the emission interval. If the current time plus the burst tolerance is greater than or equal to the TAT, the request is also permitted. In this case, the TAT is updated by adding the emission interval, reducing the remaining burst capacity.
However, if the current time plus the burst tolerance is less than the TAT, the request exceeds the rate limit and is denied. Conveniently, the difference between the TAT and the current time can then be returned to the subscriber in a Retry-After header, informing their client exactly how long to wait before trying again.
To illustrate, consider a rate limit of one request per second (emission interval = 1s) with a burst tolerance of three requests. Up to three requests can arrive back-to-back, but subsequent requests will be delayed until “now” catches up to the TAT, ensuring that the average rate over time remains one request per second.
What sets GCRA apart is its ability to automatically refill capacity gradually and continuously. Unlike sliding windows, where users must wait for an entire time block to reset, GCRA allows users to retry as soon as enough time has passed to maintain the steady rate. This dynamic pacing reduces frustration and provides a smoother, more predictable experience for subscribers.
GCRA is also storage and computationally efficient. It requires tracking only the TAT—stored as a single Unix timestamp—and performing simple arithmetic to enforce limits. This lightweight design allows it to scale to handle billions of requests, with minimal computational and memory overhead.
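To make that arithmetic concrete, here is a compact single-process GCRA sketch in Go. Our production implementation keeps the TAT in Redis, as described above, and differs in its details, so treat this as an illustration of the algorithm rather than our actual code:

package main

import (
	"fmt"
	"time"
)

// limiter implements GCRA for a single process: one request per emission
// interval on average, with enough burst tolerance for burstRequests
// back-to-back requests. Only the TAT is stored per key.
type limiter struct {
	emission time.Duration
	burst    time.Duration
	tat      map[string]time.Time // Theoretical Arrival Time per key
}

func newLimiter(emission time.Duration, burstRequests int) *limiter {
	return &limiter{
		emission: emission,
		// A tolerance of (n-1) emission intervals lets n requests arrive
		// back-to-back before the steady rate is enforced.
		burst: time.Duration(burstRequests-1) * emission,
		tat:   map[string]time.Time{},
	}
}

// allow reports whether a request for key may proceed at time now; when it
// is denied, retryAfter is the value for a Retry-After header.
func (l *limiter) allow(key string, now time.Time) (ok bool, retryAfter time.Duration) {
	tat := l.tat[key]
	if tat.Before(now) {
		tat = now // an idle key has fully regained its capacity
	}
	if tat.Sub(now) > l.burst {
		return false, tat.Sub(now) - l.burst // denied; the TAT is left unchanged
	}
	l.tat[key] = tat.Add(l.emission)
	return true, 0
}

func main() {
	// One request per second on average, bursts of up to three.
	lim := newLimiter(time.Second, 3)
	now := time.Now()
	for i := 1; i <= 5; i++ {
		ok, retry := lim.allow("example.co.uk", now)
		fmt.Printf("request %d: allowed=%v retryAfter=%v\n", i, ok, retry)
	}
}

Running this with the example from above (one request per second, burst tolerance of three) allows the first three back-to-back requests immediately and answers the fourth and fifth with a one-second Retry-After, keeping the long-run average at one request per second.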
The Results: Faster, Smoother, and More Scalable
The transition to Redis and GCRA brought immediate, measurable improvements. We cut database load, improved response times, and delivered consistent performance even during periods of peak traffic. Subscribers now experience smoother, more predictable behavior, while the system’s increased permissiveness allows for certificates that the previous approach would have delayed—all achieved without sacrificing scalability or fairness.
Rate Limit Check Latency
Check latency is the extra time added to each request while verifying rate limit compliance. Under the old MariaDB-based system, these checks slowed noticeably during peak traffic, when database contention caused significant delays. Our new Redis-based system dramatically reduced this overhead. The high-traffic “new-order” endpoint saw the greatest improvement, while the “new-account” endpoint—though considerably lighter in traffic—also benefited, especially callers with IPv6 addresses. These results show that our subscribers now experience consistent response times, even under peak load.
Database Health
Our once strained database servers are now operating with ample headroom. In total, MariaDB operations have dropped by 80%, improving responsiveness, reducing contention, and freeing up resources for mission-critical issuance workflows.
Buffer pool requests have decreased by more than 50%, improving caching efficiency and reducing overall memory pressure.
Reads of the authorizations table—a notorious bottleneck—have dropped by over 99%. Previously, this table outpaced all others by more than two orders of magnitude; now it ranks second (the green line below), just narrowly surpassing our third most-read table.
Tracking Zombie Clients
In late 2024, we turned our new rate limiting system toward a longstanding challenge: “zombie clients.” These requesters repeatedly attempt to issue certificates but fail, often because of expired domains or misconfigured DNS records. Together, they generate nearly half of all order attempts yet almost never succeed. We were able to build on this new infrastructure to record consecutive ACME challenge failures by account/domain pair and automatically “pause” this problematic issuance. The result has been a considerable reduction in resource consumption, freeing database and network capacity without disrupting legitimate traffic.
Scalability on Redis
Before deploying the limits to track zombie clients, we maintained just over 12.6 million unique TATs across several Redis databases. Within 24 hours, that number more than doubled to 26 million, and by the end of the week, it peaked at over 30 million. Yet, even with this sharp increase, there was no noticeable impact on rate limit responsiveness. That’s all we’ll share for now about zombie clients—there’s plenty more to unpack, but we’ll save those insights and figures for a future blog post.
What’s Next?
Scaling our rate limits to keep pace with the growth of the Web is a huge achievement, but there’s still more to do. In the near term, many of our other ACME endpoints rely on load balancers to enforce per-IP limits, which works but gives us little control over the feedback provided to subscribers. We’re looking to deploy this new infrastructure across those endpoints as well. Looking further ahead, we’re exploring how we might redefine our rate limits now that we’re no longer constrained by a system that simply counts events between two points in time.
By adopting Redis and GCRA, we’ve built a flexible, efficient rate limit system that promotes fair usage and enables our infrastructure to handle ever-growing demand. We’ll keep adapting to the ever-evolving Web while honoring our primary goal: giving people the certificates they need, for free, in the most user-friendly way we can.
Since its inception, Let’s Encrypt has been sending expiration notification emails to subscribers that have provided an email address to us. We will be ending this service on June 4, 2025. The decision to end this service is the result of the following factors:
- Over the past 10 years more and more of our subscribers have been able to put reliable automation into place for certificate renewal.
- Providing expiration notification emails means that we have to retain millions of email addresses connected to issuance records. As an organization that values privacy, removing this requirement is important to us.
- Providing expiration notifications costs Let’s Encrypt tens of thousands of dollars per year, money that we believe can be better spent on other aspects of our infrastructure.
- Providing expiration notifications adds complexity to our infrastructure, which takes time and attention to manage and increases the likelihood of mistakes being made. Over the long term, particularly as we add support for new service components, we need to manage overall complexity by phasing out system components that can no longer be justified.
For those who would like to continue receiving expiration notifications, we recommend using a third-party service such as Red Sift Certificates Lite (formerly Hardenize). Red Sift’s monitoring service provides expiration emails free of charge for up to 250 certificates. More monitoring options can be found here.
While we will be minimizing the email addresses we retain connected to issuance data, you can opt in to receive other emails. We’ll keep you informed about technical updates, and other news about Let’s Encrypt and our parent nonprofit, ISRG, based on the preferences you choose. You can sign up for our email lists below: