In A Nutshell
About Android OS
Some parts of Android will be familiar, such as the Linux kernel, OpenGL, and the SQLite database. Others may be completely foreign, such as Android's idea of the application life cycle. You'll need a good understanding of these key concepts in order to write well-behaved Android applications. Let's start by taking a look at the overall system architecture: the key layers and components that make up the Android stack.
Linux From Scratch
There are always many ways to accomplish a single task. The same can be said of Linux distributions. A great many have existed over the years. Some still exist, some have morphed into something else, yet others have been relegated to our memories. They all do things differently to suit the needs of their target audience. Because so many different ways to accomplish the same end goal exist, I began to realize I no longer had to be limited by any one implementation. Prior to discovering Linux, we simply put up with issues in other operating systems because we had no choice. It was what it was, whether you liked it or not. With Linux, the concept of choice began to emerge. If you didn't like something, you were free, even encouraged, to change it.
Creating a Raspberry Pi-Based Beowulf Cluster
Raspberry Pis have taken the embedded Linux community by storm. For those unfamiliar, a Raspberry Pi (RPi) is a small (credit-card-sized), inexpensive single-board computer capable of running Linux and other lightweight operating systems on ARM processors. A Beowulf cluster, in turn, is simply a collection of identical, typically commodity, computer systems networked together and running some kind of parallel processing software that allows each node in the cluster to share data and computation. Joshua Kiepert, Boise State University
Let's Encrypt News
We Issued Our First Six-Day Cert
Earlier this year we announced our intention to introduce short-lived certificates with lifetimes of six days as an option for our subscribers. Yesterday we issued our first short-lived certificate. You can see the certificate at the bottom of our post, or in Certificate Transparency logs. We issued it to ourselves and then immediately revoked it so we can observe the certificate’s whole lifecycle. This is the first step towards making short-lived certificates available to all subscribers.
The next step is for us to make short-lived certificates available to a small set of our subscribers so we can make sure our systems scale as expected prior to general availability. We expect this next phase to begin during Q2 of this year.
We expect short-lived certificates to be generally available by the end of this year.
How To Get Six-Day Certificates
Once short-lived certificates are an option for you, you’ll need to use an ACME client that supports ACME certificate profiles and select the short-lived certificate profile (“shortlived”). The lego client recently added this functionality.
In the meantime, the best way to prepare to take advantage of short-lived certificates is to make sure your ACME client is reliably renewing certificates in an automated fashion. If that’s working well, there should be no cost to switching to short-lived certificates.
You’ll also want to be sure your ACME client runs frequently, both to renew short-lived certificates on time and to take advantage of ACME Renewal Information (ARI). ARI allows Let’s Encrypt to notify your client if it should renew early for some reason. ARI checks should happen at least once per day, and short-lived certificates should be renewed every two to three days, so we recommend having your client run at least once per day.
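As a back-of-the-envelope illustration of that cadence, a renewal point can be derived from a certificate's validity window. The halfway fraction below is an assumed policy, chosen only because it lands at three days for a six-day certificate; it is not a prescribed value:

```python
from datetime import datetime, timedelta, timezone

def renewal_time(not_before: datetime, not_after: datetime,
                 fraction: float = 0.5) -> datetime:
    """Renew once `fraction` of the certificate's lifetime has elapsed."""
    return not_before + (not_after - not_before) * fraction

# A six-day certificate, matching the lifetimes described above.
issued = datetime(2025, 2, 19, 17, 30, tzinfo=timezone.utc)
expires = issued + timedelta(days=6)
renew_at = renewal_time(issued, expires)
print(renew_at - issued)  # 3 days, 0:00:00
```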
Shorter Certificate Lifetimes Are Good for Security
When the private key associated with a certificate is compromised, the recommendation has always been to have the certificate revoked so that people will know not to use it. Unfortunately, certificate revocation doesn’t work very well. This means that certificates with compromised keys (or other issues) may continue to be used until they expire. The longer the lifetime of the certificate, the longer the potential for use of a problematic certificate.
The primary advantage of short-lived certificates is that they greatly reduce the potential compromise window because they expire relatively quickly. This reduces the need for certificate revocation, which has historically been unreliable. Our six-day certificates will not include OCSP or CRL URLs. Additionally, short-lived certificates practically require automation, and we believe that automating certificate issuance is important for security.
Questions
If you have questions or comments about our plans, feel free to let us know on our community forums.
We’d like to thank Open Technology Fund for supporting this work.
Our First 6-Day Certificate
PEM format:
-----BEGIN CERTIFICATE-----
MIIDSzCCAtGgAwIBAgISA7CwFcGk4mQWEXMacRtxHeDvMAoGCCqGSM49BAMDMDIx
CzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MQswCQYDVQQDEwJF
NjAeFw0yNTAyMTkxNzMwMDFaFw0yNTAyMjYwOTMwMDBaMAAwWTATBgcqhkjOPQIB
BggqhkjOPQMBBwNCAAQoSItt2V1aocI5dxrKR8iLfmm0KiVvOhiwKByzu2kLeC7C
0BdfAgtwdICdkuEhAXokhXLq6DNZZgmh5T4flVwZo4IB9zCCAfMwDgYDVR0PAQH/
BAQDAgeAMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwHwYDVR0j
BBgwFoAUkydGmAOpUWiOmNbEQkjbI79YlNIwVQYIKwYBBQUHAQEESTBHMCEGCCsG
AQUFBzABhhVodHRwOi8vZTYuby5sZW5jci5vcmcwIgYIKwYBBQUHMAKGFmh0dHA6
Ly9lNi5pLmxlbmNyLm9yZy8wKAYDVR0RAQH/BB4wHIIaaGVsbG93b3JsZC5sZXRz
ZW5jcnlwdC5vcmcwEwYDVR0gBAwwCjAIBgZngQwBAgEwggEFBgorBgEEAdZ5AgQC
BIH2BIHzAPEAdgDM+w9qhXEJZf6Vm1PO6bJ8IumFXA2XjbapflTA/kwNsAAAAZUf
d/zOAAAEAwBHMEUCIFNd51TfSNiJrO+294t49C5ANc4oC7gTUzf7xnlNlhKsAiEA
wi5hfiC9SsKLxlTQ0sctUxhLmdYh40r6ECWQS/yWw2AAdwDgkrP8DB3I52g2H95h
uZZNClJ4GYpy1nLEsE2lbW9UBAAAAZUfd/0TAAAEAwBIMEYCIQCs2NuZIUIloOaH
1t9eXDKb8bjoWESBPsK4i2BxMvEIswIhAOMNaQNyr1YkzrcNUz15qGV0oVLg5BJN
+ikWxXOdcRHFMAoGCCqGSM49BAMDA2gAMGUCMDANqy7G09AIwzXcd7SNl7uFwhC+
xlfduvp1PeEDHc/FA9K3mRYkGXuKtzNdOh7wcAIxALjEMDmBQiwXbB447oGkaZAe
0rqxA3EtNV5wj0obeObluj/NgUsVEG9OqiBIoggFRw==
-----END CERTIFICATE-----
openssl x509 -text output:
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
03:b0:b0:15:c1:a4:e2:64:16:11:73:1a:71:1b:71:1d:e0:ef
Signature Algorithm: ecdsa-with-SHA384
Issuer: C=US, O=Let's Encrypt, CN=E6
Validity
Not Before: Feb 19 17:30:01 2025 GMT
Not After : Feb 26 09:30:00 2025 GMT
Subject:
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
04:28:48:8b:6d:d9:5d:5a:a1:c2:39:77:1a:ca:47:
c8:8b:7e:69:b4:2a:25:6f:3a:18:b0:28:1c:b3:bb:
69:0b:78:2e:c2:d0:17:5f:02:0b:70:74:80:9d:92:
e1:21:01:7a:24:85:72:ea:e8:33:59:66:09:a1:e5:
3e:1f:95:5c:19
ASN1 OID: prime256v1
NIST CURVE: P-256
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature
X509v3 Extended Key Usage:
TLS Web Server Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Authority Key Identifier:
93:27:46:98:03:A9:51:68:8E:98:D6:C4:42:48:DB:23:BF:58:94:D2
Authority Information Access:
OCSP - URI:http://e6.o.lencr.org
CA Issuers - URI:http://e6.i.lencr.org/
X509v3 Subject Alternative Name: critical
DNS:helloworld.letsencrypt.org
X509v3 Certificate Policies:
Policy: 2.23.140.1.2.1
CT Precertificate SCTs:
Signed Certificate Timestamp:
Version : v1 (0x0)
Log ID : CC:FB:0F:6A:85:71:09:65:FE:95:9B:53:CE:E9:B2:7C:
22:E9:85:5C:0D:97:8D:B6:A9:7E:54:C0:FE:4C:0D:B0
Timestamp : Feb 19 18:28:32.078 2025 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:45:02:20:53:5D:E7:54:DF:48:D8:89:AC:EF:B6:F7:
8B:78:F4:2E:40:35:CE:28:0B:B8:13:53:37:FB:C6:79:
4D:96:12:AC:02:21:00:C2:2E:61:7E:20:BD:4A:C2:8B:
C6:54:D0:D2:C7:2D:53:18:4B:99:D6:21:E3:4A:FA:10:
25:90:4B:FC:96:C3:60
Signed Certificate Timestamp:
Version : v1 (0x0)
Log ID : E0:92:B3:FC:0C:1D:C8:E7:68:36:1F:DE:61:B9:96:4D:
0A:52:78:19:8A:72:D6:72:C4:B0:4D:A5:6D:6F:54:04
Timestamp : Feb 19 18:28:32.147 2025 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:46:02:21:00:AC:D8:DB:99:21:42:25:A0:E6:87:D6:
DF:5E:5C:32:9B:F1:B8:E8:58:44:81:3E:C2:B8:8B:60:
71:32:F1:08:B3:02:21:00:E3:0D:69:03:72:AF:56:24:
CE:B7:0D:53:3D:79:A8:65:74:A1:52:E0:E4:12:4D:FA:
29:16:C5:73:9D:71:11:C5
Signature Algorithm: ecdsa-with-SHA384
Signature Value:
30:65:02:30:30:0d:ab:2e:c6:d3:d0:08:c3:35:dc:77:b4:8d:
97:bb:85:c2:10:be:c6:57:dd:ba:fa:75:3d:e1:03:1d:cf:c5:
03:d2:b7:99:16:24:19:7b:8a:b7:33:5d:3a:1e:f0:70:02:31:
00:b8:c4:30:39:81:42:2c:17:6c:1e:38:ee:81:a4:69:90:1e:
d2:ba:b1:03:71:2d:35:5e:70:8f:4a:1b:78:e6:e5:ba:3f:cd:
81:4b:15:10:6f:4e:aa:20:48:a2:08:05:47
Thu, 20 Feb 2025 00:00:00 +0000
Encryption for Everybody
2025 marks ten years of Let’s Encrypt. Already this year we’ve taken steps to continue to deliver on our values of user privacy, efficiency, and innovation, all with the intent of continuing to deliver free TLS certificates to as many people as possible; to deliver encryption for everybody.
And while we’re excited about the technical progress we’ll make this year, we’re also going to celebrate this tenth anniversary by highlighting the people around the world who make our impact possible. It’s no small village.
From a community forum that has provided free technical support, to our roster of sponsors who provide vital funding, to the thousands of individual supporters who contribute financially to Let’s Encrypt each year, free TLS at Internet scale works because people have supported it year in, year out, for ten years.
Each month we’ll highlight a different set of people behind our “everybody.” Who do you want to see us highlight? What use cases of Let’s Encrypt have you seen that amazed you? What about our work do you hope we’ll continue or improve as we go forward? Let us know on LinkedIn, or drop a note to outreach@letsencrypt.org.
Encryption for Everybody is our unofficial tagline for this tenth anniversary year. What we love about it is that, yes, it captures our commitment to ensuring anyone around the world can easily get a cert for free. But more importantly, it captures the reality that technical innovation won’t work without people believing in it and supporting it. We’re grateful that, for ten years (and counting!), our community of supporters has made an impact on the lives of billions of Internet users—an impact that’s made the Web more secure and privacy-respecting for everybody, everywhere.
Internet Security Research Group (ISRG) is the parent organization of Let’s Encrypt, Prossimo, and Divvi Up. ISRG is a 501(c)(3) nonprofit. If you’d like to support our work, please consider getting involved, donating, or encouraging your company to become a sponsor.
Fri, 14 Feb 2025 00:00:00 +0000
Scaling Our Rate Limits to Prepare for a Billion Active Certificates
Let’s Encrypt protects a vast portion of the Web by providing TLS certificates to over 550 million websites—a figure that has grown by 42% in the last year alone. We currently issue over 340,000 certificates per hour. To manage this immense traffic and maintain responsiveness under high demand, our infrastructure relies on rate limiting. In 2015, we introduced our first rate limiting system, built on MariaDB. It evolved alongside our rapidly growing service but eventually revealed its limits: straining database servers, forcing long reset times on subscribers, and slowing down every request.
We needed a solution built for the future—one that could scale with demand, reduce the load on MariaDB, and adapt to real-world subscriber request patterns. The result was a new rate limiting system powered by Redis and a proven virtual scheduling algorithm from the mid-90s: efficient, scalable, and capable of handling over a billion active certificates.
Rate Limiting a Free Service is Hard
In 2015, Let’s Encrypt was in early preview, and we faced a unique challenge. We were poised to become incredibly popular, offering certificates freely and without requiring contact information or email verification. Ensuring fair usage and preventing abuse without traditional safeguards demanded an atypical approach to rate limiting.
We decided to limit the number of certificates issued—per week—for each registered domain. Registered domains are a limited resource with real costs, making them a natural and effective basis for rate limiting—one that mirrors the structure of the Web itself. Specifically, this approach targets the effective Top-Level Domain (eTLD), as defined by the Public Suffix List (PSL), plus one additional label to the left. For example, in new.blog.example.co.uk, the eTLD is .co.uk, making example.co.uk the eTLD+1.
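That lookup can be sketched in a few lines. The two-entry suffix set here is a hard-coded stand-in for the PSL; a real implementation consults the full list, including its wildcard and exception rules:

```python
# Hypothetical miniature suffix set standing in for the Public Suffix List.
PUBLIC_SUFFIXES = {"co.uk", "uk", "com", "org"}

def etld_plus_one(hostname: str) -> str:
    """Return the registered domain: the longest public suffix plus one label."""
    labels = hostname.lower().split(".")
    # Scanning from the full name downward finds the longest matching suffix first.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in PUBLIC_SUFFIXES:
            if i == 0:
                raise ValueError(f"{hostname} is itself a public suffix")
            return ".".join(labels[i - 1:])
    raise ValueError(f"no public suffix found for {hostname}")

print(etld_plus_one("new.blog.example.co.uk"))  # example.co.uk
```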
Counting Events Was Easy
For each successfully issued certificate, we logged an entry in a table that recorded the registered domain, the issuance date, and other relevant details. To enforce rate limits, the system scanned this table, counted the rows matching a given registered domain within a specific time window, and compared the total to a configured threshold. This simple design formed the basis for all future rate limits.
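The scheme can be sketched as follows, with SQLite standing in for MariaDB. The table layout and the threshold of 50 are illustrative assumptions, not Boulder's actual schema or limits:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Minimal stand-in for the issuance log described above.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE certificates (registered_domain TEXT, issued_at TEXT)")

LIMIT_PER_WEEK = 50  # illustrative threshold

def under_limit(domain: str, now: datetime) -> bool:
    """Count recent issuances for the domain and compare to the threshold."""
    cutoff = (now - timedelta(days=7)).isoformat()
    (count,) = db.execute(
        "SELECT COUNT(*) FROM certificates"
        " WHERE registered_domain = ? AND issued_at >= ?",
        (domain, cutoff),
    ).fetchone()
    return count < LIMIT_PER_WEEK

now = datetime(2025, 2, 19, tzinfo=timezone.utc)
for _ in range(3):
    db.execute("INSERT INTO certificates VALUES (?, ?)",
               ("example.co.uk", now.isoformat()))

print(under_limit("example.co.uk", now))  # True: 3 of 50 this week
```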
Counting a Lot of Events Got Expensive
By 2019, we had added six new rate limits to protect our infrastructure as demand for certificates surged. Enforcing these limits required frequent scans of database tables to count recent matching events. These operations, especially on our heavily-used authorizations table, caused significant overhead, with reads outpacing all other tables—often by an order of magnitude.
Rate limit calculations were performed early in request processing and often. Counting rows in MariaDB, particularly for accounts with rate limit overrides, was inherently expensive and quickly became a scaling bottleneck.
Adding new limits required careful trade-offs. Decisions about whether to reuse existing schema, optimize indexes, or design purpose-built tables helped balance performance, complexity, and long-term maintainability.
Buying Runway — Offloading Reads
In late 2021, we updated our control plane and Boulder—our in-house CA software—to route most API reads, including rate limit checks, to database replicas. This reduced the load on the primary database and improved its overall health. At the same time, however, latency of rate limit checks during peak hours continued to rise, highlighting the limitations of scaling reads alone.
Sliding Windows Got Frustrating
Subscribers were frequently hitting rate limits unexpectedly, leaving them unable to request certificates for days. This issue stemmed from our use of relatively large rate limiting windows—most spanning a week. Subscribers could deplete their entire limit in just a few moments by repeating the same request, and find themselves locked out for the remainder of the week. This approach was inflexible and disruptive, causing unnecessary frustration and delays.
In early 2022, we patched the Duplicate Certificate limit to address this rigidity. Using a naive token-bucket approach, we allowed users to “earn back” requests incrementally, cutting the wait time—once rate limited—to about 1.4 days. The patch worked by fetching recent issuance timestamps and calculating the time between them to grant requests based on the time waited. This change also allowed us to include a Retry-After timestamp in rate limited responses. While this improved the user experience for this one limit, we understood it to be a temporary fix for a system in need of a larger overhaul.
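The earn-back arithmetic can be sketched like this. The limit of five duplicate certificates per week is what yields the 1.4-day wait (7 / 5 days); the function itself is an illustration, not Boulder's code:

```python
from datetime import datetime, timedelta, timezone

LIMIT = 5                        # duplicate certificates per window
WINDOW = timedelta(days=7)
EARN_INTERVAL = WINDOW / LIMIT   # one request earned back every 1.4 days

def retry_after(issuances: list[datetime], now: datetime) -> timedelta:
    """How long a client must wait before its next duplicate is granted."""
    recent = sorted(t for t in issuances if now - t < WINDOW)
    if len(recent) < LIMIT:
        return timedelta(0)  # capacity remains; no wait
    # At capacity: one request is earned back per interval since the last grant.
    return max(timedelta(0), recent[-1] + EARN_INTERVAL - now)

now = datetime(2025, 2, 19, tzinfo=timezone.utc)
burst = [now] * LIMIT            # the whole limit spent in one burst
print(retry_after(burst, now))   # 1 day, 9:36:00 (i.e., 1.4 days)
```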
When a Problem Grows Large Enough, It Finds the Time for You
Setting aside time for a complete overhaul of our rate-limiting system wasn’t easy. Our development team, composed of just three permanent engineers, typically juggles several competing priorities. Yet by 2023, our flagging rate limits code had begun to endanger the reliability of our MariaDB databases.
Our authorizations table was now regularly read an order of magnitude more than any other. Individually identifying and deleting unnecessary rows—or specific values—had proved unworkable due to poor MariaDB delete performance. Storage engines like InnoDB must maintain indexes, foreign key constraints, and transaction logs for every deletion, which significantly increases overhead for concurrent transactions and leads to gruelingly slow deletes.
Our SRE team automated the cleanup of old rows for many tables using the PARTITION command, which worked well for bookkeeping and compliance data. Unfortunately, we couldn’t apply it to most of our purpose-built rate limit tables. These tables depend on ON DUPLICATE KEY UPDATE, a mechanism that requires the targeted column to be a unique index or primary key, while partitioning demands that the primary key be included in the partitioning key.
Indexes on these tables—such as those tracking requested hostnames—often grew larger than the tables themselves and, in some cases, exceeded the memory of our smaller staging environment databases, eventually forcing us to periodically wipe them entirely.
By late 2023, this cascading confluence of complexities required a reckoning. We set out to design a rate limiting system built for the future.
The Solution: Redis + GCRA
We designed a system from the ground up that combines Redis for storage and the Generic Cell Rate Algorithm (GCRA) for managing request flow.
Why Redis?
Our engineers were already familiar with Redis, having recently deployed it to cache and serve OCSP responses. Its high throughput and low latency made it a candidate for tracking rate limit state as well.
By moving this data from MariaDB to Redis, we could eliminate the need for ever-expanding, purpose-built tables and indexes, significantly reducing read and write pressure. Redis’s feature set made it a perfect fit for the task. Most rate limit data is ephemeral—after a few days, or sometimes just minutes, it becomes irrelevant unless the subscriber calls us again. Redis’s per-key Time-To-Live would allow us to expire this data the moment it was no longer needed.
Redis also supports atomic integer operations, enabling fast, reliable counter updates, even when increments occur concurrently. Its “set if not exist” functionality ensures efficient initialization of keys, while pipeline support allows us to get and set multiple keys in bulk. This combination of familiarity, speed, simplicity, and flexibility made Redis the natural choice.
Why GCRA?
The Generic Cell Rate Algorithm (GCRA) is a virtual scheduling algorithm originally designed for telecommunication networks to regulate traffic and prevent congestion. Unlike traditional sliding window approaches that work in fixed time blocks, GCRA enforces rate limits continuously, making it well-suited to our goals.
A rate limit in GCRA is defined by two parameters: the emission interval and the burst tolerance. The emission interval specifies the minimum time that must pass between consecutive requests to maintain a steady rate. For example, an emission interval of one second allows one request per second on average. The burst tolerance determines how much unused capacity can be drawn on to allow short bursts of requests beyond the steady rate.
When a request is received, GCRA compares the current time to the Theoretical Arrival Time (TAT), which indicates when the next request is allowed under the steady rate. If the current time is greater than or equal to the TAT, the request is permitted, and the TAT is updated by adding the emission interval. If the current time plus the burst tolerance is greater than or equal to the TAT, the request is also permitted. In this case, the TAT is updated by adding the emission interval, reducing the remaining burst capacity.
However, if the current time plus the burst tolerance is less than the TAT, the request exceeds the rate limit and is denied. Conveniently, the difference between the TAT and the current time can then be returned to the subscriber in a Retry-After header, informing their client exactly how long to wait before trying again.
To illustrate, consider a rate limit of one request per second (emission interval = 1s) with a burst tolerance of three requests. Up to three requests can arrive back-to-back, but subsequent requests will be delayed until “now” catches up to the TAT, ensuring that the average rate over time remains one request per second.
What sets GCRA apart is its ability to automatically refill capacity gradually and continuously. Unlike sliding windows, where users must wait for an entire time block to reset, GCRA allows users to retry as soon as enough time has passed to maintain the steady rate. This dynamic pacing reduces frustration and provides a smoother, more predictable experience for subscribers.
GCRA is also storage and computationally efficient. It requires tracking only the TAT—stored as a single Unix timestamp—and performing simple arithmetic to enforce limits. This lightweight design allows it to scale to handle billions of requests, with minimal computational and memory overhead.
The Results: Faster, Smoother, and More Scalable
The transition to Redis and GCRA brought immediate, measurable improvements. We cut database load, improved response times, and delivered consistent performance even during periods of peak traffic. Subscribers now experience smoother, more predictable behavior, while the system’s increased permissiveness allows for certificates that the previous approach would have delayed—all achieved without sacrificing scalability or fairness.
Rate Limit Check Latency
Check latency is the extra time added to each request while verifying rate limit compliance. Under the old MariaDB-based system, these checks slowed noticeably during peak traffic, when database contention caused significant delays. Our new Redis-based system dramatically reduced this overhead. The high-traffic “new-order” endpoint saw the greatest improvement, while the “new-account” endpoint—though considerably lighter in traffic—also benefited, especially callers with IPv6 addresses. These results show that our subscribers now experience consistent response times, even under peak load.
Database Health
Our once strained database servers are now operating with ample headroom. In total, MariaDB operations have dropped by 80%, improving responsiveness, reducing contention, and freeing up resources for mission-critical issuance workflows.
Buffer pool requests have decreased by more than 50%, improving caching efficiency and reducing overall memory pressure.
Reads of the authorizations table—a notorious bottleneck—have dropped by over 99%. Previously, this table outpaced all others by more than two orders of magnitude; now it ranks second (the green line below), just narrowly surpassing our third most-read table.
Tracking Zombie Clients
In late 2024, we turned our new rate limiting system toward a longstanding challenge: “zombie clients.” These requesters repeatedly attempt to issue certificates but fail, often because of expired domains or misconfigured DNS records. Together, they generate nearly half of all order attempts yet almost never succeed. We were able to build on this new infrastructure to record consecutive ACME challenge failures by account/domain pair and automatically “pause” this problematic issuance. The result has been a considerable reduction in resource consumption, freeing database and network capacity without disrupting legitimate traffic.
Scalability on Redis
Before deploying the limits to track zombie clients, we maintained just over 12.6 million unique TATs across several Redis databases. Within 24 hours, that number more than doubled to 26 million, and by the end of the week, it peaked at over 30 million. Yet, even with this sharp increase, there was no noticeable impact on rate limit responsiveness. That’s all we’ll share for now about zombie clients—there’s plenty more to unpack, but we’ll save those insights and figures for a future blog post.
What’s Next?
Scaling our rate limits to keep pace with the growth of the Web is a huge achievement, but there’s still more to do. In the near term, many of our other ACME endpoints rely on load balancers to enforce per-IP limits, which works but gives us little control over the feedback provided to subscribers. We’re looking to deploy this new infrastructure across those endpoints as well. Looking further ahead, we’re exploring how we might redefine our rate limits now that we’re no longer constrained by a system that simply counts events between two points in time.
By adopting Redis and GCRA, we’ve built a flexible, efficient rate limit system that promotes fair usage and enables our infrastructure to handle ever-growing demand. We’ll keep adapting to the ever-evolving Web while honoring our primary goal: giving people the certificates they need, for free, in the most user-friendly way we can.
Thu, 30 Jan 2025 00:00:00 +0000
Ending Support for Expiration Notification Emails
Earlier this year we announced our intention to introduce short-lived certificates with lifetimes of six days as an option for our subscribers. Yesterday we issued our first short-lived certificate. You can see the certificate at the bottom of our post, or here thanks to Certificate Transparency logs. We issued it to ourselves and then immediately revoked it so we can observe the certificate’s whole lifecycle. This is the first step towards making short-lived certificates available to all subscribers.
The next step is for us to make short-lived certificates available to a small set of our subscribers so we can make sure our systems scale as expected prior to general availability. We expect this next phase to begin during Q2 of this year.
We expect short-lived certificates to be generally available by the end of this year.
How To Get Six-Day Certificates
Once short-lived certificates are an option for you, you’ll need to use an ACME client that supports ACME certificate profiles and select the short-lived certificate profile (“shortlived”). The lego client recently added this functionality.
In the meantime, the best way to prepare to take advantage of short-lived certificates is to make sure your ACME client is reliably renewing certificates in an automated fashion. If that’s working well then there should be no cost to switching to short-lived certificates.
You’ll also want to be sure your ACME client is running frequently, both to renew short-lived certificates and to take advantage of ACME Renewal Information (ARI). ARI allows Let’s Encrypt to notify your client if it should renew early for some reason. ARI checks should happen at least once per day, and short-lived certificates should be renewed every two to three days, so we recommend having your client run at least once per day.
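As a sketch, a crontab entry like the following runs renewal attempts twice daily; certbot is shown only as one example, and any ACME client with ARI support can be scheduled the same way:

```shell
# Hypothetical crontab entry: attempt renewal twice a day.
# The exact command depends on your ACME client; certbot is one example.
17 4,16 * * *  certbot renew --quiet
```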
Shorter Certificate Lifetimes Are Good for Security
When the private key associated with a certificate is compromised, the recommendation has always been to have the certificate revoked so that people will know not to use it. Unfortunately, certificate revocation doesn’t work very well. This means that certificates with compromised keys (or other issues) may continue to be used until they expire. The longer the lifetime of the certificate, the longer the potential for use of a problematic certificate.
The primary advantage of short-lived certificates is that they greatly reduce the potential compromise window because they expire relatively quickly. This reduces the need for certificate revocation, which has historically been unreliable. Our six-day certificates will not include OCSP or CRL URLs. Additionally, short-lived certificates practically require automation, and we believe that automating certificate issuance is important for security.
Questions
If you have questions or comments about our plans, feel free to let us know on our community forums.
We’d like to thank Open Technology Fund for supporting this work.
Our First 6-Day Certificate
PEM format:
-----BEGIN CERTIFICATE-----
MIIDSzCCAtGgAwIBAgISA7CwFcGk4mQWEXMacRtxHeDvMAoGCCqGSM49BAMDMDIx
CzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MQswCQYDVQQDEwJF
NjAeFw0yNTAyMTkxNzMwMDFaFw0yNTAyMjYwOTMwMDBaMAAwWTATBgcqhkjOPQIB
BggqhkjOPQMBBwNCAAQoSItt2V1aocI5dxrKR8iLfmm0KiVvOhiwKByzu2kLeC7C
0BdfAgtwdICdkuEhAXokhXLq6DNZZgmh5T4flVwZo4IB9zCCAfMwDgYDVR0PAQH/
BAQDAgeAMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwHwYDVR0j
BBgwFoAUkydGmAOpUWiOmNbEQkjbI79YlNIwVQYIKwYBBQUHAQEESTBHMCEGCCsG
AQUFBzABhhVodHRwOi8vZTYuby5sZW5jci5vcmcwIgYIKwYBBQUHMAKGFmh0dHA6
Ly9lNi5pLmxlbmNyLm9yZy8wKAYDVR0RAQH/BB4wHIIaaGVsbG93b3JsZC5sZXRz
ZW5jcnlwdC5vcmcwEwYDVR0gBAwwCjAIBgZngQwBAgEwggEFBgorBgEEAdZ5AgQC
BIH2BIHzAPEAdgDM+w9qhXEJZf6Vm1PO6bJ8IumFXA2XjbapflTA/kwNsAAAAZUf
d/zOAAAEAwBHMEUCIFNd51TfSNiJrO+294t49C5ANc4oC7gTUzf7xnlNlhKsAiEA
wi5hfiC9SsKLxlTQ0sctUxhLmdYh40r6ECWQS/yWw2AAdwDgkrP8DB3I52g2H95h
uZZNClJ4GYpy1nLEsE2lbW9UBAAAAZUfd/0TAAAEAwBIMEYCIQCs2NuZIUIloOaH
1t9eXDKb8bjoWESBPsK4i2BxMvEIswIhAOMNaQNyr1YkzrcNUz15qGV0oVLg5BJN
+ikWxXOdcRHFMAoGCCqGSM49BAMDA2gAMGUCMDANqy7G09AIwzXcd7SNl7uFwhC+
xlfduvp1PeEDHc/FA9K3mRYkGXuKtzNdOh7wcAIxALjEMDmBQiwXbB447oGkaZAe
0rqxA3EtNV5wj0obeObluj/NgUsVEG9OqiBIoggFRw==
-----END CERTIFICATE-----
openssl x509 -text output:
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
03:b0:b0:15:c1:a4:e2:64:16:11:73:1a:71:1b:71:1d:e0:ef
Signature Algorithm: ecdsa-with-SHA384
Issuer: C=US, O=Let's Encrypt, CN=E6
Validity
Not Before: Feb 19 17:30:01 2025 GMT
Not After : Feb 26 09:30:00 2025 GMT
Subject:
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
04:28:48:8b:6d:d9:5d:5a:a1:c2:39:77:1a:ca:47:
c8:8b:7e:69:b4:2a:25:6f:3a:18:b0:28:1c:b3:bb:
69:0b:78:2e:c2:d0:17:5f:02:0b:70:74:80:9d:92:
e1:21:01:7a:24:85:72:ea:e8:33:59:66:09:a1:e5:
3e:1f:95:5c:19
ASN1 OID: prime256v1
NIST CURVE: P-256
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature
X509v3 Extended Key Usage:
TLS Web Server Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Authority Key Identifier:
93:27:46:98:03:A9:51:68:8E:98:D6:C4:42:48:DB:23:BF:58:94:D2
Authority Information Access:
OCSP - URI:http://e6.o.lencr.org
CA Issuers - URI:http://e6.i.lencr.org/
X509v3 Subject Alternative Name: critical
DNS:helloworld.letsencrypt.org
X509v3 Certificate Policies:
Policy: 2.23.140.1.2.1
CT Precertificate SCTs:
Signed Certificate Timestamp:
Version : v1 (0x0)
Log ID : CC:FB:0F:6A:85:71:09:65:FE:95:9B:53:CE:E9:B2:7C:
22:E9:85:5C:0D:97:8D:B6:A9:7E:54:C0:FE:4C:0D:B0
Timestamp : Feb 19 18:28:32.078 2025 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:45:02:20:53:5D:E7:54:DF:48:D8:89:AC:EF:B6:F7:
8B:78:F4:2E:40:35:CE:28:0B:B8:13:53:37:FB:C6:79:
4D:96:12:AC:02:21:00:C2:2E:61:7E:20:BD:4A:C2:8B:
C6:54:D0:D2:C7:2D:53:18:4B:99:D6:21:E3:4A:FA:10:
25:90:4B:FC:96:C3:60
Signed Certificate Timestamp:
Version : v1 (0x0)
Log ID : E0:92:B3:FC:0C:1D:C8:E7:68:36:1F:DE:61:B9:96:4D:
0A:52:78:19:8A:72:D6:72:C4:B0:4D:A5:6D:6F:54:04
Timestamp : Feb 19 18:28:32.147 2025 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:46:02:21:00:AC:D8:DB:99:21:42:25:A0:E6:87:D6:
DF:5E:5C:32:9B:F1:B8:E8:58:44:81:3E:C2:B8:8B:60:
71:32:F1:08:B3:02:21:00:E3:0D:69:03:72:AF:56:24:
CE:B7:0D:53:3D:79:A8:65:74:A1:52:E0:E4:12:4D:FA:
29:16:C5:73:9D:71:11:C5
Signature Algorithm: ecdsa-with-SHA384
Signature Value:
30:65:02:30:30:0d:ab:2e:c6:d3:d0:08:c3:35:dc:77:b4:8d:
97:bb:85:c2:10:be:c6:57:dd:ba:fa:75:3d:e1:03:1d:cf:c5:
03:d2:b7:99:16:24:19:7b:8a:b7:33:5d:3a:1e:f0:70:02:31:
00:b8:c4:30:39:81:42:2c:17:6c:1e:38:ee:81:a4:69:90:1e:
d2:ba:b1:03:71:2d:35:5e:70:8f:4a:1b:78:e6:e5:ba:3f:cd:
81:4b:15:10:6f:4e:aa:20:48:a2:08:05:47

2025 marks ten years of Let’s Encrypt. Already this year we’ve taken steps to continue to deliver on our values of user privacy, efficiency, and innovation, all with the intent of continuing to deliver free TLS certificates to as many people as possible; to deliver encryption for everybody.
And while we’re excited about the technical progress we’ll make this year, we’re also going to celebrate this tenth anniversary by highlighting the people around the world who make our impact possible. It’s no small village.
From a community forum that has provided free technical support, to our roster of sponsors who provide vital funding, to the thousands of individual supporters who contribute financially to Let’s Encrypt each year, free TLS at Internet scale works because people have supported it year in, year out, for ten years.
Each month we’ll highlight a different set of people behind our “everybody.” Who do you want to see us highlight? What use cases of Let’s Encrypt have you seen that amazed you? What about our work do you hope we’ll continue or improve as we go forward? Let us know on LinkedIn, or drop a note to outreach@letsencrypt.org.
Encryption for Everybody is our unofficial tagline for this tenth anniversary year. What we love about it is that, yes, it captures our commitment to ensuring anyone around the world can easily get a cert for free. But more importantly, it captures the reality that technical innovation won’t work without people believing in it and supporting it. We’re grateful that, for ten years (and counting!), our community of supporters has made an impact on the lives of billions of Internet users—an impact that’s made the Web more secure and privacy respecting for everybody, everywhere.
Internet Security Research Group (ISRG) is the parent organization of Let’s Encrypt, Prossimo, and Divvi Up. ISRG is a 501(c)(3) nonprofit. If you’d like to support our work, please consider getting involved, donating, or encouraging your company to become a sponsor.
Let’s Encrypt protects a vast portion of the Web by providing TLS certificates to over 550 million websites—a figure that has grown by 42% in the last year alone. We currently issue over 340,000 certificates per hour. To manage this immense traffic and maintain responsiveness under high demand, our infrastructure relies on rate limiting. In 2015, we introduced our first rate limiting system, built on MariaDB. It evolved alongside our rapidly growing service but eventually revealed its limits: straining database servers, forcing long reset times on subscribers, and slowing down every request.
We needed a solution built for the future—one that could scale with demand, reduce the load on MariaDB, and adapt to real-world subscriber request patterns. The result was a new rate limiting system powered by Redis and a proven virtual scheduling algorithm from the mid-90s: efficient, scalable, and capable of handling over a billion active certificates.
Rate Limiting a Free Service is Hard
In 2015, Let’s Encrypt was in early preview, and we faced a unique challenge. We were poised to become incredibly popular, offering certificates freely and without requiring contact information or email verification. Ensuring fair usage and preventing abuse without traditional safeguards demanded an atypical approach to rate limiting.
We decided to limit the number of certificates issued—per week—for each registered domain. Registered domains are a limited resource with real costs, making them a natural and effective basis for rate limiting—one that mirrors the structure of the Web itself. Specifically, this approach targets the effective Top-Level Domain (eTLD), as defined by the Public Suffix List (PSL), plus one additional label to the left. For example, in new.blog.example.co.uk, the eTLD is .co.uk, making example.co.uk the eTLD+1.
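That lookup can be sketched as follows. The handful of suffixes here stands in for the full Public Suffix List, which a real implementation would load from a maintained PSL library; the function is illustrative, not Boulder's actual code:

```python
# Toy eTLD+1 lookup against a tiny, hardcoded slice of the Public Suffix List.
# Production code should use a maintained PSL library; this set exists only
# to make the example self-contained.
PUBLIC_SUFFIXES = {"com", "org", "net", "uk", "co.uk"}

def etld_plus_one(domain):
    labels = domain.lower().rstrip(".").split(".")
    # Scan from the longest candidate suffix to the shortest; the first match
    # is the eTLD, and the label immediately to its left completes the eTLD+1.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in PUBLIC_SUFFIXES:
            if i == 0:
                raise ValueError(f"{domain} is itself a public suffix")
            return ".".join(labels[i - 1:])
    raise ValueError(f"no public suffix found for {domain}")
```

With this list, `etld_plus_one("new.blog.example.co.uk")` yields `example.co.uk`, matching the example above.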
Counting Events Was Easy
For each successfully issued certificate, we logged an entry in a table that recorded the registered domain, the issuance date, and other relevant details. To enforce rate limits, the system scanned this table, counted the rows matching a given registered domain within a specific time window, and compared the total to a configured threshold. This simple design formed the basis for all future rate limits.
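In outline, the check looked something like the following sketch. The schema, threshold, and window are illustrative (SQLite stands in for MariaDB here), not Boulder's actual tables:

```python
import sqlite3
from datetime import datetime, timedelta

# Illustrative schema: one row per issued certificate.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE certificates (registered_domain TEXT, issued_at TEXT)")

def within_limit(conn, domain, threshold=5, window=timedelta(days=7), now=None):
    """Count recent issuances for a registered domain, compare to a threshold."""
    now = now or datetime.now()
    cutoff = (now - window).isoformat()  # ISO strings compare chronologically
    (count,) = conn.execute(
        "SELECT COUNT(*) FROM certificates"
        " WHERE registered_domain = ? AND issued_at >= ?",
        (domain, cutoff),
    ).fetchone()
    return count < threshold
```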
Counting a Lot of Events Got Expensive
By 2019, we had added six new rate limits to protect our infrastructure as demand for certificates surged. Enforcing these limits required frequent scans of database tables to count recent matching events. These operations, especially on our heavily-used authorizations table, caused significant overhead, with reads outpacing all other tables—often by an order of magnitude.
Rate limit calculations were performed early in request processing and often. Counting rows in MariaDB, particularly for accounts with rate limit overrides, was inherently expensive and quickly became a scaling bottleneck.
Adding new limits required careful trade-offs. Decisions about whether to reuse existing schema, optimize indexes, or design purpose-built tables helped balance performance, complexity, and long-term maintainability.
Buying Runway — Offloading Reads
In late 2021, we updated our control plane and Boulder—our in-house CA software—to route most API reads, including rate limit checks, to database replicas. This reduced the load on the primary database and improved its overall health. At the same time, however, latency of rate limit checks during peak hours continued to rise, highlighting the limitations of scaling reads alone.
Sliding Windows Got Frustrating
Subscribers were frequently hitting rate limits unexpectedly, leaving them unable to request certificates for days. This issue stemmed from our use of relatively large rate limiting windows—most spanning a week. Subscribers could deplete their entire limit in just a few moments by repeating the same request, and find themselves locked out for the remainder of the week. This approach was inflexible and disruptive, causing unnecessary frustration and delays.
In early 2022, we patched the Duplicate Certificate limit to address this rigidity. Using a naive token-bucket approach, we allowed users to “earn back” requests incrementally, cutting the wait time—once rate limited—to about 1.4 days. The patch worked by fetching recent issuance timestamps and calculating the time between them to grant requests based on the time waited. This change also allowed us to include a Retry-After timestamp in rate limited responses. While this improved the user experience for this one limit, we understood it to be a temporary fix for a system in need of a larger overhaul.
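That earn-back behavior can be sketched roughly as below, assuming the Duplicate Certificate limit of five per week; the function shape and names are our illustration, not the actual Boulder patch:

```python
from datetime import datetime, timedelta

def retry_after(recent_issuances, now, limit=5, window=timedelta(days=7)):
    """Naive token bucket: one request is earned back every window/limit
    (1.4 days for five per week), measured from the oldest issuance still
    inside the window, instead of waiting for the whole window to reset."""
    recent = sorted(t for t in recent_issuances if now - t < window)
    if len(recent) < limit:
        return timedelta(0)    # not rate limited; issue immediately
    refill = window / limit    # time to earn back one request
    return max(timedelta(0), recent[0] + refill - now)
```

A burst of five identical requests thus produces a Retry-After of about 1.4 days rather than a full week.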
When a Problem Grows Large Enough, It Finds the Time for You
Setting aside time for a complete overhaul of our rate-limiting system wasn’t easy. Our development team, composed of just three permanent engineers, typically juggles several competing priorities. Yet by 2023, our flagging rate limits code had begun to endanger the reliability of our MariaDB databases.
Our authorizations table was now regularly read an order of magnitude more than any other. Individually identifying and deleting unnecessary rows—or specific values—had proved unworkable due to poor MariaDB delete performance. Storage engines like InnoDB must maintain indexes, foreign key constraints, and transaction logs for every deletion, which significantly increases overhead for concurrent transactions and leads to gruelingly slow deletes.
Our SRE team automated the cleanup of old rows for many tables using the PARTITION command, which worked well for bookkeeping and compliance data. Unfortunately, we couldn’t apply it to most of our purpose-built rate limit tables. These tables depend on ON DUPLICATE KEY UPDATE, a mechanism that requires the targeted column to be a unique index or primary key, while partitioning demands that the primary key be included in the partitioning key.
Indexes on these tables—such as those tracking requested hostnames—often grew larger than the tables themselves and, in some cases, exceeded the memory of our smaller staging environment databases, eventually forcing us to periodically wipe them entirely.
By late 2023, this cascading confluence of complexities required a reckoning. We set out to design a rate limiting system built for the future.
The Solution: Redis + GCRA
We designed a system from the ground up that combines Redis for storage and the Generic Cell Rate Algorithm (GCRA) for managing request flow.
Why Redis?
Our engineers were already familiar with Redis, having recently deployed it to cache and serve OCSP responses. Its high throughput and low latency made it a candidate for tracking rate limit state as well.
By moving this data from MariaDB to Redis, we could eliminate the need for ever-expanding, purpose-built tables and indexes, significantly reducing read and write pressure. Redis’s feature set made it a perfect fit for the task. Most rate limit data is ephemeral—after a few days, or sometimes just minutes, it becomes irrelevant unless the subscriber calls us again. Redis’s per-key Time-To-Live would allow us to expire this data the moment it was no longer needed.
Redis also supports atomic integer operations, enabling fast, reliable counter updates, even when increments occur concurrently. Its “set if not exist” functionality ensures efficient initialization of keys, while pipeline support allows us to get and set multiple keys in bulk. This combination of familiarity, speed, simplicity, and flexibility made Redis the natural choice.
Why GCRA?
The Generic Cell Rate Algorithm (GCRA) is a virtual scheduling algorithm originally designed for telecommunication networks to regulate traffic and prevent congestion. Unlike traditional sliding window approaches that work in fixed time blocks, GCRA enforces rate limits continuously, making it well-suited to our goals.
A rate limit in GCRA is defined by two parameters: the emission interval and the burst tolerance. The emission interval specifies the minimum time that must pass between consecutive requests to maintain a steady rate. For example, an emission interval of one second allows one request per second on average. The burst tolerance determines how much unused capacity can be drawn on to allow short bursts of requests beyond the steady rate.
When a request is received, GCRA compares the current time to the Theoretical Arrival Time (TAT), which indicates when the next request is allowed under the steady rate. If the current time is greater than or equal to the TAT, the request is permitted, and the TAT is updated by adding the emission interval. If the current time plus the burst tolerance is greater than or equal to the TAT, the request is also permitted. In this case, the TAT is updated by adding the emission interval, reducing the remaining burst capacity.
However, if the current time plus the burst tolerance is less than the TAT, the request exceeds the rate limit and is denied. Conveniently, the difference between the TAT and the current time can then be returned to the subscriber in a Retry-After header, informing their client exactly how long to wait before trying again.
To illustrate, consider a rate limit of one request per second (emission interval = 1s) with a burst tolerance of three requests. Up to three requests can arrive back-to-back, but subsequent requests will be delayed until “now” catches up to the TAT, ensuring that the average rate over time remains one request per second.
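That illustration can be turned into a compact sketch. The TAT lives in a plain dict here; in production it would be a Redis key with a TTL. The parameter names are ours, and burst tolerance is expressed in seconds (a burst of n back-to-back requests corresponds to (n − 1) emission intervals):

```python
import time

class GCRA:
    """Minimal in-memory GCRA rate limiter."""

    def __init__(self, emission_interval, burst_tolerance):
        self.t = emission_interval      # steady-state spacing between requests
        self.tau = burst_tolerance      # extra capacity, in seconds
        self.tats = {}                  # key -> Theoretical Arrival Time

    def check(self, key, now=None):
        now = time.monotonic() if now is None else now
        tat = max(self.tats.get(key, now), now)
        if now + self.tau >= tat:            # within steady rate or burst
            self.tats[key] = tat + self.t    # consume one emission interval
            return True, 0.0
        retry_after = (tat - self.tau) - now # earliest time the check passes
        return False, retry_after

# One request per second with a burst of three (tau = 2 * emission interval).
limiter = GCRA(emission_interval=1.0, burst_tolerance=2.0)
```

Three calls to `limiter.check` at the same instant succeed; the fourth is denied with a Retry-After of one emission interval.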
What sets GCRA apart is its ability to automatically refill capacity gradually and continuously. Unlike sliding windows, where users must wait for an entire time block to reset, GCRA allows users to retry as soon as enough time has passed to maintain the steady rate. This dynamic pacing reduces frustration and provides a smoother, more predictable experience for subscribers.
GCRA is also efficient in both storage and computation: it requires tracking only the TAT—stored as a single Unix timestamp—and simple arithmetic to enforce limits. This lightweight design scales to billions of requests with minimal computational and memory overhead.
The Results: Faster, Smoother, and More Scalable
The transition to Redis and GCRA brought immediate, measurable improvements. We cut database load, improved response times, and delivered consistent performance even during periods of peak traffic. Subscribers now experience smoother, more predictable behavior, while the system’s increased permissiveness allows for certificates that the previous approach would have delayed—all achieved without sacrificing scalability or fairness.
Rate Limit Check Latency
Check latency is the extra time added to each request while verifying rate limit compliance. Under the old MariaDB-based system, these checks slowed noticeably during peak traffic, when database contention caused significant delays. Our new Redis-based system dramatically reduced this overhead. The high-traffic “new-order” endpoint saw the greatest improvement, while the “new-account” endpoint—though considerably lighter in traffic—also benefited, especially callers with IPv6 addresses. These results show that our subscribers now experience consistent response times, even under peak load.
Database Health
Our once-strained database servers are now operating with ample headroom. In total, MariaDB operations have dropped by 80%, improving responsiveness, reducing contention, and freeing up resources for mission-critical issuance workflows.
Buffer pool requests have decreased by more than 50%, improving caching efficiency and reducing overall memory pressure.
Reads of the authorizations table—a notorious bottleneck—have dropped by over 99%. Previously, this table outpaced all others by more than two orders of magnitude; now it ranks second (the green line below), just narrowly surpassing our third most-read table.
Tracking Zombie Clients
In late 2024, we turned our new rate limiting system toward a longstanding challenge: “zombie clients.” These requesters repeatedly attempt to issue certificates but fail, often because of expired domains or misconfigured DNS records. Together, they generate nearly half of all order attempts yet almost never succeed. We were able to build on this new infrastructure to record consecutive ACME challenge failures by account/domain pair and automatically “pause” this problematic issuance. The result has been a considerable reduction in resource consumption, freeing database and network capacity without disrupting legitimate traffic.
Scalability on Redis
Before deploying the limits to track zombie clients, we maintained just over 12.6 million unique TATs across several Redis databases. Within 24 hours, that number more than doubled to 26 million, and by the end of the week, it peaked at over 30 million. Yet, even with this sharp increase, there was no noticeable impact on rate limit responsiveness. That’s all we’ll share for now about zombie clients—there’s plenty more to unpack, but we’ll save those insights and figures for a future blog post.
What’s Next?
Scaling our rate limits to keep pace with the growth of the Web is a huge achievement, but there’s still more to do. In the near term, many of our other ACME endpoints rely on load balancers to enforce per-IP limits, which works but gives us little control over the feedback provided to subscribers. We’re looking to deploy this new infrastructure across those endpoints as well. Looking further ahead, we’re exploring how we might redefine our rate limits now that we’re no longer constrained by a system that simply counts events between two points in time.
By adopting Redis and GCRA, we’ve built a flexible, efficient rate limit system that promotes fair usage and enables our infrastructure to handle ever-growing demand. We’ll keep adapting to the ever-evolving Web while honoring our primary goal: giving people the certificates they need, for free, in the most user-friendly way we can.
Since its inception, Let’s Encrypt has been sending expiration notification emails to subscribers that have provided an email address to us. We will be ending this service on June 4, 2025. The decision to end this service is the result of the following factors:
- Over the past 10 years more and more of our subscribers have been able to put reliable automation into place for certificate renewal.
- Providing expiration notification emails means that we have to retain millions of email addresses connected to issuance records. As an organization that values privacy, removing this requirement is important to us.
- Providing expiration notifications costs Let’s Encrypt tens of thousands of dollars per year, money that we believe can be better spent on other aspects of our infrastructure.
- Providing expiration notifications adds complexity to our infrastructure, which takes time and attention to manage and increases the likelihood of mistakes being made. Over the long term, particularly as we add support for new service components, we need to manage overall complexity by phasing out system components that can no longer be justified.
For those who would like to continue receiving expiration notifications, we recommend using a third-party service such as Red Sift Certificates Lite (formerly Hardenize). Red Sift’s monitoring service, which provides expiration emails, is free of charge for up to 250 certificates. More monitoring options can be found here.
While we will be minimizing the email addresses we retain connected to issuance data, you can opt in to receive other emails. We’ll keep you informed about technical updates, and other news about Let’s Encrypt and our parent nonprofit, ISRG, based on the preferences you choose. You can sign up for our email lists below:
Wed, 22 Jan 2025 00:00:00 +0000
Announcing Six Day and IP Address Certificate Options in 2025
Earlier this year we announced our intention to introduce short-lived certificates with lifetimes of six days as an option for our subscribers. Yesterday we issued our first short-lived certificate. You can see the certificate at the bottom of our post, or here thanks to Certificate Transparency logs. We issued it to ourselves and then immediately revoked it so we can observe the certificate’s whole lifecycle. This is the first step towards making short-lived certificates available to all subscribers.
The next step is for us to make short-lived certificates available to a small set of our subscribers so we can make sure our systems scale as expected prior to general availability. We expect this next phase to begin during Q2 of this year.
We expect short-lived certificates to be generally available by the end of this year.
How To Get Six-Day Certificates
Once short-lived certificates are an option for you, you’ll need to use an ACME client that supports ACME certificate profiles and select the short-lived certificate profile (“shortlived”). The lego client recently added this functionality.
In the meantime, the best way to prepare to take advantage of short-lived certificates is to make sure your ACME client is reliably renewing certificates in an automated fashion. If that’s working well then there should be no costs to switching to short-lived certificates.
You’ll also want to be sure your ACME client is running frequently, both to renew short-lived certificates and to take advantage of ACME Renewal Information (ARI). ARI allows Let’s Encrypt to notify your client if it should renew early for some reason. ARI checks should happen at least once per day, and short-lived certificates should be renewed every two to three days, so we recommend having your client run at least once per day.
Shorter Certificate Lifetimes Are Good for Security
When the private key associated with a certificate is compromised, the recommendation has always been to have the certificate revoked so that people will know not to use it. Unfortunately, certificate revocation doesn’t work very well. This means that certificates with compromised keys (or other issues) may continue to be used until they expire. The longer the lifetime of the certificate, the longer the potential for use of a problematic certificate.
The primary advantage of short-lived certificates is that they greatly reduce the potential compromise window because they expire relatively quickly. This reduces the need for certificate revocation, which has historically been unreliable. Our six-day certificates will not include OCSP or CRL URLs. Additionally, short-lived certificates practically require automation, and we believe that automating certificate issuance is important for security.
Questions
If you have questions or comments about our plans, feel free to let us know on our community forums.
We’d like to thank Open Technology Fund for supporting this work.
Our First 6-Day Certificate
PEM format:
-----BEGIN CERTIFICATE-----
MIIDSzCCAtGgAwIBAgISA7CwFcGk4mQWEXMacRtxHeDvMAoGCCqGSM49BAMDMDIx
CzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MQswCQYDVQQDEwJF
NjAeFw0yNTAyMTkxNzMwMDFaFw0yNTAyMjYwOTMwMDBaMAAwWTATBgcqhkjOPQIB
BggqhkjOPQMBBwNCAAQoSItt2V1aocI5dxrKR8iLfmm0KiVvOhiwKByzu2kLeC7C
0BdfAgtwdICdkuEhAXokhXLq6DNZZgmh5T4flVwZo4IB9zCCAfMwDgYDVR0PAQH/
BAQDAgeAMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwHwYDVR0j
BBgwFoAUkydGmAOpUWiOmNbEQkjbI79YlNIwVQYIKwYBBQUHAQEESTBHMCEGCCsG
AQUFBzABhhVodHRwOi8vZTYuby5sZW5jci5vcmcwIgYIKwYBBQUHMAKGFmh0dHA6
Ly9lNi5pLmxlbmNyLm9yZy8wKAYDVR0RAQH/BB4wHIIaaGVsbG93b3JsZC5sZXRz
ZW5jcnlwdC5vcmcwEwYDVR0gBAwwCjAIBgZngQwBAgEwggEFBgorBgEEAdZ5AgQC
BIH2BIHzAPEAdgDM+w9qhXEJZf6Vm1PO6bJ8IumFXA2XjbapflTA/kwNsAAAAZUf
d/zOAAAEAwBHMEUCIFNd51TfSNiJrO+294t49C5ANc4oC7gTUzf7xnlNlhKsAiEA
wi5hfiC9SsKLxlTQ0sctUxhLmdYh40r6ECWQS/yWw2AAdwDgkrP8DB3I52g2H95h
uZZNClJ4GYpy1nLEsE2lbW9UBAAAAZUfd/0TAAAEAwBIMEYCIQCs2NuZIUIloOaH
1t9eXDKb8bjoWESBPsK4i2BxMvEIswIhAOMNaQNyr1YkzrcNUz15qGV0oVLg5BJN
+ikWxXOdcRHFMAoGCCqGSM49BAMDA2gAMGUCMDANqy7G09AIwzXcd7SNl7uFwhC+
xlfduvp1PeEDHc/FA9K3mRYkGXuKtzNdOh7wcAIxALjEMDmBQiwXbB447oGkaZAe
0rqxA3EtNV5wj0obeObluj/NgUsVEG9OqiBIoggFRw==
-----END CERTIFICATE-----
openssl x509 -text
output:
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
03:b0:b0:15:c1:a4:e2:64:16:11:73:1a:71:1b:71:1d:e0:ef
Signature Algorithm: ecdsa-with-SHA384
Issuer: C=US, O=Let's Encrypt, CN=E6
Validity
Not Before: Feb 19 17:30:01 2025 GMT
Not After : Feb 26 09:30:00 2025 GMT
Subject:
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
04:28:48:8b:6d:d9:5d:5a:a1:c2:39:77:1a:ca:47:
c8:8b:7e:69:b4:2a:25:6f:3a:18:b0:28:1c:b3:bb:
69:0b:78:2e:c2:d0:17:5f:02:0b:70:74:80:9d:92:
e1:21:01:7a:24:85:72:ea:e8:33:59:66:09:a1:e5:
3e:1f:95:5c:19
ASN1 OID: prime256v1
NIST CURVE: P-256
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature
X509v3 Extended Key Usage:
TLS Web Server Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Authority Key Identifier:
93:27:46:98:03:A9:51:68:8E:98:D6:C4:42:48:DB:23:BF:58:94:D2
Authority Information Access:
OCSP - URI:http://e6.o.lencr.org
CA Issuers - URI:http://e6.i.lencr.org/
X509v3 Subject Alternative Name: critical
DNS:helloworld.letsencrypt.org
X509v3 Certificate Policies:
Policy: 2.23.140.1.2.1
CT Precertificate SCTs:
Signed Certificate Timestamp:
Version : v1 (0x0)
Log ID : CC:FB:0F:6A:85:71:09:65:FE:95:9B:53:CE:E9:B2:7C:
22:E9:85:5C:0D:97:8D:B6:A9:7E:54:C0:FE:4C:0D:B0
Timestamp : Feb 19 18:28:32.078 2025 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:45:02:20:53:5D:E7:54:DF:48:D8:89:AC:EF:B6:F7:
8B:78:F4:2E:40:35:CE:28:0B:B8:13:53:37:FB:C6:79:
4D:96:12:AC:02:21:00:C2:2E:61:7E:20:BD:4A:C2:8B:
C6:54:D0:D2:C7:2D:53:18:4B:99:D6:21:E3:4A:FA:10:
25:90:4B:FC:96:C3:60
Signed Certificate Timestamp:
Version : v1 (0x0)
Log ID : E0:92:B3:FC:0C:1D:C8:E7:68:36:1F:DE:61:B9:96:4D:
0A:52:78:19:8A:72:D6:72:C4:B0:4D:A5:6D:6F:54:04
Timestamp : Feb 19 18:28:32.147 2025 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:46:02:21:00:AC:D8:DB:99:21:42:25:A0:E6:87:D6:
DF:5E:5C:32:9B:F1:B8:E8:58:44:81:3E:C2:B8:8B:60:
71:32:F1:08:B3:02:21:00:E3:0D:69:03:72:AF:56:24:
CE:B7:0D:53:3D:79:A8:65:74:A1:52:E0:E4:12:4D:FA:
29:16:C5:73:9D:71:11:C5
Signature Algorithm: ecdsa-with-SHA384
Signature Value:
30:65:02:30:30:0d:ab:2e:c6:d3:d0:08:c3:35:dc:77:b4:8d:
97:bb:85:c2:10:be:c6:57:dd:ba:fa:75:3d:e1:03:1d:cf:c5:
03:d2:b7:99:16:24:19:7b:8a:b7:33:5d:3a:1e:f0:70:02:31:
00:b8:c4:30:39:81:42:2c:17:6c:1e:38:ee:81:a4:69:90:1e:
d2:ba:b1:03:71:2d:35:5e:70:8f:4a:1b:78:e6:e5:ba:3f:cd:
81:4b:15:10:6f:4e:aa:20:48:a2:08:05:47

2025 marks ten years of Let’s Encrypt. Already this year we’ve taken steps to deliver on our values of user privacy, efficiency, and innovation, all with the intent of continuing to provide free TLS certificates to as many people as possible; to deliver encryption for everybody.
And while we’re excited about the technical progress we’ll make this year, we’re also going to celebrate this tenth anniversary by highlighting the people around the world who make our impact possible. It’s no small village.
From a community forum that has provided free technical support, to our roster of sponsors who provide vital funding, to the thousands of individual supporters who contribute financially to Let’s Encrypt each year, free TLS at Internet scale works because people have supported it year in, year out, for ten years.
Each month we’ll highlight a different set of people behind our “everybody.” Who do you want to see us highlight? What use cases of Let’s Encrypt have you seen that amazed you? What about our work do you hope we’ll continue or improve as we go forward? Let us know on LinkedIn, or drop a note to outreach@letsencrypt.org.
Encryption for Everybody is our unofficial tagline for this tenth anniversary year. What we love about it is that, yes, it captures our commitment to ensuring anyone around the world can easily get a cert for free. But more importantly, it captures the reality that technical innovation won’t work without people believing in it and supporting it. We’re grateful that, for ten years (and counting!), our community of supporters has made an impact on the lives of billions of Internet users—an impact that’s made the Web more secure and privacy respecting for everybody, everywhere.
Internet Security Research Group (ISRG) is the parent organization of Let’s Encrypt, Prossimo, and Divvi Up. ISRG is a 501(c)(3) nonprofit. If you’d like to support our work, please consider getting involved, donating, or encouraging your company to become a sponsor.
Let’s Encrypt protects a vast portion of the Web by providing TLS certificates to over 550 million websites—a figure that has grown by 42% in the last year alone. We currently issue over 340,000 certificates per hour. To manage this immense traffic and maintain responsiveness under high demand, our infrastructure relies on rate limiting. In 2015, we introduced our first rate limiting system, built on MariaDB. It evolved alongside our rapidly growing service but eventually revealed its limits: straining database servers, forcing long reset times on subscribers, and slowing down every request.
We needed a solution built for the future—one that could scale with demand, reduce the load on MariaDB, and adapt to real-world subscriber request patterns. The result was a new rate limiting system powered by Redis and a proven virtual scheduling algorithm from the mid-90s: efficient, scalable, and capable of handling over a billion active certificates.
Rate Limiting a Free Service is Hard
In 2015, Let’s Encrypt was in early preview, and we faced a unique challenge. We were poised to become incredibly popular, offering certificates freely and without requiring contact information or email verification. Ensuring fair usage and preventing abuse without traditional safeguards demanded an atypical approach to rate limiting.
We decided to limit the number of certificates issued—per week—for each registered domain. Registered domains are a limited resource with real costs, making them a natural and effective basis for rate limiting—one that mirrors the structure of the Web itself. Specifically, this approach targets the effective Top-Level Domain (eTLD), as defined by the Public Suffix List (PSL), plus one additional label to the left. For example, in new.blog.example.co.uk, the eTLD is .co.uk, making example.co.uk the eTLD+1.
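As a rough sketch of how eTLD+1 extraction works: a real implementation consults the full Public Suffix List, whereas the tiny suffix set below is a hypothetical stand-in for illustration only.

```python
# Toy eTLD+1 extraction. Real code consults the full Public Suffix List;
# this abbreviated suffix set is a stand-in for illustration only.
PUBLIC_SUFFIXES = {"com", "org", "uk", "co.uk"}

def etld_plus_one(hostname: str) -> str:
    labels = hostname.lower().split(".")
    # Scanning from the left finds the longest matching public suffix first.
    for i in range(1, len(labels)):
        if ".".join(labels[i:]) in PUBLIC_SUFFIXES:
            # The eTLD plus one label to its left is the registered domain.
            return ".".join(labels[i - 1:])
    raise ValueError(f"no public suffix found in {hostname!r}")
```

With this sketch, etld_plus_one("new.blog.example.co.uk") yields example.co.uk, matching the example above.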
Counting Events Was Easy
For each successfully issued certificate, we logged an entry in a table that recorded the registered domain, the issuance date, and other relevant details. To enforce rate limits, the system scanned this table, counted the rows matching a given registered domain within a specific time window, and compared the total to a configured threshold. This simple design formed the basis for all future rate limits.
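In simplified form, each check was a single COUNT over the issuance table. A minimal sketch, with SQLite standing in for MariaDB and a hypothetical schema and threshold:

```python
import sqlite3

# Sketch of the original count-rows approach. SQLite stands in for MariaDB;
# the table layout and the 50/week threshold are illustrative, not our
# actual schema or limits.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE certificates (registered_domain TEXT, issued_at REAL)")

WINDOW = 7 * 24 * 3600  # one week, in seconds
LIMIT = 50              # hypothetical per-domain weekly threshold

def record_issuance(domain: str, now: float) -> None:
    db.execute("INSERT INTO certificates VALUES (?, ?)", (domain, now))

def over_limit(domain: str, now: float) -> bool:
    # Count rows for this registered domain inside the sliding window.
    (count,) = db.execute(
        "SELECT COUNT(*) FROM certificates"
        " WHERE registered_domain = ? AND issued_at > ?",
        (domain, now - WINDOW),
    ).fetchone()
    return count >= LIMIT
```

Every check scans and counts matching rows—precisely the per-request cost that later became a bottleneck.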
Counting a Lot of Events Got Expensive
By 2019, we had added six new rate limits to protect our infrastructure as demand for certificates surged. Enforcing these limits required frequent scans of database tables to count recent matching events. These operations, especially on our heavily-used authorizations table, caused significant overhead, with reads outpacing all other tables—often by an order of magnitude.
Rate limit calculations were performed early in request processing and often. Counting rows in MariaDB, particularly for accounts with rate limit overrides, was inherently expensive and quickly became a scaling bottleneck.
Adding new limits required careful trade-offs. Decisions about whether to reuse existing schema, optimize indexes, or design purpose-built tables helped balance performance, complexity, and long-term maintainability.
Buying Runway — Offloading Reads
In late 2021, we updated our control plane and Boulder—our in-house CA software—to route most API reads, including rate limit checks, to database replicas. This reduced the load on the primary database and improved its overall health. At the same time, however, latency of rate limit checks during peak hours continued to rise, highlighting the limitations of scaling reads alone.
Sliding Windows Got Frustrating
Subscribers were frequently hitting rate limits unexpectedly, leaving them unable to request certificates for days. This issue stemmed from our use of relatively large rate limiting windows—most spanning a week. Subscribers could deplete their entire limit in just a few moments by repeating the same request, and find themselves locked out for the remainder of the week. This approach was inflexible and disruptive, causing unnecessary frustration and delays.
In early 2022, we patched the Duplicate Certificate limit to address this rigidity. Using a naive token-bucket approach, we allowed users to “earn back” requests incrementally, cutting the wait time—once rate limited—to about 1.4 days. The patch worked by fetching recent issuance timestamps and calculating the time between them to grant requests based on the time waited. This change also allowed us to include a Retry-After timestamp in rate limited responses. While this improved the user experience for this one limit, we understood it to be a temporary fix for a system in need of a larger overhaul.
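The earn-back logic looked roughly like the following sketch; the burst of five duplicates per week is an illustrative parameter, not our production configuration.

```python
# Naive token-bucket "earn back" for the Duplicate Certificate limit.
# With 5 duplicates per week, one request is earned back every
# week/5 = ~1.4 days. Parameter values are illustrative only.
BURST = 5
WINDOW = 7 * 24 * 3600.0   # one week, in seconds
REFILL = WINDOW / BURST    # seconds to earn back one request

def check(timestamps: list[float], now: float) -> float:
    """Return 0.0 if a request is allowed, otherwise the number of
    seconds to send back in a Retry-After header."""
    recent = sorted(t for t in timestamps if t > now - WINDOW)
    if len(recent) < BURST:
        return 0.0
    # Bucket exhausted: requests are earned back one REFILL interval at
    # a time, measured from the oldest issuance still in the window.
    retry_at = recent[0] + REFILL * (len(recent) - BURST + 1)
    return max(0.0, retry_at - now)
```

A subscriber who burns all five duplicates at once waits one refill interval (about 1.4 days) rather than a full week before the next request is granted.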
When a Problem Grows Large Enough, It Finds the Time for You
Setting aside time for a complete overhaul of our rate-limiting system wasn’t easy. Our development team, composed of just three permanent engineers, typically juggles several competing priorities. Yet by 2023, our flagging rate limits code had begun to endanger the reliability of our MariaDB databases.
Our authorizations table was now regularly read an order of magnitude more than any other. Individually identifying and deleting unnecessary rows—or specific values—had proved unworkable due to poor MariaDB delete performance. Storage engines like InnoDB must maintain indexes, foreign key constraints, and transaction logs for every deletion, which significantly increases overhead for concurrent transactions and leads to gruelingly slow deletes.
Our SRE team automated the cleanup of old rows for many tables using the PARTITION command, which worked well for bookkeeping and compliance data. Unfortunately, we couldn’t apply it to most of our purpose-built rate limit tables. These tables depend on ON DUPLICATE KEY UPDATE, a mechanism that requires the targeted column to be a unique index or primary key, while partitioning demands that the primary key be included in the partitioning key.
Indexes on these tables—such as those tracking requested hostnames—often grew larger than the tables themselves and, in some cases, exceeded the memory of our smaller staging environment databases, eventually forcing us to periodically wipe them entirely.
By late 2023, this cascading confluence of complexities required a reckoning. We set out to design a rate limiting system built for the future.
The Solution: Redis + GCRA
We designed a system from the ground up that combines Redis for storage and the Generic Cell Rate Algorithm (GCRA) for managing request flow.
Why Redis?
Our engineers were already familiar with Redis, having recently deployed it to cache and serve OCSP responses. Its high throughput and low latency made it a candidate for tracking rate limit state as well.
By moving this data from MariaDB to Redis, we could eliminate the need for ever-expanding, purpose-built tables and indexes, significantly reducing read and write pressure. Redis’s feature set made it a perfect fit for the task. Most rate limit data is ephemeral—after a few days, or sometimes just minutes, it becomes irrelevant unless the subscriber calls us again. Redis’s per-key Time-To-Live would allow us to expire this data the moment it was no longer needed.
Redis also supports atomic integer operations, enabling fast, reliable counter updates, even when increments occur concurrently. Its “set if not exist” functionality ensures efficient initialization of keys, while pipeline support allows us to get and set multiple keys in bulk. This combination of familiarity, speed, simplicity, and flexibility made Redis the natural choice.
Why GCRA?
The Generic Cell Rate Algorithm (GCRA) is a virtual scheduling algorithm originally designed for telecommunication networks to regulate traffic and prevent congestion. Unlike traditional sliding window approaches that work in fixed time blocks, GCRA enforces rate limits continuously, making it well-suited to our goals.
A rate limit in GCRA is defined by two parameters: the emission interval and the burst tolerance. The emission interval specifies the minimum time that must pass between consecutive requests to maintain a steady rate. For example, an emission interval of one second allows one request per second on average. The burst tolerance determines how much unused capacity can be drawn on to allow short bursts of requests beyond the steady rate.
When a request is received, GCRA compares the current time to the Theoretical Arrival Time (TAT), which indicates when the next request is allowed under the steady rate. If the current time is greater than or equal to the TAT, the request is permitted, and the TAT is updated by adding the emission interval. If the current time plus the burst tolerance is greater than or equal to the TAT, the request is also permitted. In this case, the TAT is updated by adding the emission interval, reducing the remaining burst capacity.
However, if the current time plus the burst tolerance is less than the TAT, the request exceeds the rate limit and is denied. Conveniently, the difference between the TAT and the current time can then be returned to the subscriber in a Retry-After header, informing their client exactly how long to wait before trying again.
To illustrate, consider a rate limit of one request per second (emission interval = 1s) with a burst tolerance of three requests. Up to three requests can arrive back-to-back, but subsequent requests will be delayed until “now” catches up to the TAT, ensuring that the average rate over time remains one request per second.
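The decision procedure above can be sketched as follows, using the illustrative one-per-second limit. Note that a burst tolerance of three requests corresponds to two extra emission intervals of allowance, since the first request in a burst is covered by the steady rate itself.

```python
# GCRA sketch: per-client state is a single Theoretical Arrival Time (TAT).
# Limits are the illustrative ones from the text: 1 request/second steady
# state, with up to 3 requests allowed back-to-back.
EMISSION_INTERVAL = 1.0
BURST_TOLERANCE = 2 * EMISSION_INTERVAL  # (burst - 1) emission intervals

def gcra_check(tat: float, now: float) -> tuple[bool, float, float]:
    """Return (allowed, new_tat, retry_after_seconds). Pass tat=0.0 for a
    client that has never been seen before."""
    tat = max(tat, now)  # an idle client's TAT never lags behind "now"
    if now + BURST_TOLERANCE >= tat:
        # Permitted; consuming the request pushes the TAT forward.
        return True, tat + EMISSION_INTERVAL, 0.0
    # Denied; the client may retry once "now" catches up far enough.
    return False, tat, tat - now - BURST_TOLERANCE
```

In production the TAT would live in Redis as a single Unix timestamp, updated atomically; here it is just a float threaded through calls.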
What sets GCRA apart is its ability to automatically refill capacity gradually and continuously. Unlike sliding windows, where users must wait for an entire time block to reset, GCRA allows users to retry as soon as enough time has passed to maintain the steady rate. This dynamic pacing reduces frustration and provides a smoother, more predictable experience for subscribers.
GCRA is also storage and computationally efficient. It requires tracking only the TAT—stored as a single Unix timestamp—and performing simple arithmetic to enforce limits. This lightweight design allows it to scale to handle billions of requests, with minimal computational and memory overhead.
The Results: Faster, Smoother, and More Scalable
The transition to Redis and GCRA brought immediate, measurable improvements. We cut database load, improved response times, and delivered consistent performance even during periods of peak traffic. Subscribers now experience smoother, more predictable behavior, while the system’s increased permissiveness allows for certificates that the previous approach would have delayed—all achieved without sacrificing scalability or fairness.
Rate Limit Check Latency
Check latency is the extra time added to each request while verifying rate limit compliance. Under the old MariaDB-based system, these checks slowed noticeably during peak traffic, when database contention caused significant delays. Our new Redis-based system dramatically reduced this overhead. The high-traffic “new-order” endpoint saw the greatest improvement, while the “new-account” endpoint—though considerably lighter in traffic—also benefited, especially callers with IPv6 addresses. These results show that our subscribers now experience consistent response times, even under peak load.
Database Health
Our once strained database servers are now operating with ample headroom. In total, MariaDB operations have dropped by 80%, improving responsiveness, reducing contention, and freeing up resources for mission-critical issuance workflows.
Buffer pool requests have decreased by more than 50%, improving caching efficiency and reducing overall memory pressure.
Reads of the authorizations table—a notorious bottleneck—have dropped by over 99%. Previously, this table outpaced all others by more than two orders of magnitude; now it ranks second (the green line below), just narrowly surpassing our third most-read table.
Tracking Zombie Clients
In late 2024, we turned our new rate limiting system toward a longstanding challenge: “zombie clients.” These requesters repeatedly attempt to issue certificates but fail, often because of expired domains or misconfigured DNS records. Together, they generate nearly half of all order attempts yet almost never succeed. We were able to build on this new infrastructure to record consecutive ACME challenge failures by account/domain pair and automatically “pause” issuance for these problematic pairs. The result has been a considerable reduction in resource consumption, freeing database and network capacity without disrupting legitimate traffic.
Scalability on Redis
Before deploying the limits to track zombie clients, we maintained just over 12.6 million unique TATs across several Redis databases. Within 24 hours, that number more than doubled to 26 million, and by the end of the week, it peaked at over 30 million. Yet, even with this sharp increase, there was no noticeable impact on rate limit responsiveness. That’s all we’ll share for now about zombie clients—there’s plenty more to unpack, but we’ll save those insights and figures for a future blog post.
What’s Next?
Scaling our rate limits to keep pace with the growth of the Web is a huge achievement, but there’s still more to do. In the near term, many of our other ACME endpoints rely on load balancers to enforce per-IP limits, which works but gives us little control over the feedback provided to subscribers. We’re looking to deploy this new infrastructure across those endpoints as well. Looking further ahead, we’re exploring how we might redefine our rate limits now that we’re no longer constrained by a system that simply counts events between two points in time.
By adopting Redis and GCRA, we’ve built a flexible, efficient rate limit system that promotes fair usage and enables our infrastructure to handle ever-growing demand. We’ll keep adapting to the ever-evolving Web while honoring our primary goal: giving people the certificates they need, for free, in the most user-friendly way we can.
Since its inception, Let’s Encrypt has been sending expiration notification emails to subscribers who have provided an email address to us. We will be ending this service on June 4, 2025. The decision to end this service is the result of the following factors:
- Over the past 10 years more and more of our subscribers have been able to put reliable automation into place for certificate renewal.
- Providing expiration notification emails means that we have to retain millions of email addresses connected to issuance records. As an organization that values privacy, removing this requirement is important to us.
- Providing expiration notifications costs Let’s Encrypt tens of thousands of dollars per year, money that we believe can be better spent on other aspects of our infrastructure.
- Providing expiration notifications adds complexity to our infrastructure, which takes time and attention to manage and increases the likelihood of mistakes being made. Over the long term, particularly as we add support for new service components, we need to manage overall complexity by phasing out system components that can no longer be justified.
For those who would like to continue receiving expiration notifications, we recommend using a third-party service such as Red Sift Certificates Lite (formerly Hardenize). Red Sift’s monitoring service provides expiration emails free of charge for up to 250 certificates. More monitoring options can be found here.
While we will be minimizing the email addresses we retain connected to issuance data, you can opt in to receive other emails. We’ll keep you informed about technical updates, and other news about Let’s Encrypt and our parent nonprofit, ISRG, based on the preferences you choose. You can sign up for our email lists below:
This year we will continue to pursue our commitment to improving the security of the Web PKI by introducing the option to get certificates with six-day lifetimes (“short-lived certificates”). We will also add support for IP addresses in addition to domain names. Our longer-lived certificates, which currently have a lifetime of 90 days, will continue to be available alongside our six-day offering. Subscribers will be able to opt in to short-lived certificates via a certificate profile mechanism being added to our ACME API.
Shorter Certificate Lifetimes Are Good for Security
When the private key associated with a certificate is compromised, the recommendation has always been to have the certificate revoked so that people will know not to use it. Unfortunately, certificate revocation doesn’t work very well. This means that certificates with compromised keys (or other issues) may continue to be used until they expire. The longer the lifetime of the certificate, the longer the potential for use of a problematic certificate.
The primary advantage of short-lived certificates is that they greatly reduce the potential compromise window because they expire relatively quickly. This reduces the need for certificate revocation, which has historically been unreliable. Our six-day certificates will not include OCSP or CRL URLs. Additionally, short-lived certificates practically require automation, and we believe that automating certificate issuance is important for security.
IP Address Support For Securing Additional Use Cases
We will support including IP addresses as Subject Alternative Names in our six-day certificates. This will enable secure TLS connections, with publicly trusted certificates, to services made available via IP address, without the need for a domain name.
Validation for IP addresses will work much the same as validation for domain names, though validation will be restricted to the http-01 and tls-alpn-01 challenge types. The dns-01 challenge type will not be available because the DNS is not involved in validating IP addresses. Additionally, there is no mechanism to check CAA records for IP addresses.
Timeline
We expect to issue the first valid short-lived certificates to ourselves in February of this year. Around April we will enable short-lived certificates for a small set of early adopting subscribers. We hope to make short-lived certificates generally available by the end of 2025.
The earliest short-lived certificates we issue may not support IP addresses, but we intend to enable IP address support by the time short-lived certificates reach general availability.
How To Get Six-Day and IP Address Certificates
Once short-lived certificates are an option for you, you’ll need to use an ACME client that supports ACME certificate profiles and select the short-lived certificate profile (the name of which will be published at a later date).
Once IP address support is an option for you, requesting an IP address in a certificate will automatically select a short-lived certificate profile.
Looking Ahead
The best way to prepare to take advantage of short-lived certificates is to make sure your ACME client is reliably renewing certificates in an automated fashion. If that’s working well then there should be no costs to switching to short-lived certificates.
If you have questions or comments about our plans, feel free to let us know on our community forums.
Thu, 16 Jan 2025 00:00:00 +0000
Announcing Certificate Profile Selection
Earlier this year we announced our intention to introduce short-lived certificates with lifetimes of six days as an option for our subscribers. Yesterday we issued our first short-lived certificate. You can see the certificate at the bottom of our post, or here thanks to Certificate Transparency logs. We issued it to ourselves and then immediately revoked it so we can observe the certificate’s whole lifecycle. This is the first step towards making short-lived certificates available to all subscribers.
The next step is for us to make short-lived certificates available to a small set of our subscribers so we can make sure our systems scale as expected prior to general availability. We expect this next phase to begin during Q2 of this year.
We expect short-lived certificates to be generally available by the end of this year.
How To Get Six-Day Certificates
Once short-lived certificates are an option for you, you’ll need to use an ACME client that supports ACME certificate profiles and select the short-lived certificate profile (“shortlived”). The lego
client recently added this functionality.
In the meantime, the best way to prepare to take advantage of short-lived certificates is to make sure your ACME client is reliably renewing certificates in an automated fashion. If that’s working well then there should be no costs to switching to short-lived certificates.
You’ll also want to be sure your ACME client is running frequently - both for the sake of renewing short-lived certificates and so as to take advantage of ACME Renewal Information (ARI). ARI allows Let’s Encrypt to notify your client if it should renew early for some reason. ARI checks should happen at least once per day, and short-lived certificates should be renewed every two to three days, so we recommend having your client run at least once per day.
Shorter Certificate Lifetimes Are Good for Security
When the private key associated with a certificate is compromised, the recommendation has always been to have the certificate revoked so that people will know not to use it. Unfortunately, certificate revocation doesn’t work very well. This means that certificates with compromised keys (or other issues) may continue to be used until they expire. The longer the lifetime of the certificate, the longer the potential for use of a problematic certificate.
The primary advantage of short-lived certificates is that they greatly reduce the potential compromise window because they expire relatively quickly. This reduces the need for certificate revocation, which has historically been unreliable. Our six-day certificates will not include OCSP or CRL URLs. Additionally, short-lived certificates practically require automation, and we believe that automating certificate issuance is important for security.
Questions
If you have questions or comments about our plans, feel free to let us know on our community forums.
We’d like to thank Open Technology Fund for supporting this work.
Our First 6-Day Certificate
PEM format:
-----BEGIN CERTIFICATE-----
MIIDSzCCAtGgAwIBAgISA7CwFcGk4mQWEXMacRtxHeDvMAoGCCqGSM49BAMDMDIx
CzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MQswCQYDVQQDEwJF
NjAeFw0yNTAyMTkxNzMwMDFaFw0yNTAyMjYwOTMwMDBaMAAwWTATBgcqhkjOPQIB
BggqhkjOPQMBBwNCAAQoSItt2V1aocI5dxrKR8iLfmm0KiVvOhiwKByzu2kLeC7C
0BdfAgtwdICdkuEhAXokhXLq6DNZZgmh5T4flVwZo4IB9zCCAfMwDgYDVR0PAQH/
BAQDAgeAMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwHwYDVR0j
BBgwFoAUkydGmAOpUWiOmNbEQkjbI79YlNIwVQYIKwYBBQUHAQEESTBHMCEGCCsG
AQUFBzABhhVodHRwOi8vZTYuby5sZW5jci5vcmcwIgYIKwYBBQUHMAKGFmh0dHA6
Ly9lNi5pLmxlbmNyLm9yZy8wKAYDVR0RAQH/BB4wHIIaaGVsbG93b3JsZC5sZXRz
ZW5jcnlwdC5vcmcwEwYDVR0gBAwwCjAIBgZngQwBAgEwggEFBgorBgEEAdZ5AgQC
BIH2BIHzAPEAdgDM+w9qhXEJZf6Vm1PO6bJ8IumFXA2XjbapflTA/kwNsAAAAZUf
d/zOAAAEAwBHMEUCIFNd51TfSNiJrO+294t49C5ANc4oC7gTUzf7xnlNlhKsAiEA
wi5hfiC9SsKLxlTQ0sctUxhLmdYh40r6ECWQS/yWw2AAdwDgkrP8DB3I52g2H95h
uZZNClJ4GYpy1nLEsE2lbW9UBAAAAZUfd/0TAAAEAwBIMEYCIQCs2NuZIUIloOaH
1t9eXDKb8bjoWESBPsK4i2BxMvEIswIhAOMNaQNyr1YkzrcNUz15qGV0oVLg5BJN
+ikWxXOdcRHFMAoGCCqGSM49BAMDA2gAMGUCMDANqy7G09AIwzXcd7SNl7uFwhC+
xlfduvp1PeEDHc/FA9K3mRYkGXuKtzNdOh7wcAIxALjEMDmBQiwXbB447oGkaZAe
0rqxA3EtNV5wj0obeObluj/NgUsVEG9OqiBIoggFRw==
-----END CERTIFICATE-----
openssl x509 -text output:
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
03:b0:b0:15:c1:a4:e2:64:16:11:73:1a:71:1b:71:1d:e0:ef
Signature Algorithm: ecdsa-with-SHA384
Issuer: C=US, O=Let's Encrypt, CN=E6
Validity
Not Before: Feb 19 17:30:01 2025 GMT
Not After : Feb 26 09:30:00 2025 GMT
Subject:
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
04:28:48:8b:6d:d9:5d:5a:a1:c2:39:77:1a:ca:47:
c8:8b:7e:69:b4:2a:25:6f:3a:18:b0:28:1c:b3:bb:
69:0b:78:2e:c2:d0:17:5f:02:0b:70:74:80:9d:92:
e1:21:01:7a:24:85:72:ea:e8:33:59:66:09:a1:e5:
3e:1f:95:5c:19
ASN1 OID: prime256v1
NIST CURVE: P-256
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature
X509v3 Extended Key Usage:
TLS Web Server Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Authority Key Identifier:
93:27:46:98:03:A9:51:68:8E:98:D6:C4:42:48:DB:23:BF:58:94:D2
Authority Information Access:
OCSP - URI:http://e6.o.lencr.org
CA Issuers - URI:http://e6.i.lencr.org/
X509v3 Subject Alternative Name: critical
DNS:helloworld.letsencrypt.org
X509v3 Certificate Policies:
Policy: 2.23.140.1.2.1
CT Precertificate SCTs:
Signed Certificate Timestamp:
Version : v1 (0x0)
Log ID : CC:FB:0F:6A:85:71:09:65:FE:95:9B:53:CE:E9:B2:7C:
22:E9:85:5C:0D:97:8D:B6:A9:7E:54:C0:FE:4C:0D:B0
Timestamp : Feb 19 18:28:32.078 2025 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:45:02:20:53:5D:E7:54:DF:48:D8:89:AC:EF:B6:F7:
8B:78:F4:2E:40:35:CE:28:0B:B8:13:53:37:FB:C6:79:
4D:96:12:AC:02:21:00:C2:2E:61:7E:20:BD:4A:C2:8B:
C6:54:D0:D2:C7:2D:53:18:4B:99:D6:21:E3:4A:FA:10:
25:90:4B:FC:96:C3:60
Signed Certificate Timestamp:
Version : v1 (0x0)
Log ID : E0:92:B3:FC:0C:1D:C8:E7:68:36:1F:DE:61:B9:96:4D:
0A:52:78:19:8A:72:D6:72:C4:B0:4D:A5:6D:6F:54:04
Timestamp : Feb 19 18:28:32.147 2025 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:46:02:21:00:AC:D8:DB:99:21:42:25:A0:E6:87:D6:
DF:5E:5C:32:9B:F1:B8:E8:58:44:81:3E:C2:B8:8B:60:
71:32:F1:08:B3:02:21:00:E3:0D:69:03:72:AF:56:24:
CE:B7:0D:53:3D:79:A8:65:74:A1:52:E0:E4:12:4D:FA:
29:16:C5:73:9D:71:11:C5
Signature Algorithm: ecdsa-with-SHA384
Signature Value:
30:65:02:30:30:0d:ab:2e:c6:d3:d0:08:c3:35:dc:77:b4:8d:
97:bb:85:c2:10:be:c6:57:dd:ba:fa:75:3d:e1:03:1d:cf:c5:
03:d2:b7:99:16:24:19:7b:8a:b7:33:5d:3a:1e:f0:70:02:31:
00:b8:c4:30:39:81:42:2c:17:6c:1e:38:ee:81:a4:69:90:1e:
d2:ba:b1:03:71:2d:35:5e:70:8f:4a:1b:78:e6:e5:ba:3f:cd:
81:4b:15:10:6f:4e:aa:20:48:a2:08:05:47

2025 marks ten years of Let’s Encrypt. Already this year we’ve taken steps to continue to deliver on our values of user privacy, efficiency, and innovation, all with the intent of continuing to deliver free TLS certificates to as many people as possible; to deliver encryption for everybody.
And while we’re excited about the technical progress we’ll make this year, we’re also going to celebrate this tenth anniversary by highlighting the people around the world who make our impact possible. It’s no small village.
From a community forum that has provided free technical support, to our roster of sponsors who provide vital funding, to the thousands of individual supporters who contribute financially to Let’s Encrypt each year, free TLS at Internet scale works because people have supported it year in, year out, for ten years.
Each month we’ll highlight a different set of people behind our “everybody.” Who do you want to see us highlight? What use cases of Let’s Encrypt have you seen that amazed you? What about our work do you hope we’ll continue or improve as we go forward? Let us know on LinkedIn, or drop a note to outreach@letsencrypt.org.
Encryption for Everybody is our unofficial tagline for this tenth anniversary year. What we love about it is that, yes, it captures our commitment to ensuring anyone around the world can easily get a cert for free. But more importantly, it captures the reality that technical innovation won’t work without people believing in it and supporting it. We’re grateful that, for ten years (and counting!), our community of supporters has made an impact on the lives of billions of Internet users—an impact that’s made the Web more secure and privacy-respecting for everybody, everywhere.
Internet Security Research Group (ISRG) is the parent organization of Let’s Encrypt, Prossimo, and Divvi Up. ISRG is a 501(c)(3) nonprofit. If you’d like to support our work, please consider getting involved, donating, or encouraging your company to become a sponsor.
Let’s Encrypt protects a vast portion of the Web by providing TLS certificates to over 550 million websites—a figure that has grown by 42% in the last year alone. We currently issue over 340,000 certificates per hour. To manage this immense traffic and maintain responsiveness under high demand, our infrastructure relies on rate limiting. In 2015, we introduced our first rate limiting system, built on MariaDB. It evolved alongside our rapidly growing service but eventually revealed its limits: straining database servers, forcing long reset times on subscribers, and slowing down every request.
We needed a solution built for the future—one that could scale with demand, reduce the load on MariaDB, and adapt to real-world subscriber request patterns. The result was a new rate limiting system powered by Redis and a proven virtual scheduling algorithm from the mid-90s: efficient, scalable, and capable of handling over a billion active certificates.
Rate Limiting a Free Service is Hard
In 2015, Let’s Encrypt was in early preview, and we faced a unique challenge. We were poised to become incredibly popular, offering certificates freely and without requiring contact information or email verification. Ensuring fair usage and preventing abuse without traditional safeguards demanded an atypical approach to rate limiting.
We decided to limit the number of certificates issued—per week—for each registered domain. Registered domains are a limited resource with real costs, making them a natural and effective basis for rate limiting—one that mirrors the structure of the Web itself. Specifically, this approach targets the effective Top-Level Domain (eTLD), as defined by the Public Suffix List (PSL), plus one additional label to the left. For example, in new.blog.example.co.uk, the eTLD is .co.uk, making example.co.uk the eTLD+1.
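As a sketch of how an eTLD+1 lookup works, here is a minimal Python illustration. The suffix set and function name are purely for illustration; a real implementation consults the full, regularly updated Public Suffix List (typically via a PSL library).

```python
# Illustrative subset only; the real Public Suffix List contains
# thousands of entries and is updated regularly.
PUBLIC_SUFFIXES = {"com", "org", "net", "uk", "co.uk"}

def etld_plus_one(domain):
    """Return the registered domain (eTLD+1) for a hostname."""
    labels = domain.lower().rstrip(".").split(".")
    # Scanning from the left, the first match is the longest
    # (most specific) public suffix, i.e. the eTLD.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in PUBLIC_SUFFIXES:
            if i == 0:
                raise ValueError(f"{domain} is itself a public suffix")
            # One additional label to the left gives the eTLD+1.
            return ".".join(labels[i - 1:])
    raise ValueError(f"no public suffix matches {domain}")

print(etld_plus_one("new.blog.example.co.uk"))  # example.co.uk
```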
Counting Events Was Easy
For each successfully issued certificate, we logged an entry in a table that recorded the registered domain, the issuance date, and other relevant details. To enforce rate limits, the system scanned this table, counted the rows matching a given registered domain within a specific time window, and compared the total to a configured threshold. This simple design formed the basis for all future rate limits.
Counting a Lot of Events Got Expensive
By 2019, we had added six new rate limits to protect our infrastructure as demand for certificates surged. Enforcing these limits required frequent scans of database tables to count recent matching events. These operations, especially on our heavily-used authorizations table, caused significant overhead, with reads outpacing all other tables—often by an order of magnitude.
Rate limit calculations were performed early in request processing and often. Counting rows in MariaDB, particularly for accounts with rate limit overrides, was inherently expensive and quickly became a scaling bottleneck.
Adding new limits required careful trade-offs. Decisions about whether to reuse existing schema, optimize indexes, or design purpose-built tables helped balance performance, complexity, and long-term maintainability.
Buying Runway — Offloading Reads
In late 2021, we updated our control plane and Boulder—our in-house CA software—to route most API reads, including rate limit checks, to database replicas. This reduced the load on the primary database and improved its overall health. At the same time, however, latency of rate limit checks during peak hours continued to rise, highlighting the limitations of scaling reads alone.
Sliding Windows Got Frustrating
Subscribers were frequently hitting rate limits unexpectedly, leaving them unable to request certificates for days. This issue stemmed from our use of relatively large rate limiting windows—most spanning a week. Subscribers could deplete their entire limit in just a few moments by repeating the same request, and find themselves locked out for the remainder of the week. This approach was inflexible and disruptive, causing unnecessary frustration and delays.
In early 2022, we patched the Duplicate Certificate limit to address this rigidity. Using a naive token-bucket approach, we allowed users to “earn back” requests incrementally, cutting the wait time—once rate limited—to about 1.4 days. The patch worked by fetching recent issuance timestamps and calculating the time between them to grant requests based on the time waited. This change also allowed us to include a Retry-After timestamp in rate limited responses. While this improved the user experience for this one limit, we understood it to be a temporary fix for a system in need of a larger overhaul.
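The roughly 1.4-day figure falls out of the limit’s parameters. Assuming the Duplicate Certificate limit of 5 certificates per 7-day window (its long-documented value, not stated in this post), earning capacity back evenly means one request is regained per window/limit interval:

```python
WINDOW_DAYS = 7  # Duplicate Certificate limit window
LIMIT = 5        # certificates allowed per window (assumed documented value)

# Once fully rate limited, a subscriber earns back one request every
# WINDOW_DAYS / LIMIT days under an even token-bucket refill.
earn_back_days = WINDOW_DAYS / LIMIT
print(earn_back_days)  # 1.4
```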
When a Problem Grows Large Enough, It Finds the Time for You
Setting aside time for a complete overhaul of our rate-limiting system wasn’t easy. Our development team, composed of just three permanent engineers, typically juggles several competing priorities. Yet by 2023, our flagging rate limits code had begun to endanger the reliability of our MariaDB databases.
Our authorizations table was now regularly read an order of magnitude more than any other. Individually identifying and deleting unnecessary rows—or specific values—had proved unworkable due to poor MariaDB delete performance. Storage engines like InnoDB must maintain indexes, foreign key constraints, and transaction logs for every deletion, which significantly increases overhead for concurrent transactions and leads to gruelingly slow deletes.
Our SRE team automated the cleanup of old rows for many tables using the PARTITION command, which worked well for bookkeeping and compliance data. Unfortunately, we couldn’t apply it to most of our purpose-built rate limit tables. These tables depend on ON DUPLICATE KEY UPDATE, a mechanism that requires the targeted column to be a unique index or primary key, while partitioning demands that the primary key be included in the partitioning key.
Indexes on these tables—such as those tracking requested hostnames—often grew larger than the tables themselves and, in some cases, exceeded the memory of our smaller staging environment databases, eventually forcing us to periodically wipe them entirely.
By late 2023, this cascading confluence of complexities required a reckoning. We set out to design a rate limiting system built for the future.
The Solution: Redis + GCRA
We designed a system from the ground up that combines Redis for storage and the Generic Cell Rate Algorithm (GCRA) for managing request flow.
Why Redis?
Our engineers were already familiar with Redis, having recently deployed it to cache and serve OCSP responses. Its high throughput and low latency made it a candidate for tracking rate limit state as well.
By moving this data from MariaDB to Redis, we could eliminate the need for ever-expanding, purpose-built tables and indexes, significantly reducing read and write pressure. Redis’s feature set made it a perfect fit for the task. Most rate limit data is ephemeral—after a few days, or sometimes just minutes, it becomes irrelevant unless the subscriber calls us again. Redis’s per-key Time-To-Live would allow us to expire this data the moment it was no longer needed.
Redis also supports atomic integer operations, enabling fast, reliable counter updates, even when increments occur concurrently. Its “set if not exist” functionality ensures efficient initialization of keys, while pipeline support allows us to get and set multiple keys in bulk. This combination of familiarity, speed, simplicity, and flexibility made Redis the natural choice.
Why GCRA?
The Generic Cell Rate Algorithm (GCRA) is a virtual scheduling algorithm originally designed for telecommunication networks to regulate traffic and prevent congestion. Unlike traditional sliding window approaches that work in fixed time blocks, GCRA enforces rate limits continuously, making it well-suited to our goals.
A rate limit in GCRA is defined by two parameters: the emission interval and the burst tolerance. The emission interval specifies the minimum time that must pass between consecutive requests to maintain a steady rate. For example, an emission interval of one second allows one request per second on average. The burst tolerance determines how much unused capacity can be drawn on to allow short bursts of requests beyond the steady rate.
When a request is received, GCRA compares the current time to the Theoretical Arrival Time (TAT), which indicates when the next request is allowed under the steady rate. If the current time is greater than or equal to the TAT, the request is permitted, and the TAT is updated by adding the emission interval. If the current time plus the burst tolerance is greater than or equal to the TAT, the request is also permitted. In this case, the TAT is updated by adding the emission interval, reducing the remaining burst capacity.
However, if the current time plus the burst tolerance is less than the TAT, the request exceeds the rate limit and is denied. Conveniently, the difference between the TAT and the current time can then be returned to the subscriber in a Retry-After header, informing their client exactly how long to wait before trying again.
To illustrate, consider a rate limit of one request per second (emission interval = 1s) with a burst tolerance of three requests. Up to three requests can arrive back-to-back, but subsequent requests will be delayed until “now” catches up to the TAT, ensuring that the average rate over time remains one request per second.
What sets GCRA apart is its ability to automatically refill capacity gradually and continuously. Unlike sliding windows, where users must wait for an entire time block to reset, GCRA allows users to retry as soon as enough time has passed to maintain the steady rate. This dynamic pacing reduces frustration and provides a smoother, more predictable experience for subscribers.
GCRA is also storage and computationally efficient. It requires tracking only the TAT—stored as a single Unix timestamp—and performing simple arithmetic to enforce limits. This lightweight design allows it to scale to handle billions of requests, with minimal computational and memory overhead.
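The decision procedure described above fits in a few lines. The following is a simplified in-memory Python sketch (Boulder’s production implementation is written in Go and stores the TAT in Redis); here the burst tolerance is expressed in seconds, so a tolerance of (N-1) emission intervals permits N back-to-back requests:

```python
def gcra_check(tat, now, emission_interval, burst_tolerance):
    """One GCRA decision against the stored Theoretical Arrival Time.

    All values are in seconds (e.g. Unix timestamps). Returns a tuple
    (allowed, new_tat, retry_after).
    """
    if now + burst_tolerance >= tat:
        # Permitted: advance the TAT by one emission interval. Using
        # max() keeps idle periods from accumulating unlimited credit.
        return True, max(tat, now) + emission_interval, 0.0
    # Denied: the shortfall is exactly the Retry-After to report.
    return False, tat, tat - burst_tolerance - now

# One request per second on average, with bursts of up to three:
tat, results = 0.0, []
for _ in range(4):  # four back-to-back requests at t=0
    allowed, tat, retry_after = gcra_check(tat, 0.0, 1.0, 2.0)
    results.append(allowed)
print(results, retry_after)  # [True, True, True, False] 1.0
```

The fourth request is denied with a Retry-After of one second, after which “now” has caught up enough to sustain the steady rate again.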
The Results: Faster, Smoother, and More Scalable
The transition to Redis and GCRA brought immediate, measurable improvements. We cut database load, improved response times, and delivered consistent performance even during periods of peak traffic. Subscribers now experience smoother, more predictable behavior, while the system’s increased permissiveness allows for certificates that the previous approach would have delayed—all achieved without sacrificing scalability or fairness.
Rate Limit Check Latency
Check latency is the extra time added to each request while verifying rate limit compliance. Under the old MariaDB-based system, these checks slowed noticeably during peak traffic, when database contention caused significant delays. Our new Redis-based system dramatically reduced this overhead. The high-traffic “new-order” endpoint saw the greatest improvement, while the “new-account” endpoint—though considerably lighter in traffic—also benefited, especially callers with IPv6 addresses. These results show that our subscribers now experience consistent response times, even under peak load.
Database Health
Our once strained database servers are now operating with ample headroom. In total, MariaDB operations have dropped by 80%, improving responsiveness, reducing contention, and freeing up resources for mission-critical issuance workflows.
Buffer pool requests have decreased by more than 50%, improving caching efficiency and reducing overall memory pressure.
Reads of the authorizations table—a notorious bottleneck—have dropped by over 99%. Previously, this table outpaced all others by more than two orders of magnitude; now it ranks second (the green line below), just narrowly surpassing our third most-read table.
Tracking Zombie Clients
In late 2024, we turned our new rate limiting system toward a longstanding challenge: “zombie clients.” These requesters repeatedly attempt to issue certificates but fail, often because of expired domains or misconfigured DNS records. Together, they generate nearly half of all order attempts yet almost never succeed. We were able to build on this new infrastructure to record consecutive ACME challenge failures by account/domain pair and automatically “pause” this problematic issuance. The result has been a considerable reduction in resource consumption, freeing database and network capacity without disrupting legitimate traffic.
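A minimal sketch of that pause mechanism follows. The threshold is hypothetical (the post does not state the real cutoff), and in production unpausing is a separate self-service step rather than anything automatic:

```python
from collections import defaultdict

class ZombiePauser:
    """Track consecutive ACME challenge failures per account/domain
    pair and pause issuance once a streak crosses a threshold."""

    def __init__(self, threshold=5):      # threshold is hypothetical
        self.threshold = threshold
        self.streaks = defaultdict(int)   # (account, domain) -> failures
        self.paused = set()

    def record_challenge(self, account, domain, succeeded):
        key = (account, domain)
        if succeeded:
            self.streaks.pop(key, None)   # any success resets the streak
        else:
            self.streaks[key] += 1
            if self.streaks[key] >= self.threshold:
                self.paused.add(key)      # stop wasting validation work

    def is_paused(self, account, domain):
        return (account, domain) in self.paused
```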
Scalability on Redis
Before deploying the limits to track zombie clients, we maintained just over 12.6 million unique TATs across several Redis databases. Within 24 hours, that number more than doubled to 26 million, and by the end of the week, it peaked at over 30 million. Yet, even with this sharp increase, there was no noticeable impact on rate limit responsiveness. That’s all we’ll share for now about zombie clients—there’s plenty more to unpack, but we’ll save those insights and figures for a future blog post.
What’s Next?
Scaling our rate limits to keep pace with the growth of the Web is a huge achievement, but there’s still more to do. In the near term, many of our other ACME endpoints rely on load balancers to enforce per-IP limits, which works but gives us little control over the feedback provided to subscribers. We’re looking to deploy this new infrastructure across those endpoints as well. Looking further ahead, we’re exploring how we might redefine our rate limits now that we’re no longer constrained by a system that simply counts events between two points in time.
By adopting Redis and GCRA, we’ve built a flexible, efficient rate limit system that promotes fair usage and enables our infrastructure to handle ever-growing demand. We’ll keep adapting to the ever-evolving Web while honoring our primary goal: giving people the certificates they need, for free, in the most user-friendly way we can.
Since its inception, Let’s Encrypt has been sending expiration notification emails to subscribers that have provided an email address to us. We will be ending this service on June 4, 2025. The decision to end this service is the result of the following factors:
- Over the past 10 years more and more of our subscribers have been able to put reliable automation into place for certificate renewal.
- Providing expiration notification emails means that we have to retain millions of email addresses connected to issuance records. As an organization that values privacy, removing this requirement is important to us.
- Providing expiration notifications costs Let’s Encrypt tens of thousands of dollars per year, money that we believe can be better spent on other aspects of our infrastructure.
- Providing expiration notifications adds complexity to our infrastructure, which takes time and attention to manage and increases the likelihood of mistakes being made. Over the long term, particularly as we add support for new service components, we need to manage overall complexity by phasing out system components that can no longer be justified.
For those who would like to continue receiving expiration notifications, we recommend using a third-party service such as Red Sift Certificates Lite (formerly Hardenize). Red Sift’s monitoring service, which provides expiration emails, is free of charge for up to 250 certificates. More monitoring options can be found here.
While we will be minimizing the email addresses we retain connected to issuance data, you can opt in to receive other emails. We’ll keep you informed about technical updates, and other news about Let’s Encrypt and our parent nonprofit, ISRG, based on the preferences you choose. You can sign up for our email lists below:
This year we will continue to pursue our commitment to improving the security of the Web PKI by introducing the option to get certificates with six-day lifetimes (“short-lived certificates”). We will also add support for IP addresses in addition to domain names. Our longer-lived certificates, which currently have a lifetime of 90 days, will continue to be available alongside our six-day offering. Subscribers will be able to opt in to short-lived certificates via a certificate profile mechanism being added to our ACME API.
IP Address Support For Securing Additional Use Cases
We will support including IP addresses as Subject Alternative Names in our six-day certificates. This will enable secure TLS connections, with publicly trusted certificates, to services made available via IP address, without the need for a domain name.
Validation for IP addresses will work much the same as validation for domain names, though validation will be restricted to the http-01 and tls-alpn-01 challenge types. The dns-01 challenge type will not be available because the DNS is not involved in validating IP addresses. Additionally, there is no mechanism to check CAA records for IP addresses.
Timeline
We expect to issue the first valid short-lived certificates to ourselves in February of this year. Around April we will enable short-lived certificates for a small set of early adopting subscribers. We hope to make short-lived certificates generally available by the end of 2025.
The earliest short-lived certificates we issue may not support IP addresses, but we intend to enable IP address support by the time short-lived certificates reach general availability.
How To Get Six-Day and IP Address Certificates
Once short-lived certificates are an option for you, you’ll need to use an ACME client that supports ACME certificate profiles and select the short-lived certificate profile (the name of which will be published at a later date).
Once IP address support is an option for you, requesting an IP address in a certificate will automatically select a short-lived certificate profile.
Looking Ahead
The best way to prepare to take advantage of short-lived certificates is to make sure your ACME client is reliably renewing certificates in an automated fashion. If that’s working well then there should be no costs to switching to short-lived certificates.
If you have questions or comments about our plans, feel free to let us know on our community forums.
We are excited to announce a new extension to Let’s Encrypt’s implementation of the ACME protocol that we are calling “profile selection.” This new feature will allow site operators and ACME clients to opt in to the next evolution of Let’s Encrypt.
As of today, the staging environment is advertising a new field in its directory resource:
GET /directory HTTP/1.1
HTTP/1.1 200 OK
Content-Type: application/json
{
  ...
  "meta": {
    "profiles": {
      "classic": "The same profile you're accustomed to",
      "tlsserver": "https://letsencrypt.org/2025/01/09/acme-profiles/"
    }
  }
}
Here, the keys are the names of new “profiles”, and the values are human-readable descriptions of those profiles. A profile describes a collection of attributes about the certificate that will be issued, such as what extensions it will contain, how long it will be valid for, and more.
For example, the “classic” profile is exactly what it sounds like: certificates issued under the classic profile will look exactly the same as those that we have always issued, valid for 90 days.
But certificates issued under the “tlsserver” profile will have a number of differences tailored specifically towards TLS server usage:
- No Common Name field (including a CN has been NOT RECOMMENDED by the Baseline Requirements for several years now)
- No Subject Key Identifier (including a SKID is NOT RECOMMENDED by the Baseline Requirements)
- No TLS Client Auth Extended Key Usage (root programs are moving towards requiring “single-purpose” issuance hierarchies, where every certificate has only a single EKU)
- No Key Encipherment Key Usage for certificates with RSA public keys (this KU was used by older RSA-based TLS cipher suites, but is fully unnecessary in TLS 1.3)
Additionally, in the near future we will offer a “shortlived” profile which will be identical to the “tlsserver” profile but with a validity period of only 6 days. This profile isn’t available in Staging just yet, so keep an eye out for further announcements regarding short-lived certificates and why we think they’re exciting.
An ACME client can supply a desired profile name in a new-order request:
POST /acme/new-order HTTP/1.1
Host: example.com
Content-Type: application/jose+json
{
  "protected": base64url(...),
  "payload": base64url({
    "profile": "tlsserver",
    "identifiers": [
      { "type": "dns", "value": "www.example.org" },
      { "type": "dns", "value": "example.org" }
    ]
  }),
  "signature": "H6ZXtGjTZyUnPeKn...wEA4TklBdh3e454g"
}
If the new-order request is accepted, then the selected profile name will be reflected in the Order object when it is returned, and the resulting certificate after finalization will be issued with the selected profile. If the new-order request does not specify a profile, then the server will select one for it.
Guidance for ACME clients and users
If you are an ACME client author, we encourage you to introduce support for this new field in your client. Start by taking a look at the draft specification in the IETF ACME Working Group. A simple implementation might allow the user to configure a static profile name and include that name in all new-order requests. For a better user experience, check the configured name against the list of profiles advertised in the directory, to ensure that changes to the available profiles don’t result in invalid new-order requests. For clients with a user interface, such as a control panel or interactive command line interface, an implementation could fetch the list of profiles and their descriptions to prompt the user to select one on first run. It could also use a notification mechanism to inform the user of changes to the list of available profiles. We’d also love to hear from you about your experience implementing and deploying this new extension.
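The directory check suggested above might look like the following Python sketch (the function name and error handling are illustrative, not part of any particular client):

```python
def select_profile(directory, configured):
    """Validate a locally configured profile name against the profiles
    the server advertises in its directory's "meta" object.

    Returning None means: omit "profile" from the new-order payload
    and let the server choose a default.
    """
    advertised = directory.get("meta", {}).get("profiles", {})
    if configured is None:
        return None
    if configured in advertised:
        return configured
    raise ValueError(
        f"profile {configured!r} not advertised by server; "
        f"available: {sorted(advertised)}"
    )

directory = {"meta": {"profiles": {
    "classic": "The same profile you're accustomed to",
    "tlsserver": "https://letsencrypt.org/2025/01/09/acme-profiles/",
}}}
print(select_profile(directory, "tlsserver"))  # tlsserver
```

Checking against the advertised list, rather than sending a static name blindly, means a renamed or withdrawn profile surfaces as a clear local error instead of a rejected new-order request.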
If you are a site operator or ACME client user, we encourage you to keep an eye on your ACME client of choice to see when they adopt this new feature, and update your client when they do. We also encourage you to try out the modern “tlsserver” profile in Staging, and let us know what you think of the changes we’ve made to the certificates issued under that profile.
What’s next?
Obviously there is more work to be done here. The draft standard will go through multiple rounds of review and tweaks before becoming an IETF RFC, and our implementation will evolve along with it if necessary. Over the coming weeks and months we will also be providing more information about when we enable profile selection in our production environment, and what our production profile options will be.
Thank you for coming along with us on this journey into the future of the Web PKI. We look forward to your testing and feedback!
Thu, 09 Jan 2025 00:00:00 +0000
A Note from our Executive Director
Earlier this year we announced our intention to introduce short-lived certificates with lifetimes of six days as an option for our subscribers. Yesterday we issued our first short-lived certificate. You can see the certificate at the bottom of our post, or here thanks to Certificate Transparency logs. We issued it to ourselves and then immediately revoked it so we can observe the certificate’s whole lifecycle. This is the first step towards making short-lived certificates available to all subscribers.
The next step is for us to make short-lived certificates available to a small set of our subscribers so we can make sure our systems scale as expected prior to general availability. We expect this next phase to begin during Q2 of this year.
We expect short-lived certificates to be generally available by the end of this year.
How To Get Six-Day Certificates
Once short-lived certificates are an option for you, you’ll need to use an ACME client that supports ACME certificate profiles and select the short-lived certificate profile (“shortlived”). The lego client recently added this functionality.
In the meantime, the best way to prepare to take advantage of short-lived certificates is to make sure your ACME client is reliably renewing certificates in an automated fashion. If that’s working well then there should be no costs to switching to short-lived certificates.
You’ll also want to be sure your ACME client is running frequently - both for the sake of renewing short-lived certificates and so as to take advantage of ACME Renewal Information (ARI). ARI allows Let’s Encrypt to notify your client if it should renew early for some reason. ARI checks should happen at least once per day, and short-lived certificates should be renewed every two to three days, so we recommend having your client run at least once per day.
Shorter Certificate Lifetimes Are Good for Security
When the private key associated with a certificate is compromised, the recommendation has always been to have the certificate revoked so that people will know not to use it. Unfortunately, certificate revocation doesn’t work very well. This means that certificates with compromised keys (or other issues) may continue to be used until they expire. The longer the lifetime of the certificate, the longer the potential for use of a problematic certificate.
The primary advantage of short-lived certificates is that they greatly reduce the potential compromise window because they expire relatively quickly. This reduces the need for certificate revocation, which has historically been unreliable. Our six-day certificates will not include OCSP or CRL URLs. Additionally, short-lived certificates practically require automation, and we believe that automating certificate issuance is important for security.
Questions
If you have questions or comments about our plans, feel free to let us know on our community forums.
We’d like to thank Open Technology Fund for supporting this work.
Our First 6-Day Certificate
PEM format:
-----BEGIN CERTIFICATE-----
MIIDSzCCAtGgAwIBAgISA7CwFcGk4mQWEXMacRtxHeDvMAoGCCqGSM49BAMDMDIx
CzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MQswCQYDVQQDEwJF
NjAeFw0yNTAyMTkxNzMwMDFaFw0yNTAyMjYwOTMwMDBaMAAwWTATBgcqhkjOPQIB
BggqhkjOPQMBBwNCAAQoSItt2V1aocI5dxrKR8iLfmm0KiVvOhiwKByzu2kLeC7C
0BdfAgtwdICdkuEhAXokhXLq6DNZZgmh5T4flVwZo4IB9zCCAfMwDgYDVR0PAQH/
BAQDAgeAMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwHwYDVR0j
BBgwFoAUkydGmAOpUWiOmNbEQkjbI79YlNIwVQYIKwYBBQUHAQEESTBHMCEGCCsG
AQUFBzABhhVodHRwOi8vZTYuby5sZW5jci5vcmcwIgYIKwYBBQUHMAKGFmh0dHA6
Ly9lNi5pLmxlbmNyLm9yZy8wKAYDVR0RAQH/BB4wHIIaaGVsbG93b3JsZC5sZXRz
ZW5jcnlwdC5vcmcwEwYDVR0gBAwwCjAIBgZngQwBAgEwggEFBgorBgEEAdZ5AgQC
BIH2BIHzAPEAdgDM+w9qhXEJZf6Vm1PO6bJ8IumFXA2XjbapflTA/kwNsAAAAZUf
d/zOAAAEAwBHMEUCIFNd51TfSNiJrO+294t49C5ANc4oC7gTUzf7xnlNlhKsAiEA
wi5hfiC9SsKLxlTQ0sctUxhLmdYh40r6ECWQS/yWw2AAdwDgkrP8DB3I52g2H95h
uZZNClJ4GYpy1nLEsE2lbW9UBAAAAZUfd/0TAAAEAwBIMEYCIQCs2NuZIUIloOaH
1t9eXDKb8bjoWESBPsK4i2BxMvEIswIhAOMNaQNyr1YkzrcNUz15qGV0oVLg5BJN
+ikWxXOdcRHFMAoGCCqGSM49BAMDA2gAMGUCMDANqy7G09AIwzXcd7SNl7uFwhC+
xlfduvp1PeEDHc/FA9K3mRYkGXuKtzNdOh7wcAIxALjEMDmBQiwXbB447oGkaZAe
0rqxA3EtNV5wj0obeObluj/NgUsVEG9OqiBIoggFRw==
-----END CERTIFICATE-----
openssl x509 -text output:
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            03:b0:b0:15:c1:a4:e2:64:16:11:73:1a:71:1b:71:1d:e0:ef
        Signature Algorithm: ecdsa-with-SHA384
        Issuer: C=US, O=Let's Encrypt, CN=E6
        Validity
            Not Before: Feb 19 17:30:01 2025 GMT
            Not After : Feb 26 09:30:00 2025 GMT
        Subject:
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (256 bit)
                pub:
                    04:28:48:8b:6d:d9:5d:5a:a1:c2:39:77:1a:ca:47:
                    c8:8b:7e:69:b4:2a:25:6f:3a:18:b0:28:1c:b3:bb:
                    69:0b:78:2e:c2:d0:17:5f:02:0b:70:74:80:9d:92:
                    e1:21:01:7a:24:85:72:ea:e8:33:59:66:09:a1:e5:
                    3e:1f:95:5c:19
                ASN1 OID: prime256v1
                NIST CURVE: P-256
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Authority Key Identifier:
                93:27:46:98:03:A9:51:68:8E:98:D6:C4:42:48:DB:23:BF:58:94:D2
            Authority Information Access:
                OCSP - URI:http://e6.o.lencr.org
                CA Issuers - URI:http://e6.i.lencr.org/
            X509v3 Subject Alternative Name: critical
                DNS:helloworld.letsencrypt.org
            X509v3 Certificate Policies:
                Policy: 2.23.140.1.2.1
            CT Precertificate SCTs:
                Signed Certificate Timestamp:
                    Version   : v1 (0x0)
                    Log ID    : CC:FB:0F:6A:85:71:09:65:FE:95:9B:53:CE:E9:B2:7C:
                                22:E9:85:5C:0D:97:8D:B6:A9:7E:54:C0:FE:4C:0D:B0
                    Timestamp : Feb 19 18:28:32.078 2025 GMT
                    Extensions: none
                    Signature : ecdsa-with-SHA256
                                30:45:02:20:53:5D:E7:54:DF:48:D8:89:AC:EF:B6:F7:
                                8B:78:F4:2E:40:35:CE:28:0B:B8:13:53:37:FB:C6:79:
                                4D:96:12:AC:02:21:00:C2:2E:61:7E:20:BD:4A:C2:8B:
                                C6:54:D0:D2:C7:2D:53:18:4B:99:D6:21:E3:4A:FA:10:
                                25:90:4B:FC:96:C3:60
                Signed Certificate Timestamp:
                    Version   : v1 (0x0)
                    Log ID    : E0:92:B3:FC:0C:1D:C8:E7:68:36:1F:DE:61:B9:96:4D:
                                0A:52:78:19:8A:72:D6:72:C4:B0:4D:A5:6D:6F:54:04
                    Timestamp : Feb 19 18:28:32.147 2025 GMT
                    Extensions: none
                    Signature : ecdsa-with-SHA256
                                30:46:02:21:00:AC:D8:DB:99:21:42:25:A0:E6:87:D6:
                                DF:5E:5C:32:9B:F1:B8:E8:58:44:81:3E:C2:B8:8B:60:
                                71:32:F1:08:B3:02:21:00:E3:0D:69:03:72:AF:56:24:
                                CE:B7:0D:53:3D:79:A8:65:74:A1:52:E0:E4:12:4D:FA:
                                29:16:C5:73:9D:71:11:C5
    Signature Algorithm: ecdsa-with-SHA384
    Signature Value:
        30:65:02:30:30:0d:ab:2e:c6:d3:d0:08:c3:35:dc:77:b4:8d:
        97:bb:85:c2:10:be:c6:57:dd:ba:fa:75:3d:e1:03:1d:cf:c5:
        03:d2:b7:99:16:24:19:7b:8a:b7:33:5d:3a:1e:f0:70:02:31:
        00:b8:c4:30:39:81:42:2c:17:6c:1e:38:ee:81:a4:69:90:1e:
        d2:ba:b1:03:71:2d:35:5e:70:8f:4a:1b:78:e6:e5:ba:3f:cd:
        81:4b:15:10:6f:4e:aa:20:48:a2:08:05:47

2025 marks ten years of Let’s Encrypt. Already this year we’ve taken steps to continue to deliver on our values of user privacy, efficiency, and innovation, all with the intent of continuing to deliver free TLS certificates to as many people as possible; to deliver encryption for everybody.
And while we’re excited about the technical progress we’ll make this year, we’re also going to celebrate this tenth anniversary by highlighting the people around the world who make our impact possible. It’s no small village.
From a community forum that has provided free technical support, to our roster of sponsors who provide vital funding, to the thousands of individual supporters who contribute financially to Let’s Encrypt each year, free TLS at Internet scale works because people have supported it year in, year out, for ten years.
Each month we’ll highlight a different set of people behind our “everybody.” Who do you want to see us highlight? What use cases of Let’s Encrypt have you seen that amazed you? What about our work do you hope we’ll continue or improve as we go forward? Let us know on LinkedIn, or drop a note to outreach@letsencrypt.org.
Encryption for Everybody is our unofficial tagline for this tenth anniversary year. What we love about it is that, yes, it captures our commitment to ensuring anyone around the world can easily get a cert for free. But more importantly, it captures the reality that technical innovation won’t work without people believing in it and supporting it. We’re grateful that, for ten years (and counting!), our community of supporters has made an impact on the lives of billions of Internet users—an impact that’s made the Web more secure and privacy-respecting for everybody, everywhere.
Internet Security Research Group (ISRG) is the parent organization of Let’s Encrypt, Prossimo, and Divvi Up. ISRG is a 501(c)(3) nonprofit. If you’d like to support our work, please consider getting involved, donating, or encouraging your company to become a sponsor.
Let’s Encrypt protects a vast portion of the Web by providing TLS certificates to over 550 million websites—a figure that has grown by 42% in the last year alone. We currently issue over 340,000 certificates per hour. To manage this immense traffic and maintain responsiveness under high demand, our infrastructure relies on rate limiting. In 2015, we introduced our first rate limiting system, built on MariaDB. It evolved alongside our rapidly growing service but eventually revealed its limits: straining database servers, forcing long reset times on subscribers, and slowing down every request.
We needed a solution built for the future—one that could scale with demand, reduce the load on MariaDB, and adapt to real-world subscriber request patterns. The result was a new rate limiting system powered by Redis and a proven virtual scheduling algorithm from the mid-90s: efficient, scalable, and capable of handling over a billion active certificates.
Rate Limiting a Free Service is Hard
In 2015, Let’s Encrypt was in early preview, and we faced a unique challenge. We were poised to become incredibly popular, offering certificates freely and without requiring contact information or email verification. Ensuring fair usage and preventing abuse without traditional safeguards demanded an atypical approach to rate limiting.
We decided to limit the number of certificates issued—per week—for each registered domain. Registered domains are a limited resource with real costs, making them a natural and effective basis for rate limiting—one that mirrors the structure of the Web itself. Specifically, this approach targets the effective Top-Level Domain (eTLD), as defined by the Public Suffix List (PSL), plus one additional label to the left. For example, in new.blog.example.co.uk, the eTLD is .co.uk, making example.co.uk the eTLD+1.
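In code, the lookup amounts to finding the longest matching public suffix and keeping one more label. A minimal Python sketch, using a tiny hardcoded suffix set in place of the full Public Suffix List (real implementations use a PSL library and handle wildcard and exception rules):

```python
# Illustrative only: a handful of suffixes instead of the real PSL.
PUBLIC_SUFFIXES = {"com", "org", "net", "uk", "co.uk"}

def etld_plus_one(domain: str) -> str:
    """Return the registered domain (eTLD+1) for a hostname."""
    labels = domain.lower().rstrip(".").split(".")
    # Scan from the longest candidate suffix down to the shortest;
    # the first hit is the longest matching public suffix (the eTLD).
    for i in range(len(labels)):
        if ".".join(labels[i:]) in PUBLIC_SUFFIXES:
            if i == 0:
                raise ValueError(f"{domain} is itself a public suffix")
            # Keep one extra label to the left of the eTLD.
            return ".".join(labels[i - 1:])
    # PSL default rule: treat the rightmost label as the suffix.
    return ".".join(labels[-2:])
```

For the post's example, `etld_plus_one("new.blog.example.co.uk")` walks past `co.uk` (the eTLD) and returns `example.co.uk`.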
Counting Events Was Easy
For each successfully issued certificate, we logged an entry in a table that recorded the registered domain, the issuance date, and other relevant details. To enforce rate limits, the system scanned this table, counted the rows matching a given registered domain within a specific time window, and compared the total to a configured threshold. This simple design formed the basis for all future rate limits.
Counting a Lot of Events Got Expensive
By 2019, we had added six new rate limits to protect our infrastructure as demand for certificates surged. Enforcing these limits required frequent scans of database tables to count recent matching events. These operations, especially on our heavily-used authorizations table, caused significant overhead, with reads outpacing all other tables—often by an order of magnitude.
Rate limit calculations were performed early in request processing and often. Counting rows in MariaDB, particularly for accounts with rate limit overrides, was inherently expensive and quickly became a scaling bottleneck.
Adding new limits required careful trade-offs. Decisions about whether to reuse existing schema, optimize indexes, or design purpose-built tables helped balance performance, complexity, and long-term maintainability.
Buying Runway — Offloading Reads
In late 2021, we updated our control plane and Boulder—our in-house CA software—to route most API reads, including rate limit checks, to database replicas. This reduced the load on the primary database and improved its overall health. At the same time, however, latency of rate limit checks during peak hours continued to rise, highlighting the limitations of scaling reads alone.
Sliding Windows Got Frustrating
Subscribers were frequently hitting rate limits unexpectedly, leaving them unable to request certificates for days. This issue stemmed from our use of relatively large rate limiting windows—most spanning a week. Subscribers could deplete their entire limit in just a few moments by repeating the same request, and find themselves locked out for the remainder of the week. This approach was inflexible and disruptive, causing unnecessary frustration and delays.
In early 2022, we patched the Duplicate Certificate limit to address this rigidity. Using a naive token-bucket approach, we allowed users to “earn back” requests incrementally, cutting the wait time—once rate limited—to about 1.4 days. The patch worked by fetching recent issuance timestamps and calculating the time between them to grant requests based on the time waited. This change also allowed us to include a Retry-After timestamp in rate limited responses. While this improved the user experience for this one limit, we understood it to be a temporary fix for a system in need of a larger overhaul.
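The 1.4-day figure falls out of the arithmetic: the Duplicate Certificate limit allows 5 per week, so pacing requests at the steady rate earns one request back every 7 / 5 = 1.4 days. A sketch of the earn-back idea (hypothetical helper, not the actual Boulder patch):

```python
WEEK = 7 * 24 * 3600
DUP_CERT_LIMIT = 5
# One request is "earned back" per interval: 604800 / 5 s = 1.4 days.
EARN_BACK_INTERVAL = WEEK / DUP_CERT_LIMIT

def retry_after(issuance_times: list[float], now: float) -> float:
    """Seconds until the next duplicate certificate may be issued.

    issuance_times: timestamps of recent issuances for the same name set.
    """
    recent = [t for t in issuance_times if t > now - WEEK]
    if len(recent) < DUP_CERT_LIMIT:
        return 0.0  # still under the limit
    # Rate limited: a slot opens one interval after the oldest issuance
    # still inside the window, rather than a full week later.
    return max(0.0, min(recent) + EARN_BACK_INTERVAL - now)
```

The returned value is what gets surfaced to the client as a Retry-After timestamp.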
When a Problem Grows Large Enough, It Finds the Time for You
Setting aside time for a complete overhaul of our rate-limiting system wasn’t easy. Our development team, composed of just three permanent engineers, typically juggles several competing priorities. Yet by 2023, our flagging rate limits code had begun to endanger the reliability of our MariaDB databases.
Our authorizations table was now regularly read an order of magnitude more than any other. Individually identifying and deleting unnecessary rows—or specific values—had proved unworkable due to poor MariaDB delete performance. Storage engines like InnoDB must maintain indexes, foreign key constraints, and transaction logs for every deletion, which significantly increases overhead for concurrent transactions and leads to gruelingly slow deletes.
Our SRE team automated the cleanup of old rows for many tables using the PARTITION command, which worked well for bookkeeping and compliance data. Unfortunately, we couldn’t apply it to most of our purpose-built rate limit tables. These tables depend on ON DUPLICATE KEY UPDATE, a mechanism that requires the targeted column to be a unique index or primary key, while partitioning demands that the primary key be included in the partitioning key.
Indexes on these tables—such as those tracking requested hostnames—often grew larger than the tables themselves and, in some cases, exceeded the memory of our smaller staging environment databases, eventually forcing us to periodically wipe them entirely.
By late 2023, this cascading confluence of complexities required a reckoning. We set out to design a rate limiting system built for the future.
The Solution: Redis + GCRA
We designed a system from the ground up that combines Redis for storage and the Generic Cell Rate Algorithm (GCRA) for managing request flow.
Why Redis?
Our engineers were already familiar with Redis, having recently deployed it to cache and serve OCSP responses. Its high throughput and low latency made it a candidate for tracking rate limit state as well.
By moving this data from MariaDB to Redis, we could eliminate the need for ever-expanding, purpose-built tables and indexes, significantly reducing read and write pressure. Redis’s feature set made it a perfect fit for the task. Most rate limit data is ephemeral—after a few days, or sometimes just minutes, it becomes irrelevant unless the subscriber calls us again. Redis’s per-key Time-To-Live would allow us to expire this data the moment it was no longer needed.
Redis also supports atomic integer operations, enabling fast, reliable counter updates, even when increments occur concurrently. Its “set if not exist” functionality ensures efficient initialization of keys, while pipeline support allows us to get and set multiple keys in bulk. This combination of familiarity, speed, simplicity, and flexibility made Redis the natural choice.
Why GCRA?
The Generic Cell Rate Algorithm (GCRA) is a virtual scheduling algorithm originally designed for telecommunication networks to regulate traffic and prevent congestion. Unlike traditional sliding window approaches that work in fixed time blocks, GCRA enforces rate limits continuously, making it well-suited to our goals.
A rate limit in GCRA is defined by two parameters: the emission interval and the burst tolerance. The emission interval specifies the minimum time that must pass between consecutive requests to maintain a steady rate. For example, an emission interval of one second allows one request per second on average. The burst tolerance determines how much unused capacity can be drawn on to allow short bursts of requests beyond the steady rate.
When a request is received, GCRA compares the current time to the Theoretical Arrival Time (TAT), which indicates when the next request is allowed under the steady rate. If the current time is greater than or equal to the TAT, the request is permitted, and the TAT is updated by adding the emission interval. If the current time plus the burst tolerance is greater than or equal to the TAT, the request is also permitted. In this case, the TAT is updated by adding the emission interval, reducing the remaining burst capacity.
However, if the current time plus the burst tolerance is less than the TAT, the request exceeds the rate limit and is denied. Conveniently, the difference between the TAT and the current time can then be returned to the subscriber in a Retry-After header, informing their client exactly how long to wait before trying again.
To illustrate, consider a rate limit of one request per second (emission interval = 1s) with a burst tolerance of three requests. Up to three requests can arrive back-to-back, but subsequent requests will be delayed until “now” catches up to the TAT, ensuring that the average rate over time remains one request per second.
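The rules above condense to a few lines of code. A minimal Python sketch of GCRA (Boulder's production implementation is Go backed by Redis; here the TAT lives in a dict and the clock is passed in explicitly for clarity):

```python
class GCRA:
    """Generic Cell Rate Algorithm over an in-memory store.

    emission_interval: seconds between requests at the steady rate.
    burst: how many requests may arrive back-to-back from a full bucket.
    """

    def __init__(self, emission_interval: float, burst: int):
        self.emission_interval = emission_interval
        # A burst of N back-to-back requests needs (N - 1) intervals of slack.
        self.burst_tolerance = (burst - 1) * emission_interval
        self.tat = {}  # key -> Theoretical Arrival Time (Unix timestamp)

    def check(self, key: str, now: float) -> tuple[bool, float]:
        """Return (allowed, retry_after_seconds)."""
        tat = self.tat.get(key, now)  # first sight of a key: TAT is "now"
        if now + self.burst_tolerance >= tat:
            # Permitted; advance the TAT by one emission interval.
            self.tat[key] = max(now, tat) + self.emission_interval
            return True, 0.0
        # Denied; the gap tells the client exactly how long to wait.
        return False, tat - self.burst_tolerance - now
```

With `GCRA(1.0, 3)`, three requests at t=0 are allowed, the fourth is denied with a Retry-After of exactly one second, and a retry at t=1 succeeds. In production, the per-key state is a single timestamp in Redis with a TTL, so it expires on its own once the subscriber goes quiet.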
What sets GCRA apart is its ability to automatically refill capacity gradually and continuously. Unlike sliding windows, where users must wait for an entire time block to reset, GCRA allows users to retry as soon as enough time has passed to maintain the steady rate. This dynamic pacing reduces frustration and provides a smoother, more predictable experience for subscribers.
GCRA is also storage and computationally efficient. It requires tracking only the TAT—stored as a single Unix timestamp—and performing simple arithmetic to enforce limits. This lightweight design allows it to scale to handle billions of requests, with minimal computational and memory overhead.
The Results: Faster, Smoother, and More Scalable
The transition to Redis and GCRA brought immediate, measurable improvements. We cut database load, improved response times, and delivered consistent performance even during periods of peak traffic. Subscribers now experience smoother, more predictable behavior, while the system’s increased permissiveness allows for certificates that the previous approach would have delayed—all achieved without sacrificing scalability or fairness.
Rate Limit Check Latency
Check latency is the extra time added to each request while verifying rate limit compliance. Under the old MariaDB-based system, these checks slowed noticeably during peak traffic, when database contention caused significant delays. Our new Redis-based system dramatically reduced this overhead. The high-traffic “new-order” endpoint saw the greatest improvement, while the “new-account” endpoint—though considerably lighter in traffic—also benefited, especially callers with IPv6 addresses. These results show that our subscribers now experience consistent response times, even under peak load.
Database Health
Our once strained database servers are now operating with ample headroom. In total, MariaDB operations have dropped by 80%, improving responsiveness, reducing contention, and freeing up resources for mission-critical issuance workflows.
Buffer pool requests have decreased by more than 50%, improving caching efficiency and reducing overall memory pressure.
Reads of the authorizations table—a notorious bottleneck—have dropped by over 99%. Previously, this table outpaced all others by more than two orders of magnitude; now it ranks second (the green line below), just narrowly surpassing our third most-read table.
Tracking Zombie Clients
In late 2024, we turned our new rate limiting system toward a longstanding challenge: “zombie clients.” These requesters repeatedly attempt to issue certificates but fail, often because of expired domains or misconfigured DNS records. Together, they generate nearly half of all order attempts yet almost never succeed. We were able to build on this new infrastructure to record consecutive ACME challenge failures by account/domain pair and automatically “pause” this problematic issuance. The result has been a considerable reduction in resource consumption, freeing database and network capacity without disrupting legitimate traffic.
Scalability on Redis
Before deploying the limits to track zombie clients, we maintained just over 12.6 million unique TATs across several Redis databases. Within 24 hours, that number more than doubled to 26 million, and by the end of the week, it peaked at over 30 million. Yet, even with this sharp increase, there was no noticeable impact on rate limit responsiveness. That’s all we’ll share for now about zombie clients—there’s plenty more to unpack, but we’ll save those insights and figures for a future blog post.
What’s Next?
Scaling our rate limits to keep pace with the growth of the Web is a huge achievement, but there’s still more to do. In the near term, many of our other ACME endpoints rely on load balancers to enforce per-IP limits, which works but gives us little control over the feedback provided to subscribers. We’re looking to deploy this new infrastructure across those endpoints as well. Looking further ahead, we’re exploring how we might redefine our rate limits now that we’re no longer constrained by a system that simply counts events between two points in time.
By adopting Redis and GCRA, we’ve built a flexible, efficient rate limit system that promotes fair usage and enables our infrastructure to handle ever-growing demand. We’ll keep adapting to the ever-evolving Web while honoring our primary goal: giving people the certificates they need, for free, in the most user-friendly way we can.
Since its inception, Let’s Encrypt has been sending expiration notification emails to subscribers that have provided an email address to us. We will be ending this service on June 4, 2025. The decision to end this service is the result of the following factors:
- Over the past 10 years more and more of our subscribers have been able to put reliable automation into place for certificate renewal.
- Providing expiration notification emails means that we have to retain millions of email addresses connected to issuance records. As an organization that values privacy, removing this requirement is important to us.
- Providing expiration notifications costs Let’s Encrypt tens of thousands of dollars per year, money that we believe can be better spent on other aspects of our infrastructure.
- Providing expiration notifications adds complexity to our infrastructure, which takes time and attention to manage and increases the likelihood of mistakes being made. Over the long term, particularly as we add support for new service components, we need to manage overall complexity by phasing out system components that can no longer be justified.
For those who would like to continue receiving expiration notifications, we recommend using a third-party service such as Red Sift Certificates Lite (formerly Hardenize). Red Sift’s monitoring service provides expiration emails free of charge for up to 250 certificates. More monitoring options can be found here.
While we will be minimizing the email addresses we retain connected to issuance data, you can opt in to receive other emails. We’ll keep you informed about technical updates, and other news about Let’s Encrypt and our parent nonprofit, ISRG, based on the preferences you choose. You can sign up for our email lists below:
This year we will continue to pursue our commitment to improving the security of the Web PKI by introducing the option to get certificates with six-day lifetimes (“short-lived certificates”). We will also add support for IP addresses in addition to domain names. Our longer-lived certificates, which currently have a lifetime of 90 days, will continue to be available alongside our six-day offering. Subscribers will be able to opt in to short-lived certificates via a certificate profile mechanism being added to our ACME API.
Shorter Certificate Lifetimes Are Good for Security
When the private key associated with a certificate is compromised, the recommendation has always been to have the certificate revoked so that people will know not to use it. Unfortunately, certificate revocation doesn’t work very well. This means that certificates with compromised keys (or other issues) may continue to be used until they expire. The longer the lifetime of the certificate, the longer the potential for use of a problematic certificate.
The primary advantage of short-lived certificates is that they greatly reduce the potential compromise window because they expire relatively quickly. This reduces the need for certificate revocation, which has historically been unreliable. Our six-day certificates will not include OCSP or CRL URLs. Additionally, short-lived certificates practically require automation, and we believe that automating certificate issuance is important for security.
IP Address Support For Securing Additional Use Cases
We will support including IP addresses as Subject Alternative Names in our six-day certificates. This will enable secure TLS connections, with publicly trusted certificates, to services made available via IP address, without the need for a domain name.
Validation for IP addresses will work much the same as validation for domain names, though validation will be restricted to the http-01 and tls-alpn-01 challenge types. The dns-01 challenge type will not be available because the DNS is not involved in validating IP addresses. Additionally, there is no mechanism to check CAA records for IP addresses.
Timeline
We expect to issue the first valid short-lived certificates to ourselves in February of this year. Around April we will enable short-lived certificates for a small set of early adopting subscribers. We hope to make short-lived certificates generally available by the end of 2025.
The earliest short-lived certificates we issue may not support IP addresses, but we intend to enable IP address support by the time short-lived certificates reach general availability.
How To Get Six-Day and IP Address Certificates
Once short-lived certificates are an option for you, you’ll need to use an ACME client that supports ACME certificate profiles and select the short-lived certificate profile (the name of which will be published at a later date).
Once IP address support is an option for you, requesting an IP address in a certificate will automatically select a short-lived certificate profile.
Looking Ahead
The best way to prepare to take advantage of short-lived certificates is to make sure your ACME client is reliably renewing certificates in an automated fashion. If that’s working well then there should be no costs to switching to short-lived certificates.
If you have questions or comments about our plans, feel free to let us know on our community forums.
We are excited to announce a new extension to Let’s Encrypt’s implementation of the ACME protocol that we are calling “profile selection.” This new feature will allow site operators and ACME clients to opt in to the next evolution of Let’s Encrypt.
As of today, the staging environment is advertising a new field in its directory resource:
GET /directory HTTP/1.1

HTTP/1.1 200 OK
Content-Type: application/json

{
  ...
  "meta": {
    "profiles": {
      "classic": "The same profile you're accustomed to",
      "tlsserver": "https://letsencrypt.org/2025/01/09/acme-profiles/"
    }
  }
}
Here, the keys are the names of new “profiles”, and the values are human-readable descriptions of those profiles. A profile describes a collection of attributes about the certificate that will be issued, such as what extensions it will contain, how long it will be valid for, and more.
For example, the “classic” profile is exactly what it sounds like: certificates issued under the classic profile will look exactly the same as those that we have always issued, valid for 90 days.
But certificates issued under the “tlsserver” profile will have a number of differences tailored specifically towards TLS server usage:
- No Common Name field (including a CN has been NOT RECOMMENDED by the Baseline Requirements for several years now)
- No Subject Key Identifier (including a SKID is NOT RECOMMENDED by the Baseline Requirements)
- No TLS Client Auth Extended Key Usage (root programs are moving towards requiring “single-purpose” issuance hierarchies, where every certificate has only a single EKU)
- No Key Encipherment Key Usage for certificates with RSA public keys (this KU was used by older RSA-based TLS cipher suites, but is fully unnecessary in TLS 1.3)
Additionally, in the near future we will offer a “shortlived” profile which will be identical to the “tlsserver” profile but with a validity period of only 6 days. This profile isn’t available in Staging just yet, so keep an eye out for further announcements regarding short-lived certificates and why we think they’re exciting.
An ACME client can supply a desired profile name in a new-order request:
POST /acme/new-order HTTP/1.1
Host: example.com
Content-Type: application/jose+json

{
  "protected": base64url(...),
  "payload": base64url({
    "profile": "tlsserver",
    "identifiers": [
      { "type": "dns", "value": "www.example.org" },
      { "type": "dns", "value": "example.org" }
    ]
  }),
  "signature": "H6ZXtGjTZyUnPeKn...wEA4TklBdh3e454g"
}
If the new-order request is accepted, then the selected profile name will be reflected in the Order object when it is returned, and the resulting certificate after finalization will be issued with the selected profile. If the new-order request does not specify a profile, then the server will select one for it.
Guidance for ACME clients and users
If you are an ACME client author, we encourage you to introduce support for this new field in your client. Start by taking a look at the draft specification in the IETF ACME Working Group. A simple implementation might allow the user to configure a static profile name and include that name in all new-order requests. For a better user experience, check the configured name against the list of profiles advertised in the directory, to ensure that changes to the available profiles don’t result in invalid new-order requests. For clients with a user interface, such as a control panel or interactive command line interface, an implementation could fetch the list of profiles and their descriptions to prompt the user to select one on first run. It could also use a notification mechanism to inform the user of changes to the list of available profiles. We’d also love to hear from you about your experience implementing and deploying this new extension.
If you are a site operator or ACME client user, we encourage you to keep an eye on your ACME client of choice to see when they adopt this new feature, and update your client when they do. We also encourage you to try out the modern “tlsserver” profile in Staging, and let us know what you think of the changes we’ve made to the certificates issued under that profile.
What’s next?
Obviously there is more work to be done here. The draft standard will go through multiple rounds of review and tweaks before becoming an IETF RFC, and our implementation will evolve along with it if necessary. Over the coming weeks and months we will also be providing more information about when we enable profile selection in our production environment, and what our production profile options will be.
Thank you for coming along with us on this journey into the future of the Web PKI. We look forward to your testing and feedback!

This letter was originally published in our 2024 Annual Report.
The past year at ISRG has been a great one and I couldn’t be more proud of our staff, community, funders, and other partners that made it happen. Let’s Encrypt continues to thrive, serving more websites around the world than ever before with excellent security and stability. Our understanding of what it will take to make privacy-preserving metrics more mainstream via our Divvi Up project is evolving in important ways.
Prossimo has made important investments in making critical software infrastructure safer, from TLS and DNS to the Linux kernel.
Next year is the 10th anniversary of the launch of Let’s Encrypt. Internally things have changed dramatically from what they looked like ten years ago, but outwardly our service hasn’t changed much since launch. That’s because the vision we had for how best to do our job remains as powerful today as it ever was: free 90-day TLS certificates via an automated API. Pretty much as many as you need. More than 500,000,000 websites benefit from this offering today, and the vast majority of the web is encrypted.
Our longstanding offering won’t fundamentally change next year, but we are going to introduce a new offering that’s a big shift from anything we’ve done before - short-lived certificates. Specifically, certificates with a lifetime of six days. This is a big upgrade for the security of the TLS ecosystem because it minimizes exposure time during a key compromise event.
Because we’ve done so much to encourage automation over the past decade, most of our subscribers aren’t going to have to do much in order to switch to shorter lived certificates. We, on the other hand, are going to have to think about the possibility that we will need to issue 20x as many certificates as we do now. It’s not inconceivable that at some point in our next decade we may need to be prepared to issue 100,000,000 certificates per day.
That sounds sort of nuts to me today, but issuing 5,000,000 certificates per day would have sounded crazy to me ten years ago. Here’s the thing though, and this is what I love about the combination of our staff, partners, and funders - whatever it is we need to do to doggedly pursue our mission, we’re going to get it done. It was hard to build Let’s Encrypt. It was difficult to scale it to serve half a billion websites. Getting our Divvi Up service up and running from scratch in three months to service exposure notification applications was not easy. Our Prossimo project was a primary contributor to the creation of a TLS library that provides memory safety while outperforming its peers - a heavy lift.
Charitable contributions from people like you and organizations around the world make this stuff possible. Since 2015, tens of thousands of people have donated. They’ve made a case for corporate sponsorship, given through their DAFs, or set up recurring donations, sometimes to give $3 a month. That’s all added up to millions of dollars that we’ve used to change the Internet for nearly everyone using it. I hope you’ll join these people and help lay the foundation for another great decade.
Josh Aas
Executive Director
Wed, 11 Dec 2024 00:00:00 +0000
Ending OCSP Support in 2025
Earlier this year we announced our intention to introduce short-lived certificates with lifetimes of six days as an option for our subscribers. Yesterday we issued our first short-lived certificate. You can see the certificate at the bottom of our post, or here thanks to Certificate Transparency logs. We issued it to ourselves and then immediately revoked it so we can observe the certificate’s whole lifecycle. This is the first step towards making short-lived certificates available to all subscribers.
The next step is for us to make short-lived certificates available to a small set of our subscribers so we can make sure our systems scale as expected prior to general availability. We expect this next phase to begin during Q2 of this year.
We expect short-lived certificates to be generally available by the end of this year.
How To Get Six-Day Certificates
Once short-lived certificates are an option for you, you’ll need to use an ACME client that supports ACME certificate profiles and select the short-lived certificate profile (“shortlived”). The lego client recently added this functionality.
In the meantime, the best way to prepare to take advantage of short-lived certificates is to make sure your ACME client is reliably renewing certificates in an automated fashion. If that’s working well then there should be no costs to switching to short-lived certificates.
You’ll also want to be sure your ACME client is running frequently - both for the sake of renewing short-lived certificates and so as to take advantage of ACME Renewal Information (ARI). ARI allows Let’s Encrypt to notify your client if it should renew early for some reason. ARI checks should happen at least once per day, and short-lived certificates should be renewed every two to three days, so we recommend having your client run at least once per day.
Shorter Certificate Lifetimes Are Good for Security
When the private key associated with a certificate is compromised, the recommendation has always been to have the certificate revoked so that people will know not to use it. Unfortunately, certificate revocation doesn’t work very well. This means that certificates with compromised keys (or other issues) may continue to be used until they expire. The longer the lifetime of the certificate, the longer the potential for use of a problematic certificate.
The primary advantage of short-lived certificates is that they greatly reduce the potential compromise window because they expire relatively quickly. This reduces the need for certificate revocation, which has historically been unreliable. Our six-day certificates will not include OCSP or CRL URLs. Additionally, short-lived certificates practically require automation, and we believe that automating certificate issuance is important for security.
Questions
If you have questions or comments about our plans, feel free to let us know on our community forums.
We’d like to thank Open Technology Fund for supporting this work.
Our First 6-Day Certificate
PEM format:
-----BEGIN CERTIFICATE-----
MIIDSzCCAtGgAwIBAgISA7CwFcGk4mQWEXMacRtxHeDvMAoGCCqGSM49BAMDMDIx
CzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MQswCQYDVQQDEwJF
NjAeFw0yNTAyMTkxNzMwMDFaFw0yNTAyMjYwOTMwMDBaMAAwWTATBgcqhkjOPQIB
BggqhkjOPQMBBwNCAAQoSItt2V1aocI5dxrKR8iLfmm0KiVvOhiwKByzu2kLeC7C
0BdfAgtwdICdkuEhAXokhXLq6DNZZgmh5T4flVwZo4IB9zCCAfMwDgYDVR0PAQH/
BAQDAgeAMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwHwYDVR0j
BBgwFoAUkydGmAOpUWiOmNbEQkjbI79YlNIwVQYIKwYBBQUHAQEESTBHMCEGCCsG
AQUFBzABhhVodHRwOi8vZTYuby5sZW5jci5vcmcwIgYIKwYBBQUHMAKGFmh0dHA6
Ly9lNi5pLmxlbmNyLm9yZy8wKAYDVR0RAQH/BB4wHIIaaGVsbG93b3JsZC5sZXRz
ZW5jcnlwdC5vcmcwEwYDVR0gBAwwCjAIBgZngQwBAgEwggEFBgorBgEEAdZ5AgQC
BIH2BIHzAPEAdgDM+w9qhXEJZf6Vm1PO6bJ8IumFXA2XjbapflTA/kwNsAAAAZUf
d/zOAAAEAwBHMEUCIFNd51TfSNiJrO+294t49C5ANc4oC7gTUzf7xnlNlhKsAiEA
wi5hfiC9SsKLxlTQ0sctUxhLmdYh40r6ECWQS/yWw2AAdwDgkrP8DB3I52g2H95h
uZZNClJ4GYpy1nLEsE2lbW9UBAAAAZUfd/0TAAAEAwBIMEYCIQCs2NuZIUIloOaH
1t9eXDKb8bjoWESBPsK4i2BxMvEIswIhAOMNaQNyr1YkzrcNUz15qGV0oVLg5BJN
+ikWxXOdcRHFMAoGCCqGSM49BAMDA2gAMGUCMDANqy7G09AIwzXcd7SNl7uFwhC+
xlfduvp1PeEDHc/FA9K3mRYkGXuKtzNdOh7wcAIxALjEMDmBQiwXbB447oGkaZAe
0rqxA3EtNV5wj0obeObluj/NgUsVEG9OqiBIoggFRw==
-----END CERTIFICATE-----
openssl x509 -text output:
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            03:b0:b0:15:c1:a4:e2:64:16:11:73:1a:71:1b:71:1d:e0:ef
        Signature Algorithm: ecdsa-with-SHA384
        Issuer: C=US, O=Let's Encrypt, CN=E6
        Validity
            Not Before: Feb 19 17:30:01 2025 GMT
            Not After : Feb 26 09:30:00 2025 GMT
        Subject:
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (256 bit)
                pub:
                    04:28:48:8b:6d:d9:5d:5a:a1:c2:39:77:1a:ca:47:
                    c8:8b:7e:69:b4:2a:25:6f:3a:18:b0:28:1c:b3:bb:
                    69:0b:78:2e:c2:d0:17:5f:02:0b:70:74:80:9d:92:
                    e1:21:01:7a:24:85:72:ea:e8:33:59:66:09:a1:e5:
                    3e:1f:95:5c:19
                ASN1 OID: prime256v1
                NIST CURVE: P-256
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Authority Key Identifier:
                93:27:46:98:03:A9:51:68:8E:98:D6:C4:42:48:DB:23:BF:58:94:D2
            Authority Information Access:
                OCSP - URI:http://e6.o.lencr.org
                CA Issuers - URI:http://e6.i.lencr.org/
            X509v3 Subject Alternative Name: critical
                DNS:helloworld.letsencrypt.org
            X509v3 Certificate Policies:
                Policy: 2.23.140.1.2.1
            CT Precertificate SCTs:
                Signed Certificate Timestamp:
                    Version   : v1 (0x0)
                    Log ID    : CC:FB:0F:6A:85:71:09:65:FE:95:9B:53:CE:E9:B2:7C:
                                22:E9:85:5C:0D:97:8D:B6:A9:7E:54:C0:FE:4C:0D:B0
                    Timestamp : Feb 19 18:28:32.078 2025 GMT
                    Extensions: none
                    Signature : ecdsa-with-SHA256
                                30:45:02:20:53:5D:E7:54:DF:48:D8:89:AC:EF:B6:F7:
                                8B:78:F4:2E:40:35:CE:28:0B:B8:13:53:37:FB:C6:79:
                                4D:96:12:AC:02:21:00:C2:2E:61:7E:20:BD:4A:C2:8B:
                                C6:54:D0:D2:C7:2D:53:18:4B:99:D6:21:E3:4A:FA:10:
                                25:90:4B:FC:96:C3:60
                Signed Certificate Timestamp:
                    Version   : v1 (0x0)
                    Log ID    : E0:92:B3:FC:0C:1D:C8:E7:68:36:1F:DE:61:B9:96:4D:
                                0A:52:78:19:8A:72:D6:72:C4:B0:4D:A5:6D:6F:54:04
                    Timestamp : Feb 19 18:28:32.147 2025 GMT
                    Extensions: none
                    Signature : ecdsa-with-SHA256
                                30:46:02:21:00:AC:D8:DB:99:21:42:25:A0:E6:87:D6:
                                DF:5E:5C:32:9B:F1:B8:E8:58:44:81:3E:C2:B8:8B:60:
                                71:32:F1:08:B3:02:21:00:E3:0D:69:03:72:AF:56:24:
                                CE:B7:0D:53:3D:79:A8:65:74:A1:52:E0:E4:12:4D:FA:
                                29:16:C5:73:9D:71:11:C5
    Signature Algorithm: ecdsa-with-SHA384
    Signature Value:
        30:65:02:30:30:0d:ab:2e:c6:d3:d0:08:c3:35:dc:77:b4:8d:
        97:bb:85:c2:10:be:c6:57:dd:ba:fa:75:3d:e1:03:1d:cf:c5:
        03:d2:b7:99:16:24:19:7b:8a:b7:33:5d:3a:1e:f0:70:02:31:
        00:b8:c4:30:39:81:42:2c:17:6c:1e:38:ee:81:a4:69:90:1e:
        d2:ba:b1:03:71:2d:35:5e:70:8f:4a:1b:78:e6:e5:ba:3f:cd:
        81:4b:15:10:6f:4e:aa:20:48:a2:08:05:47

2025 marks ten years of Let’s Encrypt. Already this year we’ve taken steps to deliver on our values of user privacy, efficiency, and innovation, all with the intent of continuing to provide free TLS certificates to as many people as possible: encryption for everybody.
And while we’re excited about the technical progress we’ll make this year, we’re also going to celebrate this tenth anniversary by highlighting the people around the world who make our impact possible. It’s no small village.
From a community forum that has provided free technical support, to our roster of sponsors who provide vital funding, to the thousands of individual supporters who contribute financially to Let’s Encrypt each year, free TLS at Internet scale works because people have supported it year in, year out, for ten years.
Each month we’ll highlight a different set of people behind our “everybody.” Who do you want to see us highlight? What use cases of Let’s Encrypt have you seen that amazed you? What about our work do you hope we’ll continue or improve as we go forward? Let us know on LinkedIn, or drop a note to outreach@letsencrypt.org.
Encryption for Everybody is our unofficial tagline for this tenth anniversary year. What we love about it is that, yes, it captures our commitment to ensuring anyone around the world can easily get a cert for free. But more importantly, it captures the reality that technical innovation won’t work without people believing in it and supporting it. We’re grateful that, for ten years (and counting!), our community of supporters has made an impact on the lives of billions of Internet users—an impact that’s made the Web more secure and privacy-respecting for everybody, everywhere.
Internet Security Research Group (ISRG) is the parent organization of Let’s Encrypt, Prossimo, and Divvi Up. ISRG is a 501(c)(3) nonprofit. If you’d like to support our work, please consider getting involved, donating, or encouraging your company to become a sponsor.
Let’s Encrypt protects a vast portion of the Web by providing TLS certificates to over 550 million websites—a figure that has grown by 42% in the last year alone. We currently issue over 340,000 certificates per hour. To manage this immense traffic and maintain responsiveness under high demand, our infrastructure relies on rate limiting. In 2015, we introduced our first rate limiting system, built on MariaDB. It evolved alongside our rapidly growing service but eventually revealed its limits: straining database servers, forcing long reset times on subscribers, and slowing down every request.
We needed a solution built for the future—one that could scale with demand, reduce the load on MariaDB, and adapt to real-world subscriber request patterns. The result was a new rate limiting system powered by Redis and a proven virtual scheduling algorithm from the mid-90s: efficient, scalable, and capable of handling over a billion active certificates.
Rate Limiting a Free Service is Hard
In 2015, Let’s Encrypt was in early preview, and we faced a unique challenge. We were poised to become incredibly popular, offering certificates freely and without requiring contact information or email verification. Ensuring fair usage and preventing abuse without traditional safeguards demanded an atypical approach to rate limiting.
We decided to limit the number of certificates issued—per week—for each registered domain. Registered domains are a limited resource with real costs, making them a natural and effective basis for rate limiting—one that mirrors the structure of the Web itself. Specifically, this approach targets the effective Top-Level Domain (eTLD), as defined by the Public Suffix List (PSL), plus one additional label to the left. For example, in new.blog.example.co.uk, the eTLD is .co.uk, making example.co.uk the eTLD+1.
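The lookup described above can be sketched in a few lines. A real implementation consults the full Public Suffix List; the tiny suffix set and function name below are ours, for illustration only:

```python
# Toy subset of the Public Suffix List; real code loads the full list.
TOY_PSL = {"com", "org", "uk", "co.uk"}

def etld_plus_one(domain: str) -> str:
    """Return the eTLD+1: the longest known public suffix plus one label."""
    labels = domain.lower().split(".")
    # Scanning from the left, the first matching suffix is the longest one.
    for i in range(len(labels)):
        suffix = ".".join(labels[i:])
        if suffix in TOY_PSL:
            if i == 0:
                raise ValueError(f"{domain} is itself a public suffix")
            return ".".join(labels[i - 1:])  # suffix plus one label to the left
    raise ValueError(f"no known public suffix for {domain}")

print(etld_plus_one("new.blog.example.co.uk"))  # example.co.uk
```

Because "co.uk" is itself a public suffix, blog.example.co.uk and example.co.uk share the same eTLD+1 and draw from the same weekly allowance.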
Counting Events Was Easy
For each successfully issued certificate, we logged an entry in a table that recorded the registered domain, the issuance date, and other relevant details. To enforce rate limits, the system scanned this table, counted the rows matching a given registered domain within a specific time window, and compared the total to a configured threshold. This simple design formed the basis for all future rate limits.
Counting a Lot of Events Got Expensive
By 2019, we had added six new rate limits to protect our infrastructure as demand for certificates surged. Enforcing these limits required frequent scans of database tables to count recent matching events. These operations, especially on our heavily-used authorizations table, caused significant overhead, with reads outpacing all other tables—often by an order of magnitude.
Rate limit calculations were performed early in request processing and often. Counting rows in MariaDB, particularly for accounts with rate limit overrides, was inherently expensive and quickly became a scaling bottleneck.
Adding new limits required careful trade-offs. Decisions about whether to reuse existing schema, optimize indexes, or design purpose-built tables helped balance performance, complexity, and long-term maintainability.
Buying Runway — Offloading Reads
In late 2021, we updated our control plane and Boulder—our in-house CA software—to route most API reads, including rate limit checks, to database replicas. This reduced the load on the primary database and improved its overall health. At the same time, however, latency of rate limit checks during peak hours continued to rise, highlighting the limitations of scaling reads alone.
Sliding Windows Got Frustrating
Subscribers were frequently hitting rate limits unexpectedly, leaving them unable to request certificates for days. This issue stemmed from our use of relatively large rate limiting windows—most spanning a week. Subscribers could deplete their entire limit in just a few moments by repeating the same request, and find themselves locked out for the remainder of the week. This approach was inflexible and disruptive, causing unnecessary frustration and delays.
In early 2022, we patched the Duplicate Certificate limit to address this rigidity. Using a naive token-bucket approach, we allowed users to “earn back” requests incrementally, cutting the wait time—once rate limited—to about 1.4 days. The patch worked by fetching recent issuance timestamps and calculating the time between them to grant requests based on the time waited. This change also allowed us to include a Retry-After timestamp in rate limited responses. While this improved the user experience for this one limit, we understood it to be a temporary fix for a system in need of a larger overhaul.
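The earn-back arithmetic might look roughly like the following sketch. The function and constants are ours, chosen to match the numbers in this post (5 duplicates per 7 days, so one request is earned back every 7/5 = 1.4 days); the real patch worked against issuance records in MariaDB:

```python
from datetime import datetime, timedelta

LIMIT = 5                      # duplicate certificates allowed per window
WINDOW = timedelta(days=7)     # sliding window length
REFILL = WINDOW / LIMIT        # one request earned back every 1.4 days

def next_allowed(issuances: list[datetime], now: datetime) -> datetime:
    """Earliest time a new duplicate-certificate request would be granted."""
    recent = [t for t in issuances if now - t < WINDOW]
    if len(recent) < LIMIT:
        return now  # still under the limit; grant immediately
    # At the limit: a slot is "earned back" REFILL after the most recent grant,
    # and (next_allowed - now) is what goes in the Retry-After header.
    return max(recent) + REFILL
```

After a burst that depletes the whole limit, a subscriber waits about 1.4 days for the next certificate instead of the remainder of the week.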
When a Problem Grows Large Enough, It Finds the Time for You
Setting aside time for a complete overhaul of our rate-limiting system wasn’t easy. Our development team, composed of just three permanent engineers, typically juggles several competing priorities. Yet by 2023, our flagging rate limits code had begun to endanger the reliability of our MariaDB databases.
Our authorizations table was now regularly read an order of magnitude more than any other. Individually identifying and deleting unnecessary rows—or specific values—had proved unworkable due to poor MariaDB delete performance. Storage engines like InnoDB must maintain indexes, foreign key constraints, and transaction logs for every deletion, which significantly increases overhead for concurrent transactions and leads to gruelingly slow deletes.
Our SRE team automated the cleanup of old rows for many tables using the PARTITION command, which worked well for bookkeeping and compliance data. Unfortunately, we couldn’t apply it to most of our purpose-built rate limit tables. These tables depend on ON DUPLICATE KEY UPDATE, a mechanism that requires the targeted column to be a unique index or primary key, while partitioning demands that the primary key be included in the partitioning key.
Indexes on these tables—such as those tracking requested hostnames—often grew larger than the tables themselves and, in some cases, exceeded the memory of our smaller staging environment databases, eventually forcing us to periodically wipe them entirely.
By late 2023, this cascading confluence of complexities required a reckoning. We set out to design a rate limiting system built for the future.
The Solution: Redis + GCRA
We designed a system from the ground up that combines Redis for storage and the Generic Cell Rate Algorithm (GCRA) for managing request flow.
Why Redis?
Our engineers were already familiar with Redis, having recently deployed it to cache and serve OCSP responses. Its high throughput and low latency made it a candidate for tracking rate limit state as well.
By moving this data from MariaDB to Redis, we could eliminate the need for ever-expanding, purpose-built tables and indexes, significantly reducing read and write pressure. Redis’s feature set made it a perfect fit for the task. Most rate limit data is ephemeral—after a few days, or sometimes just minutes, it becomes irrelevant unless the subscriber calls us again. Redis’s per-key Time-To-Live would allow us to expire this data the moment it was no longer needed.
Redis also supports atomic integer operations, enabling fast, reliable counter updates, even when increments occur concurrently. Its “set if not exist” functionality ensures efficient initialization of keys, while pipeline support allows us to get and set multiple keys in bulk. This combination of familiarity, speed, simplicity, and flexibility made Redis the natural choice.
Why GCRA?
The Generic Cell Rate Algorithm (GCRA) is a virtual scheduling algorithm originally designed for telecommunication networks to regulate traffic and prevent congestion. Unlike traditional sliding window approaches that work in fixed time blocks, GCRA enforces rate limits continuously, making it well-suited to our goals.
A rate limit in GCRA is defined by two parameters: the emission interval and the burst tolerance. The emission interval specifies the minimum time that must pass between consecutive requests to maintain a steady rate. For example, an emission interval of one second allows one request per second on average. The burst tolerance determines how much unused capacity can be drawn on to allow short bursts of requests beyond the steady rate.
When a request is received, GCRA compares the current time to the Theoretical Arrival Time (TAT), which indicates when the next request is allowed under the steady rate. If the current time is greater than or equal to the TAT, the request is permitted, and the TAT is updated by adding the emission interval. If the current time plus the burst tolerance is greater than or equal to the TAT, the request is also permitted. In this case, the TAT is updated by adding the emission interval, reducing the remaining burst capacity.
However, if the current time plus the burst tolerance is less than the TAT, the request exceeds the rate limit and is denied. Conveniently, the difference between the TAT and the current time can then be returned to the subscriber in a Retry-After header, informing their client exactly how long to wait before trying again.
To illustrate, consider a rate limit of one request per second (emission interval = 1s) with a burst tolerance of three requests. Up to three requests can arrive back-to-back, but subsequent requests will be delayed until “now” catches up to the TAT, ensuring that the average rate over time remains one request per second.
What sets GCRA apart is its ability to automatically refill capacity gradually and continuously. Unlike sliding windows, where users must wait for an entire time block to reset, GCRA allows users to retry as soon as enough time has passed to maintain the steady rate. This dynamic pacing reduces frustration and provides a smoother, more predictable experience for subscribers.
GCRA is also storage and computationally efficient. It requires tracking only the TAT—stored as a single Unix timestamp—and performing simple arithmetic to enforce limits. This lightweight design allows it to scale to handle billions of requests, with minimal computational and memory overhead.
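Putting the description above together, here is a minimal in-memory Python sketch of GCRA. Boulder's production implementation (in Go, with TATs stored in Redis) will differ; the class and method names are ours, and initializing an unseen key's TAT to "now" is our assumption:

```python
class GCRA:
    """Minimal sketch of the Generic Cell Rate Algorithm described above.

    State per rate-limited key is a single Theoretical Arrival Time (TAT),
    mirroring the one-timestamp-per-key layout the post describes."""

    def __init__(self, emission_interval: float, burst_tolerance: float):
        self.emission_interval = emission_interval  # min seconds between requests
        self.burst_tolerance = burst_tolerance      # extra seconds of burst capacity
        self.tat: dict[str, float] = {}

    def check(self, key: str, now: float) -> tuple[bool, float]:
        """Return (allowed, retry_after_seconds), updating the TAT if allowed."""
        tat = self.tat.get(key, now)  # unseen key: start the TAT at "now" (assumption)
        if now + self.burst_tolerance >= tat:
            # Permitted: advance the TAT by one emission interval.
            self.tat[key] = max(tat, now) + self.emission_interval
            return True, 0.0
        # Denied: TAT minus "now" is what goes in the Retry-After header.
        return False, tat - now
```

With emission_interval=1.0 and burst_tolerance=2.0, three requests can arrive back-to-back (a burst of n requests corresponds to a tolerance of n-1 emission intervals in this formulation); the fourth is denied, and capacity then refills continuously rather than resetting at a window boundary.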
The Results: Faster, Smoother, and More Scalable
The transition to Redis and GCRA brought immediate, measurable improvements. We cut database load, improved response times, and delivered consistent performance even during periods of peak traffic. Subscribers now experience smoother, more predictable behavior, while the system’s increased permissiveness allows for certificates that the previous approach would have delayed—all achieved without sacrificing scalability or fairness.
Rate Limit Check Latency
Check latency is the extra time added to each request while verifying rate limit compliance. Under the old MariaDB-based system, these checks slowed noticeably during peak traffic, when database contention caused significant delays. Our new Redis-based system dramatically reduced this overhead. The high-traffic “new-order” endpoint saw the greatest improvement, while the “new-account” endpoint—though considerably lighter in traffic—also benefited, especially callers with IPv6 addresses. These results show that our subscribers now experience consistent response times, even under peak load.
Database Health
Our once strained database servers are now operating with ample headroom. In total, MariaDB operations have dropped by 80%, improving responsiveness, reducing contention, and freeing up resources for mission-critical issuance workflows.
Buffer pool requests have decreased by more than 50%, improving caching efficiency and reducing overall memory pressure.
Reads of the authorizations table—a notorious bottleneck—have dropped by over 99%. Previously, this table outpaced all others by more than two orders of magnitude; now it ranks second (the green line below), just narrowly surpassing our third most-read table.
Tracking Zombie Clients
In late 2024, we turned our new rate limiting system toward a longstanding challenge: “zombie clients.” These requesters repeatedly attempt to issue certificates but fail, often because of expired domains or misconfigured DNS records. Together, they generate nearly half of all order attempts yet almost never succeed. We were able to build on this new infrastructure to record consecutive ACME challenge failures by account/domain pair and automatically “pause” this problematic issuance. The result has been a considerable reduction in resource consumption, freeing database and network capacity without disrupting legitimate traffic.
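A toy sketch of that consecutive-failure bookkeeping might look like the following. The threshold and the reset-on-success behavior are our assumptions, not Let's Encrypt's published parameters, and production state lives in Redis rather than process memory:

```python
# Hypothetical threshold; not Let's Encrypt's actual value.
FAILURE_THRESHOLD = 5

failures: dict[tuple[str, str], int] = {}  # (account, hostname) -> consecutive fails
paused: set[tuple[str, str]] = set()       # pairs paused from further issuance

def record_challenge(account: str, hostname: str, ok: bool) -> None:
    """Track consecutive ACME challenge outcomes for an account/hostname pair."""
    key = (account, hostname)
    if ok:
        failures.pop(key, None)  # any success resets the consecutive count
        return
    failures[key] = failures.get(key, 0) + 1
    if failures[key] >= FAILURE_THRESHOLD:
        paused.add(key)  # stop wasting validation capacity on this pair
```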
Scalability on Redis
Before deploying the limits to track zombie clients, we maintained just over 12.6 million unique TATs across several Redis databases. Within 24 hours, that number more than doubled to 26 million, and by the end of the week, it peaked at over 30 million. Yet, even with this sharp increase, there was no noticeable impact on rate limit responsiveness. That’s all we’ll share for now about zombie clients—there’s plenty more to unpack, but we’ll save those insights and figures for a future blog post.
What’s Next?
Scaling our rate limits to keep pace with the growth of the Web is a huge achievement, but there’s still more to do. In the near term, many of our other ACME endpoints rely on load balancers to enforce per-IP limits, which works but gives us little control over the feedback provided to subscribers. We’re looking to deploy this new infrastructure across those endpoints as well. Looking further ahead, we’re exploring how we might redefine our rate limits now that we’re no longer constrained by a system that simply counts events between two points in time.
By adopting Redis and GCRA, we’ve built a flexible, efficient rate limit system that promotes fair usage and enables our infrastructure to handle ever-growing demand. We’ll keep adapting to the ever-evolving Web while honoring our primary goal: giving people the certificates they need, for free, in the most user-friendly way we can.
Since its inception, Let’s Encrypt has been sending expiration notification emails to subscribers that have provided an email address to us. We will be ending this service on June 4, 2025. The decision to end this service is the result of the following factors:
- Over the past 10 years more and more of our subscribers have been able to put reliable automation into place for certificate renewal.
- Providing expiration notification emails means that we have to retain millions of email addresses connected to issuance records. As an organization that values privacy, removing this requirement is important to us.
- Providing expiration notifications costs Let’s Encrypt tens of thousands of dollars per year, money that we believe can be better spent on other aspects of our infrastructure.
- Providing expiration notifications adds complexity to our infrastructure, which takes time and attention to manage and increases the likelihood of mistakes being made. Over the long term, particularly as we add support for new service components, we need to manage overall complexity by phasing out system components that can no longer be justified.
For those who would like to continue receiving expiration notifications, we recommend using a third-party service such as Red Sift Certificates Lite (formerly Hardenize). Red Sift’s monitoring service provides expiration emails free of charge for up to 250 certificates. More monitoring options can be found here.
While we will be minimizing the email addresses we retain connected to issuance data, you can opt in to receive other emails. We’ll keep you informed about technical updates, and other news about Let’s Encrypt and our parent nonprofit, ISRG, based on the preferences you choose. You can sign up for our email lists below:
This year we will continue to pursue our commitment to improving the security of the Web PKI by introducing the option to get certificates with six-day lifetimes (“short-lived certificates”). We will also add support for IP addresses in addition to domain names. Our longer-lived certificates, which currently have a lifetime of 90 days, will continue to be available alongside our six-day offering. Subscribers will be able to opt in to short-lived certificates via a certificate profile mechanism being added to our ACME API.
Shorter Certificate Lifetimes Are Good for Security
When the private key associated with a certificate is compromised, the recommendation has always been to have the certificate revoked so that people will know not to use it. Unfortunately, certificate revocation doesn’t work very well. This means that certificates with compromised keys (or other issues) may continue to be used until they expire. The longer the lifetime of the certificate, the longer the potential for use of a problematic certificate.
The primary advantage of short-lived certificates is that they greatly reduce the potential compromise window because they expire relatively quickly. This reduces the need for certificate revocation, which has historically been unreliable. Our six-day certificates will not include OCSP or CRL URLs. Additionally, short-lived certificates practically require automation, and we believe that automating certificate issuance is important for security.
IP Address Support For Securing Additional Use Cases
We will support including IP addresses as Subject Alternative Names in our six-day certificates. This will enable secure TLS connections, with publicly trusted certificates, to services made available via IP address, without the need for a domain name.
Validation for IP addresses will work much the same as validation for domain names, though validation will be restricted to the http-01 and tls-alpn-01 challenge types. The dns-01 challenge type will not be available because the DNS is not involved in validating IP addresses. Additionally, there is no mechanism to check CAA records for IP addresses.
Timeline
We expect to issue the first valid short-lived certificates to ourselves in February of this year. Around April we will enable short-lived certificates for a small set of early adopting subscribers. We hope to make short-lived certificates generally available by the end of 2025.
The earliest short-lived certificates we issue may not support IP addresses, but we intend to enable IP address support by the time short-lived certificates reach general availability.
How To Get Six-Day and IP Address Certificates
Once short-lived certificates are an option for you, you’ll need to use an ACME client that supports ACME certificate profiles and select the short-lived certificate profile (the name of which will be published at a later date).
Once IP address support is an option for you, requesting an IP address in a certificate will automatically select a short-lived certificate profile.
Looking Ahead
The best way to prepare to take advantage of short-lived certificates is to make sure your ACME client is reliably renewing certificates in an automated fashion. If that’s working well then there should be no costs to switching to short-lived certificates.
If you have questions or comments about our plans, feel free to let us know on our community forums.
We are excited to announce a new extension to Let’s Encrypt’s implementation of the ACME protocol that we are calling “profile selection.” This new feature will allow site operators and ACME clients to opt in to the next evolution of Let’s Encrypt.
As of today, the staging environment is advertising a new field in its directory resource:
GET /directory HTTP/1.1
HTTP/1.1 200 OK
Content-Type: application/json
{
  ...
  "meta": {
    "profiles": {
      "classic": "The same profile you're accustomed to",
      "tlsserver": "https://letsencrypt.org/2025/01/09/acme-profiles/"
    }
  }
}
Here, the keys are the names of new “profiles”, and the values are human-readable descriptions of those profiles. A profile describes a collection of attributes about the certificate that will be issued, such as what extensions it will contain, how long it will be valid for, and more.
For example, the “classic” profile is exactly what it sounds like: certificates issued under the classic profile will look exactly the same as those that we have always issued, valid for 90 days.
But certificates issued under the “tlsserver” profile will have a number of differences tailored specifically towards TLS server usage:
- No Common Name field (including a CN has been NOT RECOMMENDED by the Baseline Requirements for several years now)
- No Subject Key Identifier (including a SKID is NOT RECOMMENDED by the Baseline Requirements)
- No TLS Client Auth Extended Key Usage (root programs are moving towards requiring “single-purpose” issuance hierarchies, where every certificate has only a single EKU)
- No Key Encipherment Key Usage for certificates with RSA public keys (this KU was used by older RSA-based TLS cipher suites, but is fully unnecessary in TLS 1.3)
Additionally, in the near future we will offer a “shortlived” profile which will be identical to the “tlsserver” profile but with a validity period of only 6 days. This profile isn’t available in Staging just yet, so keep an eye out for further announcements regarding short-lived certificates and why we think they’re exciting.
An ACME client can supply a desired profile name in a new-order request:
POST /acme/new-order HTTP/1.1
Host: example.com
Content-Type: application/jose+json
{
  "protected": base64url(...),
  "payload": base64url({
    "profile": "tlsserver",
    "identifiers": [
      { "type": "dns", "value": "www.example.org" },
      { "type": "dns", "value": "example.org" }
    ]
  }),
  "signature": "H6ZXtGjTZyUnPeKn...wEA4TklBdh3e454g"
}
If the new-order request is accepted, then the selected profile name will be reflected in the Order object when it is returned, and the resulting certificate after finalization will be issued with the selected profile. If the new-order request does not specify a profile, then the server will select one for it.
Guidance for ACME clients and users
If you are an ACME client author, we encourage you to introduce support for this new field in your client. Start by taking a look at the draft specification in the IETF ACME Working Group. A simple implementation might allow the user to configure a static profile name and include that name in all new-order requests. For a better user experience, check the configured name against the list of profiles advertised in the directory, to ensure that changes to the available profiles don’t result in invalid new-order requests. For clients with a user interface, such as a control panel or interactive command line interface, an implementation could fetch the list of profiles and their descriptions to prompt the user to select one on first run. It could also use a notification mechanism to inform the user of changes to the list of available profiles. We’d also love to hear from you about your experience implementing and deploying this new extension.
If you are a site operator or ACME client user, we encourage you to keep an eye on your ACME client of choice to see when they adopt this new feature, and update your client when they do. We also encourage you to try out the modern “tlsserver” profile in Staging, and let us know what you think of the changes we’ve made to the certificates issued under that profile.
What’s next?
Obviously there is more work to be done here. The draft standard will go through multiple rounds of review and tweaks before becoming an IETF RFC, and our implementation will evolve along with it if necessary. Over the coming weeks and months we will also be providing more information about when we enable profile selection in our production environment, and what our production profile options will be.
Thank you for coming along with us on this journey into the future of the Web PKI. We look forward to your testing and feedback!

This letter was originally published in our 2024 Annual Report.
The past year at ISRG has been a great one and I couldn’t be more proud of our staff, community, funders, and other partners that made it happen. Let’s Encrypt continues to thrive, serving more websites around the world than ever before with excellent security and stability. Our understanding of what it will take to make privacy-preserving metrics more mainstream via our Divvi Up project is evolving in important ways.
Prossimo has made important investments in making critical software infrastructure safer, from TLS and DNS to the Linux kernel.
Next year is the 10th anniversary of the launch of Let’s Encrypt. Internally things have changed dramatically from what they looked like ten years ago, but outwardly our service hasn’t changed much since launch. That’s because the vision we had for how best to do our job remains as powerful today as it ever was: free 90-day TLS certificates via an automated API. Pretty much as many as you need. More than 500,000,000 websites benefit from this offering today, and the vast majority of the web is encrypted.
Our longstanding offering won’t fundamentally change next year, but we are going to introduce a new offering that’s a big shift from anything we’ve done before - short-lived certificates. Specifically, certificates with a lifetime of six days. This is a big upgrade for the security of the TLS ecosystem because it minimizes exposure time during a key compromise event.
Because we’ve done so much to encourage automation over the past decade, most of our subscribers aren’t going to have to do much in order to switch to shorter lived certificates. We, on the other hand, are going to have to think about the possibility that we will need to issue 20x as many certificates as we do now. It’s not inconceivable that at some point in our next decade we may need to be prepared to issue 100,000,000 certificates per day.
That sounds sort of nuts to me today, but issuing 5,000,000 certificates per day would have sounded crazy to me ten years ago. Here’s the thing though, and this is what I love about the combination of our staff, partners, and funders - whatever it is we need to do to doggedly pursue our mission, we’re going to get it done. It was hard to build Let’s Encrypt. It was difficult to scale it to serve half a billion websites. Getting our Divvi Up service up and running from scratch in three months to service exposure notification applications was not easy. Our Prossimo project was a primary contributor to the creation of a TLS library that provides memory safety while outperforming its peers - a heavy lift.
Charitable contributions from people like you and organizations around the world make this stuff possible. Since 2015, tens of thousands of people have donated. They’ve made a case for corporate sponsorship, given through their DAFs, or set up recurring donations, sometimes to give $3 a month. That’s all added up to millions of dollars that we’ve used to change the Internet for nearly everyone using it. I hope you’ll join these people and help lay the foundation for another great decade.
Josh Aas
Executive Director
Earlier this year we announced our intent to provide certificate revocation information exclusively via Certificate Revocation Lists (CRLs), ending support for providing certificate revocation information via the Online Certificate Status Protocol (OCSP). Today we are providing a timeline for ending OCSP services:
- January 30, 2025
- OCSP Must Staple requests will fail, unless the requesting account has previously issued a certificate containing the OCSP Must Staple extension
- May 7, 2025
- Prior to this date we will have added CRL URLs to certificates
- On this date we will drop OCSP URLs from certificates
- On this date all requests including the OCSP Must Staple extension will fail
- August 6, 2025
- On this date we will turn off our OCSP responders
Additionally, a very small percentage of our subscribers request certificates with the OCSP Must Staple Extension. If you have manually configured your ACME client to request that extension, action is required before May 7. See “Must Staple” below for details.
OCSP and CRLs are both mechanisms by which CAs can communicate certificate revocation information, but CRLs have significant advantages over OCSP. Let’s Encrypt has been providing an OCSP responder since our launch nearly ten years ago. We added support for CRLs in 2022.
Websites and people who visit them will not be affected by this change, but some non-browser software might be.
We plan to end support for OCSP primarily because it represents a considerable risk to privacy on the Internet. When someone visits a website using a browser or other software that checks for certificate revocation via OCSP, the Certificate Authority (CA) operating the OCSP responder immediately becomes aware of which website is being visited from that visitor’s particular IP address. Even when a CA intentionally does not retain this information, as is the case with Let’s Encrypt, CAs could be legally compelled to collect it. CRLs do not have this issue.
We are also taking this step because keeping our CA infrastructure as simple as possible is critical for the continuity of compliance, reliability, and efficiency at Let’s Encrypt. For every year that we have existed, operating OCSP services has taken up considerable resources that can soon be better spent on other aspects of our operations. Now that we support CRLs, our OCSP service has become unnecessary.
We recommend that anyone relying on OCSP services today start the process of ending that reliance as soon as possible. If you use Let’s Encrypt certificates to secure non-browser communications such as a VPN, you should ensure that your software operates correctly if certificates contain no OCSP URL.
Must Staple
Because of the privacy issues with OCSP, browsers and servers implement a feature called “OCSP Stapling”, where the web server sends a copy of the appropriate OCSP response during the TLS handshake, and the browser skips making a request to the CA, thus better preserving privacy.
In addition to OCSP Stapling (a TLS feature negotiated at handshake time), there’s an extension that can be added to certificates at issuance time, colloquially called “OCSP Must Staple.” This tells browsers that, if they see that extension in a certificate, they should never contact the CA about it and should instead expect to see a stapled copy in the handshake. Failing that, browsers should refuse to connect. This was designed to solve some security problems with revocation.
Let’s Encrypt has supported OCSP Must Staple for a long time, because of the potential to improve both privacy and security. However, Must Staple has failed to get wide browser support after many years. And popular web servers still implement OCSP Stapling in ways that create serious risks of downtime.
As part of removing OCSP, we’ll also be removing support for OCSP Must Staple. CRLs have wide browser support and can provide privacy benefits to all sites, without requiring special web server configuration. Thanks to all our subscribers who have helped with the OCSP Must Staple experiment.
If you are not certain whether you are using OCSP Must Staple, you can check this list of hostnames and certificate serials (11.1 MB, .zip).
As of January 30, 2025, issuance requests that include the OCSP Must Staple extension will fail, unless the requesting account has previously issued a certificate containing the OCSP Must Staple extension.
As of May 7, 2025, all issuance requests that include the OCSP Must Staple extension will fail, including renewals. Please change your ACME client configuration to not request the extension.
Thu, 05 Dec 2024 00:00:00 +0000
Intent to End OCSP Service
Earlier this year we announced our intention to introduce short-lived certificates with lifetimes of six days as an option for our subscribers. Yesterday we issued our first short-lived certificate. You can see the certificate at the bottom of our post, or here thanks to Certificate Transparency logs. We issued it to ourselves and then immediately revoked it so we can observe the certificate’s whole lifecycle. This is the first step towards making short-lived certificates available to all subscribers.
The next step is for us to make short-lived certificates available to a small set of our subscribers so we can make sure our systems scale as expected prior to general availability. We expect this next phase to begin during Q2 of this year.
We expect short-lived certificates to be generally available by the end of this year.
How To Get Six-Day Certificates
Once short-lived certificates are an option for you, you’ll need to use an ACME client that supports ACME certificate profiles and select the short-lived certificate profile (“shortlived”). The lego client recently added this functionality.
In the meantime, the best way to prepare to take advantage of short-lived certificates is to make sure your ACME client is reliably renewing certificates in an automated fashion. If that’s working well then there should be no costs to switching to short-lived certificates.
You’ll also want to be sure your ACME client is running frequently - both for the sake of renewing short-lived certificates and so as to take advantage of ACME Renewal Information (ARI). ARI allows Let’s Encrypt to notify your client if it should renew early for some reason. ARI checks should happen at least once per day, and short-lived certificates should be renewed every two to three days, so we recommend having your client run at least once per day.
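As a sketch, a client might decide whether to renew by checking how much of the certificate's lifetime has elapsed. The one-half threshold below is an illustrative choice matching the two-to-three-day cadence for six-day certificates, not a value prescribed by Let's Encrypt; an ARI response, when available, should take precedence.

```python
# Renew once roughly half of the certificate's lifetime has elapsed
# (for a six-day certificate, after about three days). The 0.5 fraction
# is an assumption for illustration, not a Let's Encrypt requirement.
from datetime import datetime, timedelta, timezone

def should_renew(not_before: datetime, not_after: datetime,
                 now: datetime, elapsed_fraction: float = 0.5) -> bool:
    lifetime = not_after - not_before
    return now - not_before >= lifetime * elapsed_fraction

nb = datetime(2025, 2, 19, tzinfo=timezone.utc)
na = nb + timedelta(days=6)
print(should_renew(nb, na, nb + timedelta(days=2)))  # False
print(should_renew(nb, na, nb + timedelta(days=3)))  # True
```

A daily client run would evaluate this check each time, so a six-day certificate gets renewed on the third or fourth day.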
Shorter Certificate Lifetimes Are Good for Security
When the private key associated with a certificate is compromised, the recommendation has always been to have the certificate revoked so that people will know not to use it. Unfortunately, certificate revocation doesn’t work very well. This means that certificates with compromised keys (or other issues) may continue to be used until they expire. The longer the lifetime of the certificate, the longer the potential for use of a problematic certificate.
The primary advantage of short-lived certificates is that they greatly reduce the potential compromise window because they expire relatively quickly. This reduces the need for certificate revocation, which has historically been unreliable. Our six-day certificates will not include OCSP or CRL URLs. Additionally, short-lived certificates practically require automation, and we believe that automating certificate issuance is important for security.
Questions
If you have questions or comments about our plans, feel free to let us know on our community forums.
We’d like to thank Open Technology Fund for supporting this work.
Our First 6-Day Certificate
PEM format:
-----BEGIN CERTIFICATE-----
MIIDSzCCAtGgAwIBAgISA7CwFcGk4mQWEXMacRtxHeDvMAoGCCqGSM49BAMDMDIx
CzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MQswCQYDVQQDEwJF
NjAeFw0yNTAyMTkxNzMwMDFaFw0yNTAyMjYwOTMwMDBaMAAwWTATBgcqhkjOPQIB
BggqhkjOPQMBBwNCAAQoSItt2V1aocI5dxrKR8iLfmm0KiVvOhiwKByzu2kLeC7C
0BdfAgtwdICdkuEhAXokhXLq6DNZZgmh5T4flVwZo4IB9zCCAfMwDgYDVR0PAQH/
BAQDAgeAMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwHwYDVR0j
BBgwFoAUkydGmAOpUWiOmNbEQkjbI79YlNIwVQYIKwYBBQUHAQEESTBHMCEGCCsG
AQUFBzABhhVodHRwOi8vZTYuby5sZW5jci5vcmcwIgYIKwYBBQUHMAKGFmh0dHA6
Ly9lNi5pLmxlbmNyLm9yZy8wKAYDVR0RAQH/BB4wHIIaaGVsbG93b3JsZC5sZXRz
ZW5jcnlwdC5vcmcwEwYDVR0gBAwwCjAIBgZngQwBAgEwggEFBgorBgEEAdZ5AgQC
BIH2BIHzAPEAdgDM+w9qhXEJZf6Vm1PO6bJ8IumFXA2XjbapflTA/kwNsAAAAZUf
d/zOAAAEAwBHMEUCIFNd51TfSNiJrO+294t49C5ANc4oC7gTUzf7xnlNlhKsAiEA
wi5hfiC9SsKLxlTQ0sctUxhLmdYh40r6ECWQS/yWw2AAdwDgkrP8DB3I52g2H95h
uZZNClJ4GYpy1nLEsE2lbW9UBAAAAZUfd/0TAAAEAwBIMEYCIQCs2NuZIUIloOaH
1t9eXDKb8bjoWESBPsK4i2BxMvEIswIhAOMNaQNyr1YkzrcNUz15qGV0oVLg5BJN
+ikWxXOdcRHFMAoGCCqGSM49BAMDA2gAMGUCMDANqy7G09AIwzXcd7SNl7uFwhC+
xlfduvp1PeEDHc/FA9K3mRYkGXuKtzNdOh7wcAIxALjEMDmBQiwXbB447oGkaZAe
0rqxA3EtNV5wj0obeObluj/NgUsVEG9OqiBIoggFRw==
-----END CERTIFICATE-----
openssl x509 -text output:
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            03:b0:b0:15:c1:a4:e2:64:16:11:73:1a:71:1b:71:1d:e0:ef
        Signature Algorithm: ecdsa-with-SHA384
        Issuer: C=US, O=Let's Encrypt, CN=E6
        Validity
            Not Before: Feb 19 17:30:01 2025 GMT
            Not After : Feb 26 09:30:00 2025 GMT
        Subject:
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (256 bit)
                pub:
                    04:28:48:8b:6d:d9:5d:5a:a1:c2:39:77:1a:ca:47:
                    c8:8b:7e:69:b4:2a:25:6f:3a:18:b0:28:1c:b3:bb:
                    69:0b:78:2e:c2:d0:17:5f:02:0b:70:74:80:9d:92:
                    e1:21:01:7a:24:85:72:ea:e8:33:59:66:09:a1:e5:
                    3e:1f:95:5c:19
                ASN1 OID: prime256v1
                NIST CURVE: P-256
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Authority Key Identifier:
                93:27:46:98:03:A9:51:68:8E:98:D6:C4:42:48:DB:23:BF:58:94:D2
            Authority Information Access:
                OCSP - URI:http://e6.o.lencr.org
                CA Issuers - URI:http://e6.i.lencr.org/
            X509v3 Subject Alternative Name: critical
                DNS:helloworld.letsencrypt.org
            X509v3 Certificate Policies:
                Policy: 2.23.140.1.2.1
            CT Precertificate SCTs:
                Signed Certificate Timestamp:
                    Version   : v1 (0x0)
                    Log ID    : CC:FB:0F:6A:85:71:09:65:FE:95:9B:53:CE:E9:B2:7C:
                                22:E9:85:5C:0D:97:8D:B6:A9:7E:54:C0:FE:4C:0D:B0
                    Timestamp : Feb 19 18:28:32.078 2025 GMT
                    Extensions: none
                    Signature : ecdsa-with-SHA256
                                30:45:02:20:53:5D:E7:54:DF:48:D8:89:AC:EF:B6:F7:
                                8B:78:F4:2E:40:35:CE:28:0B:B8:13:53:37:FB:C6:79:
                                4D:96:12:AC:02:21:00:C2:2E:61:7E:20:BD:4A:C2:8B:
                                C6:54:D0:D2:C7:2D:53:18:4B:99:D6:21:E3:4A:FA:10:
                                25:90:4B:FC:96:C3:60
                Signed Certificate Timestamp:
                    Version   : v1 (0x0)
                    Log ID    : E0:92:B3:FC:0C:1D:C8:E7:68:36:1F:DE:61:B9:96:4D:
                                0A:52:78:19:8A:72:D6:72:C4:B0:4D:A5:6D:6F:54:04
                    Timestamp : Feb 19 18:28:32.147 2025 GMT
                    Extensions: none
                    Signature : ecdsa-with-SHA256
                                30:46:02:21:00:AC:D8:DB:99:21:42:25:A0:E6:87:D6:
                                DF:5E:5C:32:9B:F1:B8:E8:58:44:81:3E:C2:B8:8B:60:
                                71:32:F1:08:B3:02:21:00:E3:0D:69:03:72:AF:56:24:
                                CE:B7:0D:53:3D:79:A8:65:74:A1:52:E0:E4:12:4D:FA:
                                29:16:C5:73:9D:71:11:C5
    Signature Algorithm: ecdsa-with-SHA384
    Signature Value:
        30:65:02:30:30:0d:ab:2e:c6:d3:d0:08:c3:35:dc:77:b4:8d:
        97:bb:85:c2:10:be:c6:57:dd:ba:fa:75:3d:e1:03:1d:cf:c5:
        03:d2:b7:99:16:24:19:7b:8a:b7:33:5d:3a:1e:f0:70:02:31:
        00:b8:c4:30:39:81:42:2c:17:6c:1e:38:ee:81:a4:69:90:1e:
        d2:ba:b1:03:71:2d:35:5e:70:8f:4a:1b:78:e6:e5:ba:3f:cd:
        81:4b:15:10:6f:4e:aa:20:48:a2:08:05:47

2025 marks ten years of Let’s Encrypt. Already this year we’ve taken steps to continue to deliver on our values of user privacy, efficiency, and innovation, all with the intent of continuing to deliver free TLS certificates to as many people as possible; to deliver encryption for everybody.
And while we’re excited about the technical progress we’ll make this year, we’re also going to celebrate this tenth anniversary by highlighting the people around the world who make our impact possible. It’s no small village.
From a community forum that has provided free technical support, to our roster of sponsors who provide vital funding, to the thousands of individual supporters who contribute financially to Let’s Encrypt each year, free TLS at Internet scale works because people have supported it year in, year out, for ten years.
Each month we’ll highlight a different set of people behind our “everybody.” Who do you want to see us highlight? What use cases of Let’s Encrypt have you seen that amazed you? What about our work do you hope we’ll continue or improve as we go forward? Let us know on LinkedIn, or drop a note to outreach@letsencrypt.org.
Encryption for Everybody is our unofficial tagline for this tenth anniversary year. What we love about it is that, yes, it captures our commitment to ensuring anyone around the world can easily get a cert for free. But more importantly, it captures the reality that technical innovation won’t work without people believing in it and supporting it. We’re grateful that, for ten years (and counting!), our community of supporters has made an impact on the lives of billions of Internet users—an impact that’s made the Web more secure and privacy-respecting for everybody, everywhere.
Internet Security Research Group (ISRG) is the parent organization of Let’s Encrypt, Prossimo, and Divvi Up. ISRG is a 501(c)(3) nonprofit. If you’d like to support our work, please consider getting involved, donating, or encouraging your company to become a sponsor.
Let’s Encrypt protects a vast portion of the Web by providing TLS certificates to over 550 million websites—a figure that has grown by 42% in the last year alone. We currently issue over 340,000 certificates per hour. To manage this immense traffic and maintain responsiveness under high demand, our infrastructure relies on rate limiting. In 2015, we introduced our first rate limiting system, built on MariaDB. It evolved alongside our rapidly growing service but eventually revealed its limits: straining database servers, forcing long reset times on subscribers, and slowing down every request.
We needed a solution built for the future—one that could scale with demand, reduce the load on MariaDB, and adapt to real-world subscriber request patterns. The result was a new rate limiting system powered by Redis and a proven virtual scheduling algorithm from the mid-90s: efficient, scalable, and capable of handling over a billion active certificates.
Rate Limiting a Free Service is Hard
In 2015, Let’s Encrypt was in early preview, and we faced a unique challenge. We were poised to become incredibly popular, offering certificates freely and without requiring contact information or email verification. Ensuring fair usage and preventing abuse without traditional safeguards demanded an atypical approach to rate limiting.
We decided to limit the number of certificates issued—per week—for each registered domain. Registered domains are a limited resource with real costs, making them a natural and effective basis for rate limiting—one that mirrors the structure of the Web itself. Specifically, this approach targets the effective Top-Level Domain (eTLD), as defined by the Public Suffix List (PSL), plus one additional label to the left. For example, in new.blog.example.co.uk, the eTLD is .co.uk, making example.co.uk the eTLD+1.
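As an illustration, eTLD+1 extraction can be sketched as follows. Real implementations consult the full Public Suffix List via a PSL library; the tiny suffix set here is a stand-in, just large enough to mirror the example above.

```python
# Toy eTLD+1 lookup: find the longest matching public suffix, then keep
# one additional label to its left. SUFFIXES is an assumed, minuscule
# subset of the real PSL.
SUFFIXES = {"com", "org", "uk", "co.uk"}

def etld_plus_one(domain: str) -> str:
    labels = domain.lower().split(".")
    # Scanning from the left finds the longest suffix match first.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in SUFFIXES:
            if i == 0:
                raise ValueError(f"{domain} is itself a public suffix")
            return ".".join(labels[i - 1:])
    raise ValueError(f"no known suffix for {domain}")

print(etld_plus_one("new.blog.example.co.uk"))  # example.co.uk
```

All certificates for names under example.co.uk then count against the same weekly limit, no matter how many subdomains are involved.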
Counting Events Was Easy
For each successfully issued certificate, we logged an entry in a table that recorded the registered domain, the issuance date, and other relevant details. To enforce rate limits, the system scanned this table, counted the rows matching a given registered domain within a specific time window, and compared the total to a configured threshold. This simple design formed the basis for all future rate limits.
Counting a Lot of Events Got Expensive
By 2019, we had added six new rate limits to protect our infrastructure as demand for certificates surged. Enforcing these limits required frequent scans of database tables to count recent matching events. These operations, especially on our heavily-used authorizations table, caused significant overhead, with reads outpacing all other tables—often by an order of magnitude.
Rate limit calculations were performed early in request processing and often. Counting rows in MariaDB, particularly for accounts with rate limit overrides, was inherently expensive and quickly became a scaling bottleneck.
Adding new limits required careful trade-offs. Decisions about whether to reuse existing schema, optimize indexes, or design purpose-built tables helped balance performance, complexity, and long-term maintainability.
Buying Runway — Offloading Reads
In late 2021, we updated our control plane and Boulder—our in-house CA software—to route most API reads, including rate limit checks, to database replicas. This reduced the load on the primary database and improved its overall health. At the same time, however, latency of rate limit checks during peak hours continued to rise, highlighting the limitations of scaling reads alone.
Sliding Windows Got Frustrating
Subscribers were frequently hitting rate limits unexpectedly, leaving them unable to request certificates for days. This issue stemmed from our use of relatively large rate limiting windows—most spanning a week. Subscribers could deplete their entire limit in just a few moments by repeating the same request, and find themselves locked out for the remainder of the week. This approach was inflexible and disruptive, causing unnecessary frustration and delays.
In early 2022, we patched the Duplicate Certificate limit to address this rigidity. Using a naive token-bucket approach, we allowed users to “earn back” requests incrementally, cutting the wait time—once rate limited—to about 1.4 days. The patch worked by fetching recent issuance timestamps and calculating the time between them to grant requests based on the time waited. This change also allowed us to include a Retry-After timestamp in rate limited responses. While this improved the user experience for this one limit, we understood it to be a temporary fix for a system in need of a larger overhaul.
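A simplified model of that earn-back patch (the production logic differed in detail). With a Duplicate Certificate limit of 5 per week, one request is earned back every 7/5 = 1.4 days, which is where the roughly 1.4-day wait comes from.

```python
# Naive token-bucket "earn back": once at the limit, the next request is
# allowed one earn-back interval after the most recent issuance, rather
# than after the whole window expires. Constants mirror the Duplicate
# Certificate limit described in the text.
from datetime import datetime, timedelta, timezone

LIMIT = 5
WINDOW = timedelta(days=7)
EARN_BACK = WINDOW / LIMIT  # 1.4 days per earned-back request

def retry_after(issuances: list[datetime], now: datetime) -> timedelta:
    """Zero if a request may proceed; otherwise how long to wait."""
    recent = sorted(t for t in issuances if now - t < WINDOW)
    if len(recent) < LIMIT:
        return timedelta(0)
    waited = now - recent[-1]  # time since the most recent issuance
    if waited >= EARN_BACK:
        return timedelta(0)
    return EARN_BACK - waited

t0 = datetime(2025, 1, 1, tzinfo=timezone.utc)
print(retry_after([t0] * 5, t0))  # 1 day, 9:36:00  (i.e. 1.4 days)
```

The returned duration maps directly onto the Retry-After timestamp mentioned above.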
When a Problem Grows Large Enough, It Finds the Time for You
Setting aside time for a complete overhaul of our rate-limiting system wasn’t easy. Our development team, composed of just three permanent engineers, typically juggles several competing priorities. Yet by 2023, our flagging rate limits code had begun to endanger the reliability of our MariaDB databases.
Our authorizations table was now regularly read an order of magnitude more than any other. Individually identifying and deleting unnecessary rows—or specific values—had proved unworkable due to poor MariaDB delete performance. Storage engines like InnoDB must maintain indexes, foreign key constraints, and transaction logs for every deletion, which significantly increases overhead for concurrent transactions and leads to gruelingly slow deletes.
Our SRE team automated the cleanup of old rows for many tables using the PARTITION command, which worked well for bookkeeping and compliance data. Unfortunately, we couldn’t apply it to most of our purpose-built rate limit tables. These tables depend on ON DUPLICATE KEY UPDATE, a mechanism that requires the targeted column to be a unique index or primary key, while partitioning demands that the primary key be included in the partitioning key.
Indexes on these tables—such as those tracking requested hostnames—often grew larger than the tables themselves and, in some cases, exceeded the memory of our smaller staging environment databases, eventually forcing us to periodically wipe them entirely.
By late 2023, this cascading confluence of complexities required a reckoning. We set out to design a rate limiting system built for the future.
The Solution: Redis + GCRA
We designed a system from the ground up that combines Redis for storage and the Generic Cell Rate Algorithm (GCRA) for managing request flow.
Why Redis?
Our engineers were already familiar with Redis, having recently deployed it to cache and serve OCSP responses. Its high throughput and low latency made it a candidate for tracking rate limit state as well.
By moving this data from MariaDB to Redis, we could eliminate the need for ever-expanding, purpose-built tables and indexes, significantly reducing read and write pressure. Redis’s feature set made it a perfect fit for the task. Most rate limit data is ephemeral—after a few days, or sometimes just minutes, it becomes irrelevant unless the subscriber calls us again. Redis’s per-key Time-To-Live would allow us to expire this data the moment it was no longer needed.
Redis also supports atomic integer operations, enabling fast, reliable counter updates, even when increments occur concurrently. Its “set if not exist” functionality ensures efficient initialization of keys, while pipeline support allows us to get and set multiple keys in bulk. This combination of familiarity, speed, simplicity, and flexibility made Redis the natural choice.
Why GCRA?
The Generic Cell Rate Algorithm (GCRA) is a virtual scheduling algorithm originally designed for telecommunication networks to regulate traffic and prevent congestion. Unlike traditional sliding window approaches that work in fixed time blocks, GCRA enforces rate limits continuously, making it well-suited to our goals.
A rate limit in GCRA is defined by two parameters: the emission interval and the burst tolerance. The emission interval specifies the minimum time that must pass between consecutive requests to maintain a steady rate. For example, an emission interval of one second allows one request per second on average. The burst tolerance determines how much unused capacity can be drawn on to allow short bursts of requests beyond the steady rate.
When a request is received, GCRA compares the current time to the Theoretical Arrival Time (TAT), which indicates when the next request is allowed under the steady rate. If the current time is greater than or equal to the TAT, the request is permitted, and the TAT is updated by adding the emission interval. If the current time plus the burst tolerance is greater than or equal to the TAT, the request is also permitted. In this case, the TAT is updated by adding the emission interval, reducing the remaining burst capacity.
However, if the current time plus the burst tolerance is less than the TAT, the request exceeds the rate limit and is denied. Conveniently, the difference between the TAT and the current time can then be returned to the subscriber in a Retry-After header, informing their client exactly how long to wait before trying again.
To illustrate, consider a rate limit of one request per second (emission interval = 1s) with a burst tolerance of three requests. Up to three requests can arrive back-to-back, but subsequent requests will be delayed until “now” catches up to the TAT, ensuring that the average rate over time remains one request per second.
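The decision rule above can be captured in a few lines of Python. This is a simplified, self-contained model (times as Unix-style floats, burst tolerance expressed in seconds, so a burst of three requests at a one-second emission interval corresponds to a tolerance of two seconds); Boulder's production implementation stores the TAT in Redis and differs in detail.

```python
def gcra(now: float, tat: float, emission_interval: float,
         burst_tolerance: float) -> tuple[bool, float, float]:
    """Return (allowed, retry_after_seconds, new_tat).

    tat is the Theoretical Arrival Time; on first sight of a key it can
    be initialized to `now`.
    """
    if now + burst_tolerance >= tat:
        # Permitted: advance the TAT by one emission interval.
        return True, 0.0, max(now, tat) + emission_interval
    # Denied: the gap until the TAT becomes the Retry-After value.
    return False, tat - now, tat

# One request/second steady rate, burst of three (tolerance = 2s):
tat = 0.0
results = []
for _ in range(4):  # four back-to-back requests at t=0
    allowed, retry, tat = gcra(0.0, tat, 1.0, 2.0)
    results.append((allowed, retry))
print(results)  # first three allowed, fourth denied with Retry-After 3.0
```

Note that `max(now, tat)` keeps an idle client from accumulating unbounded credit: capacity refills only up to the burst tolerance.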
What sets GCRA apart is its ability to automatically refill capacity gradually and continuously. Unlike sliding windows, where users must wait for an entire time block to reset, GCRA allows users to retry as soon as enough time has passed to maintain the steady rate. This dynamic pacing reduces frustration and provides a smoother, more predictable experience for subscribers.
GCRA is also storage and computationally efficient. It requires tracking only the TAT—stored as a single Unix timestamp—and performing simple arithmetic to enforce limits. This lightweight design allows it to scale to handle billions of requests, with minimal computational and memory overhead.
The Results: Faster, Smoother, and More Scalable
The transition to Redis and GCRA brought immediate, measurable improvements. We cut database load, improved response times, and delivered consistent performance even during periods of peak traffic. Subscribers now experience smoother, more predictable behavior, while the system’s increased permissiveness allows for certificates that the previous approach would have delayed—all achieved without sacrificing scalability or fairness.
Rate Limit Check Latency
Check latency is the extra time added to each request while verifying rate limit compliance. Under the old MariaDB-based system, these checks slowed noticeably during peak traffic, when database contention caused significant delays. Our new Redis-based system dramatically reduced this overhead. The high-traffic “new-order” endpoint saw the greatest improvement, while the “new-account” endpoint—though considerably lighter in traffic—also benefited, especially callers with IPv6 addresses. These results show that our subscribers now experience consistent response times, even under peak load.
Database Health
Our once strained database servers are now operating with ample headroom. In total, MariaDB operations have dropped by 80%, improving responsiveness, reducing contention, and freeing up resources for mission-critical issuance workflows.
Buffer pool requests have decreased by more than 50%, improving caching efficiency and reducing overall memory pressure.
Reads of the authorizations table—a notorious bottleneck—have dropped by over 99%. Previously, this table outpaced all others by more than two orders of magnitude; now it ranks second (the green line below), just narrowly surpassing our third most-read table.
Tracking Zombie Clients
In late 2024, we turned our new rate limiting system toward a longstanding challenge: “zombie clients.” These requesters repeatedly attempt to issue certificates but fail, often because of expired domains or misconfigured DNS records. Together, they generate nearly half of all order attempts yet almost never succeed. We were able to build on this new infrastructure to record consecutive ACME challenge failures by account/domain pair and automatically “pause” this problematic issuance. The result has been a considerable reduction in resource consumption, freeing database and network capacity without disrupting legitimate traffic.
Scalability on Redis
Before deploying the limits to track zombie clients, we maintained just over 12.6 million unique TATs across several Redis databases. Within 24 hours, that number more than doubled to 26 million, and by the end of the week, it peaked at over 30 million. Yet, even with this sharp increase, there was no noticeable impact on rate limit responsiveness. That’s all we’ll share for now about zombie clients—there’s plenty more to unpack, but we’ll save those insights and figures for a future blog post.
What’s Next?
Scaling our rate limits to keep pace with the growth of the Web is a huge achievement, but there’s still more to do. In the near term, many of our other ACME endpoints rely on load balancers to enforce per-IP limits, which works but gives us little control over the feedback provided to subscribers. We’re looking to deploy this new infrastructure across those endpoints as well. Looking further ahead, we’re exploring how we might redefine our rate limits now that we’re no longer constrained by a system that simply counts events between two points in time.
By adopting Redis and GCRA, we’ve built a flexible, efficient rate limit system that promotes fair usage and enables our infrastructure to handle ever-growing demand. We’ll keep adapting to the ever-evolving Web while honoring our primary goal: giving people the certificates they need, for free, in the most user-friendly way we can.
Since its inception, Let’s Encrypt has been sending expiration notification emails to subscribers that have provided an email address to us. We will be ending this service on June 4, 2025. The decision to end this service is the result of the following factors:
- Over the past 10 years more and more of our subscribers have been able to put reliable automation into place for certificate renewal.
- Providing expiration notification emails means that we have to retain millions of email addresses connected to issuance records. As an organization that values privacy, removing this requirement is important to us.
- Providing expiration notifications costs Let’s Encrypt tens of thousands of dollars per year, money that we believe can be better spent on other aspects of our infrastructure.
- Providing expiration notifications adds complexity to our infrastructure, which takes time and attention to manage and increases the likelihood of mistakes being made. Over the long term, particularly as we add support for new service components, we need to manage overall complexity by phasing out system components that can no longer be justified.
For those who would like to continue receiving expiration notifications, we recommend using a third-party service such as Red Sift Certificates Lite (formerly Hardenize). Red Sift’s monitoring service provides expiration emails free of charge for up to 250 certificates. More monitoring options can be found here.
While we will be minimizing the email addresses we retain connected to issuance data, you can opt in to receive other emails. We’ll keep you informed about technical updates, and other news about Let’s Encrypt and our parent nonprofit, ISRG, based on the preferences you choose. You can sign up for our email lists below:
This year we will continue to pursue our commitment to improving the security of the Web PKI by introducing the option to get certificates with six-day lifetimes (“short-lived certificates”). We will also add support for IP addresses in addition to domain names. Our longer-lived certificates, which currently have a lifetime of 90 days, will continue to be available alongside our six-day offering. Subscribers will be able to opt in to short-lived certificates via a certificate profile mechanism being added to our ACME API.
Shorter Certificate Lifetimes Are Good for Security
When the private key associated with a certificate is compromised, the recommendation has always been to have the certificate revoked so that people will know not to use it. Unfortunately, certificate revocation doesn’t work very well. This means that certificates with compromised keys (or other issues) may continue to be used until they expire. The longer the lifetime of the certificate, the longer the potential for use of a problematic certificate.
The primary advantage of short-lived certificates is that they greatly reduce the potential compromise window because they expire relatively quickly. This reduces the need for certificate revocation, which has historically been unreliable. Our six-day certificates will not include OCSP or CRL URLs. Additionally, short-lived certificates practically require automation, and we believe that automating certificate issuance is important for security.
IP Address Support For Securing Additional Use Cases
We will support including IP addresses as Subject Alternative Names in our six-day certificates. This will enable secure TLS connections, with publicly trusted certificates, to services made available via IP address, without the need for a domain name.
Validation for IP addresses will work much the same as validation for domain names, though validation will be restricted to the http-01 and tls-alpn-01 challenge types. The dns-01 challenge type will not be available because the DNS is not involved in validating IP addresses. Additionally, there is no mechanism to check CAA records for IP addresses.
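From the client’s perspective, the only change to the order itself is the identifier type: "ip" (standardized in RFC 8738) instead of "dns". A minimal sketch of building such a new-order payload body; the address is a documentation placeholder and the helper name is illustrative:

```python
import ipaddress
import json

def ip_order_payload(addr):
    """Build the ACME new-order payload body for an IP identifier."""
    ipaddress.ip_address(addr)  # reject anything that isn't a valid IP early
    return json.dumps({"identifiers": [{"type": "ip", "value": addr}]})

payload = ip_order_payload("203.0.113.10")
```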
Timeline
We expect to issue the first valid short-lived certificates to ourselves in February of this year. Around April we will enable short-lived certificates for a small set of early adopting subscribers. We hope to make short-lived certificates generally available by the end of 2025.
The earliest short-lived certificates we issue may not support IP addresses, but we intend to enable IP address support by the time short-lived certificates reach general availability.
How To Get Six-Day and IP Address Certificates
Once short-lived certificates are an option for you, you’ll need to use an ACME client that supports ACME certificate profiles and select the short-lived certificate profile (the name of which will be published at a later date).
Once IP address support is an option for you, requesting an IP address in a certificate will automatically select a short-lived certificate profile.
Looking Ahead
The best way to prepare to take advantage of short-lived certificates is to make sure your ACME client is reliably renewing certificates in an automated fashion. If that’s working well then there should be no costs to switching to short-lived certificates.
If you have questions or comments about our plans, feel free to let us know on our community forums.
We are excited to announce a new extension to Let’s Encrypt’s implementation of the ACME protocol that we are calling “profile selection.” This new feature will allow site operators and ACME clients to opt in to the next evolution of Let’s Encrypt.
As of today, the staging environment is advertising a new field in its directory resource:
GET /directory HTTP/1.1

HTTP/1.1 200 OK
Content-Type: application/json

{
  ...
  "meta": {
    "profiles": {
      "classic": "The same profile you're accustomed to",
      "tlsserver": "https://letsencrypt.org/2025/01/09/acme-profiles/"
    }
  }
}
Here, the keys are the names of new “profiles”, and the values are human-readable descriptions of those profiles. A profile describes a collection of attributes about the certificate that will be issued, such as what extensions it will contain, how long it will be valid for, and more.
For example, the “classic” profile is exactly what it sounds like: certificates issued under the classic profile will look exactly the same as those that we have always issued, valid for 90 days.
But certificates issued under the “tlsserver” profile will have a number of differences tailored specifically towards TLS server usage:
- No Common Name field (including a CN has been NOT RECOMMENDED by the Baseline Requirements for several years now)
- No Subject Key Identifier (including a SKID is NOT RECOMMENDED by the Baseline Requirements)
- No TLS Client Auth Extended Key Usage (root programs are moving towards requiring “single-purpose” issuance hierarchies, where every certificate has only a single EKU)
- No Key Encipherment Key Usage for certificates with RSA public keys (this KU was used by older RSA-based TLS cipher suites, but is fully unnecessary in TLS 1.3)
Additionally, in the near future we will offer a “shortlived” profile which will be identical to the “tlsserver” profile but with a validity period of only 6 days. This profile isn’t available in Staging just yet, so keep an eye out for further announcements regarding short-lived certificates and why we think they’re exciting.
An ACME client can supply a desired profile name in a new-order request:
POST /acme/new-order HTTP/1.1
Host: example.com
Content-Type: application/jose+json

{
  "protected": base64url(...),
  "payload": base64url({
    "profile": "tlsserver",
    "identifiers": [
      { "type": "dns", "value": "www.example.org" },
      { "type": "dns", "value": "example.org" }
    ]
  }),
  "signature": "H6ZXtGjTZyUnPeKn...wEA4TklBdh3e454g"
}
If the new-order request is accepted, then the selected profile name will be reflected in the Order object when it is returned, and the resulting certificate after finalization will be issued with the selected profile. If the new-order request does not specify a profile, then the server will select one for it.
Guidance for ACME clients and users
If you are an ACME client author, we encourage you to introduce support for this new field in your client. Start by taking a look at the draft specification in the IETF ACME Working Group. A simple implementation might allow the user to configure a static profile name and include that name in all new-order requests. For a better user experience, check the configured name against the list of profiles advertised in the directory, to ensure that changes to the available profiles don’t result in invalid new-order requests. For clients with a user interface, such as a control panel or interactive command line interface, an implementation could fetch the list of profiles and their descriptions to prompt the user to select one on first run. It could also use a notification mechanism to inform the user of changes to the list of available profiles. We’d also love to hear from you about your experience implementing and deploying this new extension.
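The directory check described above can be as simple as comparing the configured name against the advertised meta.profiles map before each new-order request. A sketch under those assumptions; the directory dict mirrors the staging example, and the function name is illustrative:

```python
def choose_profile(directory, configured):
    """Validate a configured profile name against the ACME directory.

    Returns the name to place in the new-order payload, or None to
    let the server pick a default.
    """
    profiles = directory.get("meta", {}).get("profiles", {})
    if configured is None:
        return None  # omit the field; the server selects a profile
    if configured not in profiles:
        raise ValueError(
            f"profile {configured!r} not offered; available: {sorted(profiles)}")
    return configured

# Directory contents as advertised by the staging environment:
directory = {"meta": {"profiles": {
    "classic": "The same profile you're accustomed to",
    "tlsserver": "https://letsencrypt.org/2025/01/09/acme-profiles/",
}}}
```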
If you are a site operator or ACME client user, we encourage you to keep an eye on your ACME client of choice to see when they adopt this new feature, and update your client when they do. We also encourage you to try out the modern “tlsserver” profile in Staging, and let us know what you think of the changes we’ve made to the certificates issued under that profile.
What’s next?
Obviously there is more work to be done here. The draft standard will go through multiple rounds of review and tweaks before becoming an IETF RFC, and our implementation will evolve along with it if necessary. Over the coming weeks and months we will also be providing more information about when we enable profile selection in our production environment, and what our production profile options will be.
Thank you for coming along with us on this journey into the future of the Web PKI. We look forward to your testing and feedback!

This letter was originally published in our 2024 Annual Report.
The past year at ISRG has been a great one and I couldn’t be more proud of our staff, community, funders, and other partners who made it happen. Let’s Encrypt continues to thrive, serving more websites around the world than ever before with excellent security and stability. Our understanding of what it will take to make privacy-preserving metrics more mainstream via our Divvi Up project is evolving in important ways.
Prossimo has made important investments in making software critical infrastructure safer, from TLS and DNS to the Linux kernel.
Next year is the 10th anniversary of the launch of Let’s Encrypt. Internally things have changed dramatically from what they looked like ten years ago, but outwardly our service hasn’t changed much since launch. That’s because the vision we had for how best to do our job remains as powerful today as it ever was: free 90-day TLS certificates via an automated API. Pretty much as many as you need. More than 500,000,000 websites benefit from this offering today, and the vast majority of the web is encrypted.
Our longstanding offering won’t fundamentally change next year, but we are going to introduce a new offering that’s a big shift from anything we’ve done before - short-lived certificates. Specifically, certificates with a lifetime of six days. This is a big upgrade for the security of the TLS ecosystem because it minimizes exposure time during a key compromise event.
Because we’ve done so much to encourage automation over the past decade, most of our subscribers aren’t going to have to do much in order to switch to shorter-lived certificates. We, on the other hand, are going to have to think about the possibility that we will need to issue 20x as many certificates as we do now. It’s not inconceivable that at some point in our next decade we may need to be prepared to issue 100,000,000 certificates per day.
That sounds sort of nuts to me today, but issuing 5,000,000 certificates per day would have sounded crazy to me ten years ago. Here’s the thing though, and this is what I love about the combination of our staff, partners, and funders - whatever it is we need to do to doggedly pursue our mission, we’re going to get it done. It was hard to build Let’s Encrypt. It was difficult to scale it to serve half a billion websites. Getting our Divvi Up service up and running from scratch in three months to service exposure notification applications was not easy. Our Prossimo project was a primary contributor to the creation of a TLS library that provides memory safety while outperforming its peers - a heavy lift.
Charitable contributions from people like you and organizations around the world make this stuff possible. Since 2015, tens of thousands of people have donated. They’ve made a case for corporate sponsorship, given through their DAFs, or set up recurring donations, sometimes to give $3 a month. That’s all added up to millions of dollars that we’ve used to change the Internet for nearly everyone using it. I hope you’ll join these people and help lay the foundation for another great decade.
Josh Aas
Executive Director
Earlier this year we announced our intent to provide certificate revocation information exclusively via Certificate Revocation Lists (CRLs), ending support for providing certificate revocation information via the Online Certificate Status Protocol (OCSP). Today we are providing a timeline for ending OCSP services:
- January 30, 2025
  - OCSP Must Staple requests will fail, unless the requesting account has previously issued a certificate containing the OCSP Must Staple extension
- May 7, 2025
  - Prior to this date we will have added CRL URLs to certificates
  - On this date we will drop OCSP URLs from certificates
  - On this date all requests including the OCSP Must Staple extension will fail
- August 6, 2025
  - On this date we will turn off our OCSP responders
Additionally, a very small percentage of our subscribers request certificates with the OCSP Must Staple Extension. If you have manually configured your ACME client to request that extension, action is required before May 7. See “Must Staple” below for details.
OCSP and CRLs are both mechanisms by which CAs can communicate certificate revocation information, but CRLs have significant advantages over OCSP. Let’s Encrypt has been providing an OCSP responder since our launch nearly ten years ago. We added support for CRLs in 2022.
Websites and people who visit them will not be affected by this change, but some non-browser software might be.
We plan to end support for OCSP primarily because it represents a considerable risk to privacy on the Internet. When someone visits a website using a browser or other software that checks for certificate revocation via OCSP, the Certificate Authority (CA) operating the OCSP responder immediately becomes aware of which website is being visited from that visitor’s particular IP address. Even when a CA intentionally does not retain this information, as is the case with Let’s Encrypt, CAs could be legally compelled to collect it. CRLs do not have this issue.
We are also taking this step because keeping our CA infrastructure as simple as possible is critical for the continuity of compliance, reliability, and efficiency at Let’s Encrypt. For every year that we have existed, operating OCSP services has taken up considerable resources that can soon be better spent on other aspects of our operations. Now that we support CRLs, our OCSP service has become unnecessary.
We recommend that anyone relying on OCSP services today start the process of ending that reliance as soon as possible. If you use Let’s Encrypt certificates to secure non-browser communications such as a VPN, you should ensure that your software operates correctly if certificates contain no OCSP URL.
Must Staple
Because of the privacy issues with OCSP, browsers and servers implement a feature called “OCSP Stapling”, where the web server sends a copy of the appropriate OCSP response during the TLS handshake, and the browser skips making a request to the CA, thus better preserving privacy.
In addition to OCSP Stapling (a TLS feature negotiated at handshake time), there’s an extension that can be added to certificates at issuance time, colloquially called “OCSP Must Staple.” This tells browsers that, if they see that extension in a certificate, they should never contact the CA about it and should instead expect to see a stapled copy in the handshake. Failing that, browsers should refuse to connect. This was designed to solve some security problems with revocation.
Let’s Encrypt has supported OCSP Must Staple for a long time, because of the potential to improve both privacy and security. However, Must Staple has failed to get wide browser support after many years. And popular web servers still implement OCSP Stapling in ways that create serious risks of downtime.
As part of removing OCSP, we’ll also be removing support for OCSP Must Staple. CRLs have wide browser support and can provide privacy benefits to all sites, without requiring special web server configuration. Thanks to all our subscribers who have helped with the OCSP Must Staple experiment.
If you are not certain whether you are using OCSP Must Staple, you can check this list of hostnames and certificate serials (11.1 MB, .zip).
As of January 30, 2025, issuance requests that include the OCSP Must Staple extension will fail, unless the requesting account has previously issued a certificate containing the OCSP Must Staple extension.
As of May 7, all issuance requests that include the OCSP Must Staple extension will fail, including renewals. Please change your ACME client configuration to not request the extension.
Today we are announcing our intent to end Online Certificate Status Protocol (OCSP) support in favor of Certificate Revocation Lists (CRLs) as soon as possible. OCSP and CRLs are both mechanisms by which CAs can communicate certificate revocation information, but CRLs have significant advantages over OCSP. Let’s Encrypt has been providing an OCSP responder since our launch nearly ten years ago. We added support for CRLs in 2022.
Websites and people who visit them will not be affected by this change, but some non-browser software might be.
We plan to end support for OCSP primarily because it represents a considerable risk to privacy on the Internet. When someone visits a website using a browser or other software that checks for certificate revocation via OCSP, the Certificate Authority (CA) operating the OCSP responder immediately becomes aware of which website is being visited from that visitor’s particular IP address. Even when a CA intentionally does not retain this information, as is the case with Let’s Encrypt, CAs could be legally compelled to collect it. CRLs do not have this issue.
We are also taking this step because keeping our CA infrastructure as simple as possible is critical for the continuity of compliance, reliability, and efficiency at Let’s Encrypt. For every year that we have existed, operating OCSP services has taken up considerable resources that can soon be better spent on other aspects of our operations. Now that we support CRLs, our OCSP service has become unnecessary.
In August of 2023 the CA/Browser Forum passed a ballot to make providing OCSP services optional for publicly trusted CAs like Let’s Encrypt. With one exception, Microsoft, the root programs themselves no longer require OCSP. As soon as the Microsoft Root Program also makes OCSP optional, which we are optimistic will happen within the next six to twelve months, Let’s Encrypt intends to announce a specific and rapid timeline for shutting down our OCSP services. We hope to serve our last OCSP response between three and six months after that announcement. The best way to stay apprised of updates on these plans is to subscribe to our API Announcements category on Discourse.
We recommend that anyone relying on OCSP services today start the process of ending that reliance as soon as possible. If you use Let’s Encrypt certificates to secure non-browser communications such as a VPN, you should ensure that your software operates correctly if certificates contain no OCSP URL. Fortunately, most OCSP implementations “fail open” which means that an inability to fetch an OCSP response will not break the system.
Internet Security Research Group (ISRG) is the parent organization of Let’s Encrypt, Prossimo, and Divvi Up. ISRG is a 501(c)(3) nonprofit. If you’d like to support our work, please consider getting involved, donating, or encouraging your company to become a sponsor.
Tue, 23 Jul 2024 00:00:00 +0000
More Memory Safety for Let’s Encrypt: Deploying ntpd-rs
Earlier this year we announced our intention to introduce short-lived certificates with lifetimes of six days as an option for our subscribers. Yesterday we issued our first short-lived certificate. You can see the certificate at the bottom of our post, or here thanks to Certificate Transparency logs. We issued it to ourselves and then immediately revoked it so we can observe the certificate’s whole lifecycle. This is the first step towards making short-lived certificates available to all subscribers.
The next step is for us to make short-lived certificates available to a small set of our subscribers so we can make sure our systems scale as expected prior to general availability. We expect this next phase to begin during Q2 of this year.
We expect short-lived certificates to be generally available by the end of this year.
How To Get Six-Day Certificates
Once short-lived certificates are an option for you, you’ll need to use an ACME client that supports ACME certificate profiles and select the short-lived certificate profile (“shortlived”). The lego client recently added this functionality.
In the meantime, the best way to prepare to take advantage of short-lived certificates is to make sure your ACME client is reliably renewing certificates in an automated fashion. If that’s working well then there should be no costs to switching to short-lived certificates.
You’ll also want to be sure your ACME client is running frequently - both for the sake of renewing short-lived certificates and so as to take advantage of ACME Renewal Information (ARI). ARI allows Let’s Encrypt to notify your client if it should renew early for some reason. ARI checks should happen at least once per day, and short-lived certificates should be renewed every two to three days, so we recommend having your client run at least once per day.
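One way to schedule this is sketched below, under the assumption (ours, not official guidance) that renewing once half the lifetime has elapsed satisfies the two-to-three-day cadence for six-day certificates, with an ARI-suggested time allowed to pull the renewal earlier:

```python
from datetime import datetime, timedelta

def renewal_time(not_before, not_after, ari_suggested=None):
    """When to renew: halfway through the lifetime (an assumed
    heuristic), or earlier if ARI suggests an earlier time."""
    lifetime = not_after - not_before
    default = not_before + lifetime / 2
    if ari_suggested is not None and ari_suggested < default:
        return ari_suggested
    return default

# A six-day certificate would be renewed after roughly three days:
nb = datetime(2025, 2, 19, 17, 30)
na = nb + timedelta(days=6)
```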
Shorter Certificate Lifetimes Are Good for Security
When the private key associated with a certificate is compromised, the recommendation has always been to have the certificate revoked so that people will know not to use it. Unfortunately, certificate revocation doesn’t work very well. This means that certificates with compromised keys (or other issues) may continue to be used until they expire. The longer the lifetime of the certificate, the longer the potential for use of a problematic certificate.
The primary advantage of short-lived certificates is that they greatly reduce the potential compromise window because they expire relatively quickly. This reduces the need for certificate revocation, which has historically been unreliable. Our six-day certificates will not include OCSP or CRL URLs. Additionally, short-lived certificates practically require automation, and we believe that automating certificate issuance is important for security.
Questions
If you have questions or comments about our plans, feel free to let us know on our community forums.
We’d like to thank Open Technology Fund for supporting this work.
Our First 6-Day Certificate
PEM format:
-----BEGIN CERTIFICATE-----
MIIDSzCCAtGgAwIBAgISA7CwFcGk4mQWEXMacRtxHeDvMAoGCCqGSM49BAMDMDIx
CzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MQswCQYDVQQDEwJF
NjAeFw0yNTAyMTkxNzMwMDFaFw0yNTAyMjYwOTMwMDBaMAAwWTATBgcqhkjOPQIB
BggqhkjOPQMBBwNCAAQoSItt2V1aocI5dxrKR8iLfmm0KiVvOhiwKByzu2kLeC7C
0BdfAgtwdICdkuEhAXokhXLq6DNZZgmh5T4flVwZo4IB9zCCAfMwDgYDVR0PAQH/
BAQDAgeAMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwHwYDVR0j
BBgwFoAUkydGmAOpUWiOmNbEQkjbI79YlNIwVQYIKwYBBQUHAQEESTBHMCEGCCsG
AQUFBzABhhVodHRwOi8vZTYuby5sZW5jci5vcmcwIgYIKwYBBQUHMAKGFmh0dHA6
Ly9lNi5pLmxlbmNyLm9yZy8wKAYDVR0RAQH/BB4wHIIaaGVsbG93b3JsZC5sZXRz
ZW5jcnlwdC5vcmcwEwYDVR0gBAwwCjAIBgZngQwBAgEwggEFBgorBgEEAdZ5AgQC
BIH2BIHzAPEAdgDM+w9qhXEJZf6Vm1PO6bJ8IumFXA2XjbapflTA/kwNsAAAAZUf
d/zOAAAEAwBHMEUCIFNd51TfSNiJrO+294t49C5ANc4oC7gTUzf7xnlNlhKsAiEA
wi5hfiC9SsKLxlTQ0sctUxhLmdYh40r6ECWQS/yWw2AAdwDgkrP8DB3I52g2H95h
uZZNClJ4GYpy1nLEsE2lbW9UBAAAAZUfd/0TAAAEAwBIMEYCIQCs2NuZIUIloOaH
1t9eXDKb8bjoWESBPsK4i2BxMvEIswIhAOMNaQNyr1YkzrcNUz15qGV0oVLg5BJN
+ikWxXOdcRHFMAoGCCqGSM49BAMDA2gAMGUCMDANqy7G09AIwzXcd7SNl7uFwhC+
xlfduvp1PeEDHc/FA9K3mRYkGXuKtzNdOh7wcAIxALjEMDmBQiwXbB447oGkaZAe
0rqxA3EtNV5wj0obeObluj/NgUsVEG9OqiBIoggFRw==
-----END CERTIFICATE-----
openssl x509 -text output:
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            03:b0:b0:15:c1:a4:e2:64:16:11:73:1a:71:1b:71:1d:e0:ef
        Signature Algorithm: ecdsa-with-SHA384
        Issuer: C=US, O=Let's Encrypt, CN=E6
        Validity
            Not Before: Feb 19 17:30:01 2025 GMT
            Not After : Feb 26 09:30:00 2025 GMT
        Subject:
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (256 bit)
                pub:
                    04:28:48:8b:6d:d9:5d:5a:a1:c2:39:77:1a:ca:47:
                    c8:8b:7e:69:b4:2a:25:6f:3a:18:b0:28:1c:b3:bb:
                    69:0b:78:2e:c2:d0:17:5f:02:0b:70:74:80:9d:92:
                    e1:21:01:7a:24:85:72:ea:e8:33:59:66:09:a1:e5:
                    3e:1f:95:5c:19
                ASN1 OID: prime256v1
                NIST CURVE: P-256
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Authority Key Identifier:
                93:27:46:98:03:A9:51:68:8E:98:D6:C4:42:48:DB:23:BF:58:94:D2
            Authority Information Access:
                OCSP - URI:http://e6.o.lencr.org
                CA Issuers - URI:http://e6.i.lencr.org/
            X509v3 Subject Alternative Name: critical
                DNS:helloworld.letsencrypt.org
            X509v3 Certificate Policies:
                Policy: 2.23.140.1.2.1
            CT Precertificate SCTs:
                Signed Certificate Timestamp:
                    Version   : v1 (0x0)
                    Log ID    : CC:FB:0F:6A:85:71:09:65:FE:95:9B:53:CE:E9:B2:7C:
                                22:E9:85:5C:0D:97:8D:B6:A9:7E:54:C0:FE:4C:0D:B0
                    Timestamp : Feb 19 18:28:32.078 2025 GMT
                    Extensions: none
                    Signature : ecdsa-with-SHA256
                                30:45:02:20:53:5D:E7:54:DF:48:D8:89:AC:EF:B6:F7:
                                8B:78:F4:2E:40:35:CE:28:0B:B8:13:53:37:FB:C6:79:
                                4D:96:12:AC:02:21:00:C2:2E:61:7E:20:BD:4A:C2:8B:
                                C6:54:D0:D2:C7:2D:53:18:4B:99:D6:21:E3:4A:FA:10:
                                25:90:4B:FC:96:C3:60
                Signed Certificate Timestamp:
                    Version   : v1 (0x0)
                    Log ID    : E0:92:B3:FC:0C:1D:C8:E7:68:36:1F:DE:61:B9:96:4D:
                                0A:52:78:19:8A:72:D6:72:C4:B0:4D:A5:6D:6F:54:04
                    Timestamp : Feb 19 18:28:32.147 2025 GMT
                    Extensions: none
                    Signature : ecdsa-with-SHA256
                                30:46:02:21:00:AC:D8:DB:99:21:42:25:A0:E6:87:D6:
                                DF:5E:5C:32:9B:F1:B8:E8:58:44:81:3E:C2:B8:8B:60:
                                71:32:F1:08:B3:02:21:00:E3:0D:69:03:72:AF:56:24:
                                CE:B7:0D:53:3D:79:A8:65:74:A1:52:E0:E4:12:4D:FA:
                                29:16:C5:73:9D:71:11:C5
    Signature Algorithm: ecdsa-with-SHA384
    Signature Value:
        30:65:02:30:30:0d:ab:2e:c6:d3:d0:08:c3:35:dc:77:b4:8d:
        97:bb:85:c2:10:be:c6:57:dd:ba:fa:75:3d:e1:03:1d:cf:c5:
        03:d2:b7:99:16:24:19:7b:8a:b7:33:5d:3a:1e:f0:70:02:31:
        00:b8:c4:30:39:81:42:2c:17:6c:1e:38:ee:81:a4:69:90:1e:
        d2:ba:b1:03:71:2d:35:5e:70:8f:4a:1b:78:e6:e5:ba:3f:cd:
        81:4b:15:10:6f:4e:aa:20:48:a2:08:05:47

2025 marks ten years of Let’s Encrypt. Already this year we’ve taken steps to continue to deliver on our values of user privacy, efficiency, and innovation, all with the intent of continuing to deliver free TLS certificates to as many people as possible; to deliver encryption for everybody.
And while we’re excited about the technical progress we’ll make this year, we’re also going to celebrate this tenth anniversary by highlighting the people around the world who make our impact possible. It’s no small village.
From a community forum that has provided free technical support, to our roster of sponsors who provide vital funding, to the thousands of individual supporters who contribute financially to Let’s Encrypt each year, free TLS at Internet scale works because people have supported it year in, year out, for ten years.
Each month we’ll highlight a different set of people behind our “everybody.” Who do you want to see us highlight? What use cases of Let’s Encrypt have you seen that amazed you? What about our work do you hope we’ll continue or improve as we go forward? Let us know on LinkedIn, or drop a note to outreach@letsencrypt.org.
Encryption for Everybody is our unofficial tagline for this tenth anniversary year. What we love about it is that, yes, it captures our commitment to ensuring anyone around the world can easily get a cert for free. But more importantly, it captures the reality that technical innovation won’t work without people believing in it and supporting it. We’re grateful that, for ten years (and counting!), our community of supporters has made an impact on the lives of billions of Internet users—an impact that’s made the Web more secure and privacy-respecting for everybody, everywhere.
Let’s Encrypt protects a vast portion of the Web by providing TLS certificates to over 550 million websites—a figure that has grown by 42% in the last year alone. We currently issue over 340,000 certificates per hour. To manage this immense traffic and maintain responsiveness under high demand, our infrastructure relies on rate limiting. In 2015, we introduced our first rate limiting system, built on MariaDB. It evolved alongside our rapidly growing service but eventually revealed its limits: straining database servers, forcing long reset times on subscribers, and slowing down every request.
We needed a solution built for the future—one that could scale with demand, reduce the load on MariaDB, and adapt to real-world subscriber request patterns. The result was a new rate limiting system powered by Redis and a proven virtual scheduling algorithm from the mid-90s: efficient, scalable, and capable of handling over a billion active certificates.
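The virtual scheduling algorithm referenced here is the Generic Cell Rate Algorithm (GCRA), named later in the post. Instead of counting events, it stores a single timestamp per limited key, the Theoretical Arrival Time (TAT). Below is a minimal single-process sketch; in production the TAT would live in Redis, and the rate and burst values are illustrative:

```python
class GCRA:
    """Generic Cell Rate Algorithm, tracked as one Theoretical
    Arrival Time (TAT) per rate-limited key."""

    def __init__(self, rate_per_sec, burst):
        self.emission_interval = 1.0 / rate_per_sec   # seconds per request
        self.tolerance = burst * self.emission_interval
        self.tat = 0.0  # theoretical arrival time of the next request

    def allow(self, now):
        """Return (allowed, retry_after_seconds)."""
        tat = max(self.tat, now)
        if tat - now > self.tolerance - self.emission_interval:
            # Too far ahead of schedule: deny, and report when to retry.
            retry_after = tat - (self.tolerance - self.emission_interval) - now
            return False, retry_after
        self.tat = tat + self.emission_interval
        return True, 0.0

# One request per second on average, with a burst of two:
limiter = GCRA(rate_per_sec=1, burst=2)
```

Note that a denied request never advances the TAT, so unlike a naive sliding window, retries cannot push the reset time further out.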
Rate Limiting a Free Service is Hard
In 2015, Let’s Encrypt was in early preview, and we faced a unique challenge. We were poised to become incredibly popular, offering certificates freely and without requiring contact information or email verification. Ensuring fair usage and preventing abuse without traditional safeguards demanded an atypical approach to rate limiting.
We decided to limit the number of certificates issued—per week—for each registered domain. Registered domains are a limited resource with real costs, making them a natural and effective basis for rate limiting—one that mirrors the structure of the Web itself. Specifically, this approach targets the effective Top-Level Domain (eTLD), as defined by the Public Suffix List (PSL), plus one additional label to the left. For example, in new.blog.example.co.uk, the eTLD is .co.uk, making example.co.uk the eTLD+1.
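That eTLD+1 computation can be sketched as follows, using a tiny hardcoded stand-in for the real Public Suffix List (PSL matching takes the longest matching suffix):

```python
# Hypothetical stand-in for the real Public Suffix List, which
# contains thousands of entries.
PUBLIC_SUFFIXES = {"com", "org", "uk", "co.uk"}

def etld_plus_one(domain):
    """Return the registered domain (eTLD+1) for a hostname, or
    None if the name is itself a public suffix or has no match."""
    labels = domain.lower().split(".")
    # Scanning from the left finds the longest matching suffix first.
    for i in range(len(labels)):
        suffix = ".".join(labels[i:])
        if suffix in PUBLIC_SUFFIXES:
            if i == 0:
                return None  # the domain itself is a public suffix
            return ".".join(labels[i - 1:])
    return None
```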
Counting Events Was Easy
For each successfully issued certificate, we logged an entry in a table that recorded the registered domain, the issuance date, and other relevant details. To enforce rate limits, the system scanned this table, counted the rows matching a given registered domain within a specific time window, and compared the total to a configured threshold. This simple design formed the basis for all future rate limits.
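That check amounts to a windowed COUNT query compared against a threshold. A sketch of the idea, with SQLite standing in for MariaDB and illustrative table, column, and threshold values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE certificates (registered_domain TEXT, issued_at INTEGER)")

def certs_issued_in_window(conn, domain, now, window_secs):
    """Count issuances for a registered domain inside the window."""
    (count,) = conn.execute(
        "SELECT COUNT(*) FROM certificates"
        " WHERE registered_domain = ? AND issued_at > ?",
        (domain, now - window_secs)).fetchone()
    return count

WEEK = 7 * 24 * 3600
now = 1_000_000_000
# Four certificates issued for the same registered domain an hour ago:
conn.executemany("INSERT INTO certificates VALUES (?, ?)",
                 [("example.co.uk", now - 3600)] * 4)
over_limit = certs_issued_in_window(conn, "example.co.uk", now, WEEK) >= 50
```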
Counting a Lot of Events Got Expensive
By 2019, we had added six new rate limits to protect our infrastructure as demand for certificates surged. Enforcing these limits required frequent scans of database tables to count recent matching events. These operations, especially on our heavily-used authorizations table, caused significant overhead, with reads outpacing all other tables—often by an order of magnitude.
Rate limit calculations were performed early in request processing and often. Counting rows in MariaDB, particularly for accounts with rate limit overrides, was inherently expensive and quickly became a scaling bottleneck.
Adding new limits required careful trade-offs. Decisions about whether to reuse existing schema, optimize indexes, or design purpose-built tables helped balance performance, complexity, and long-term maintainability.
Buying Runway — Offloading Reads
In late 2021, we updated our control plane and Boulder—our in-house CA software—to route most API reads, including rate limit checks, to database replicas. This reduced the load on the primary database and improved its overall health. At the same time, however, latency of rate limit checks during peak hours continued to rise, highlighting the limitations of scaling reads alone.
Sliding Windows Got Frustrating
Subscribers were frequently hitting rate limits unexpectedly, leaving them unable to request certificates for days. This issue stemmed from our use of relatively large rate limiting windows—most spanning a week. Subscribers could deplete their entire limit in just a few moments by repeating the same request, and find themselves locked out for the remainder of the week. This approach was inflexible and disruptive, causing unnecessary frustration and delays.
In early 2022, we patched the Duplicate Certificate limit to address this rigidity. Using a naive token-bucket approach, we allowed users to “earn back” requests incrementally, cutting the wait time—once rate limited—to about 1.4 days. The patch worked by fetching recent issuance timestamps and calculating the time between them to grant requests based on the time waited. This change also allowed us to include a Retry-After timestamp in rate limited responses. While this improved the user experience for this one limit, we understood it to be a temporary fix for a system in need of a larger overhaul.
When a Problem Grows Large Enough, It Finds the Time for You
Setting aside time for a complete overhaul of our rate-limiting system wasn’t easy. Our development team, composed of just three permanent engineers, typically juggles several competing priorities. Yet by 2023, our flagging rate limits code had begun to endanger the reliability of our MariaDB databases.
Our authorizations table was now regularly read an order of magnitude more than any other. Individually identifying and deleting unnecessary rows—or specific values—had proved unworkable due to poor MariaDB delete performance. Storage engines like InnoDB must maintain indexes, foreign key constraints, and transaction logs for every deletion, which significantly increases overhead for concurrent transactions and leads to gruelingly slow deletes.
Our SRE team automated the cleanup of old rows for many tables using the PARTITION command, which worked well for bookkeeping and compliance data. Unfortunately, we couldn’t apply it to most of our purpose-built rate limit tables. These tables depend on ON DUPLICATE KEY UPDATE, a mechanism that requires the targeted column to be a unique index or primary key, while partitioning demands that the primary key be included in the partitioning key.
Indexes on these tables—such as those tracking requested hostnames—often grew larger than the tables themselves and, in some cases, exceeded the memory of our smaller staging environment databases, eventually forcing us to periodically wipe them entirely.
By late 2023, this cascading confluence of complexities required a reckoning. We set out to design a rate limiting system built for the future.
The Solution: Redis + GCRA
We designed a system from the ground up that combines Redis for storage and the Generic Cell Rate Algorithm (GCRA) for managing request flow.
Why Redis?
Our engineers were already familiar with Redis, having recently deployed it to cache and serve OCSP responses. Its high throughput and low latency made it a candidate for tracking rate limit state as well.
By moving this data from MariaDB to Redis, we could eliminate the need for ever-expanding, purpose-built tables and indexes, significantly reducing read and write pressure. Redis’s feature set made it a perfect fit for the task. Most rate limit data is ephemeral—after a few days, or sometimes just minutes, it becomes irrelevant unless the subscriber calls us again. Redis’s per-key Time-To-Live would allow us to expire this data the moment it was no longer needed.
Redis also supports atomic integer operations, enabling fast, reliable counter updates, even when increments occur concurrently. Its “set if not exist” functionality ensures efficient initialization of keys, while pipeline support allows us to get and set multiple keys in bulk. This combination of familiarity, speed, simplicity, and flexibility made Redis the natural choice.
Why GCRA?
The Generic Cell Rate Algorithm (GCRA) is a virtual scheduling algorithm originally designed for telecommunication networks to regulate traffic and prevent congestion. Unlike traditional sliding window approaches that work in fixed time blocks, GCRA enforces rate limits continuously, making it well-suited to our goals.
A rate limit in GCRA is defined by two parameters: the emission interval and the burst tolerance. The emission interval specifies the minimum time that must pass between consecutive requests to maintain a steady rate. For example, an emission interval of one second allows one request per second on average. The burst tolerance determines how much unused capacity can be drawn on to allow short bursts of requests beyond the steady rate.
When a request is received, GCRA compares the current time to the Theoretical Arrival Time (TAT), which indicates when the next request is allowed under the steady rate. If the current time is greater than or equal to the TAT, the request is permitted, and the TAT is updated by adding the emission interval. If the current time plus the burst tolerance is greater than or equal to the TAT, the request is also permitted. In this case, the TAT is updated by adding the emission interval, reducing the remaining burst capacity.
However, if the current time plus the burst tolerance is less than the TAT, the request exceeds the rate limit and is denied. Conveniently, the difference between the TAT and the current time can then be returned to the subscriber in a Retry-After header, informing their client exactly how long to wait before trying again.
To illustrate, consider a rate limit of one request per second (emission interval = 1s) with a burst tolerance of three requests. Up to three requests can arrive back-to-back, but subsequent requests will be delayed until “now” catches up to the TAT, ensuring that the average rate over time remains one request per second.
What sets GCRA apart is its ability to automatically refill capacity gradually and continuously. Unlike sliding windows, where users must wait for an entire time block to reset, GCRA allows users to retry as soon as enough time has passed to maintain the steady rate. This dynamic pacing reduces frustration and provides a smoother, more predictable experience for subscribers.
GCRA is also storage and computationally efficient. It requires tracking only the TAT—stored as a single Unix timestamp—and performing simple arithmetic to enforce limits. This lightweight design allows it to scale to handle billions of requests, with minimal computational and memory overhead.
The Results: Faster, Smoother, and More Scalable
The transition to Redis and GCRA brought immediate, measurable improvements. We cut database load, improved response times, and delivered consistent performance even during periods of peak traffic. Subscribers now experience smoother, more predictable behavior, while the system’s increased permissiveness allows for certificates that the previous approach would have delayed—all achieved without sacrificing scalability or fairness.
Rate Limit Check Latency
Check latency is the extra time added to each request while verifying rate limit compliance. Under the old MariaDB-based system, these checks slowed noticeably during peak traffic, when database contention caused significant delays. Our new Redis-based system dramatically reduced this overhead. The high-traffic “new-order” endpoint saw the greatest improvement, while the “new-account” endpoint—though considerably lighter in traffic—also benefited, especially callers with IPv6 addresses. These results show that our subscribers now experience consistent response times, even under peak load.
Database Health
Our once strained database servers are now operating with ample headroom. In total, MariaDB operations have dropped by 80%, improving responsiveness, reducing contention, and freeing up resources for mission-critical issuance workflows.
Buffer pool requests have decreased by more than 50%, improving caching efficiency and reducing overall memory pressure.
Reads of the authorizations table—a notorious bottleneck—have dropped by over 99%. Previously, this table outpaced all others by more than two orders of magnitude; now it ranks second (the green line below), just narrowly surpassing our third most-read table.
Tracking Zombie Clients
In late 2024, we turned our new rate limiting system toward a longstanding challenge: “zombie clients.” These requesters repeatedly attempt to issue certificates but fail, often because of expired domains or misconfigured DNS records. Together, they generate nearly half of all order attempts yet almost never succeed. We were able to build on this new infrastructure to record consecutive ACME challenge failures by account/domain pair and automatically “pause” this problematic issuance. The result has been a considerable reduction in resource consumption, freeing database and network capacity without disrupting legitimate traffic.
Scalability on Redis
Before deploying the limits to track zombie clients, we maintained just over 12.6 million unique TATs across several Redis databases. Within 24 hours, that number more than doubled to 26 million, and by the end of the week, it peaked at over 30 million. Yet, even with this sharp increase, there was no noticeable impact on rate limit responsiveness. That’s all we’ll share for now about zombie clients—there’s plenty more to unpack, but we’ll save those insights and figures for a future blog post.
What’s Next?
Scaling our rate limits to keep pace with the growth of the Web is a huge achievement, but there’s still more to do. In the near term, many of our other ACME endpoints rely on load balancers to enforce per-IP limits, which works but gives us little control over the feedback provided to subscribers. We’re looking to deploy this new infrastructure across those endpoints as well. Looking further ahead, we’re exploring how we might redefine our rate limits now that we’re no longer constrained by a system that simply counts events between two points in time.
By adopting Redis and GCRA, we’ve built a flexible, efficient rate limit system that promotes fair usage and enables our infrastructure to handle ever-growing demand. We’ll keep adapting to the ever-evolving Web while honoring our primary goal: giving people the certificates they need, for free, in the most user-friendly way we can.
Since its inception, Let’s Encrypt has been sending expiration notification emails to subscribers that have provided an email address to us. We will be ending this service on June 4, 2025. The decision to end this service is the result of the following factors:
- Over the past 10 years more and more of our subscribers have been able to put reliable automation into place for certificate renewal.
- Providing expiration notification emails means that we have to retain millions of email addresses connected to issuance records. As an organization that values privacy, removing this requirement is important to us.
- Providing expiration notifications costs Let’s Encrypt tens of thousands of dollars per year, money that we believe can be better spent on other aspects of our infrastructure.
- Providing expiration notifications adds complexity to our infrastructure, which takes time and attention to manage and increases the likelihood of mistakes being made. Over the long term, particularly as we add support for new service components, we need to manage overall complexity by phasing out system components that can no longer be justified.
For those who would like to continue receiving expiration notifications, we recommend using a third-party service such as Red Sift Certificates Lite (formerly Hardenize). Red Sift’s monitoring service provides expiration emails free of charge for up to 250 certificates. More monitoring options can be found here.
While we will be minimizing the number of email addresses we retain in connection with issuance data, you can opt in to receive other emails. We’ll keep you informed about technical updates and other news about Let’s Encrypt and our parent nonprofit, ISRG, based on the preferences you choose. You can sign up for our email lists below:
This year we will continue to pursue our commitment to improving the security of the Web PKI by introducing the option to get certificates with six-day lifetimes (“short-lived certificates”). We will also add support for IP addresses in addition to domain names. Our longer-lived certificates, which currently have a lifetime of 90 days, will continue to be available alongside our six-day offering. Subscribers will be able to opt in to short-lived certificates via a certificate profile mechanism being added to our ACME API.
Shorter Certificate Lifetimes Are Good for Security
When the private key associated with a certificate is compromised, the recommendation has always been to have the certificate revoked so that people will know not to use it. Unfortunately, certificate revocation doesn’t work very well. This means that certificates with compromised keys (or other issues) may continue to be used until they expire. The longer the lifetime of the certificate, the longer the potential for use of a problematic certificate.
The primary advantage of short-lived certificates is that they greatly reduce the potential compromise window because they expire relatively quickly. This reduces the need for certificate revocation, which has historically been unreliable. Our six-day certificates will not include OCSP or CRL URLs. Additionally, short-lived certificates practically require automation, and we believe that automating certificate issuance is important for security.
IP Address Support For Securing Additional Use Cases
We will support including IP addresses as Subject Alternative Names in our six-day certificates. This will enable secure TLS connections, with publicly trusted certificates, to services made available via IP address, without the need for a domain name.
Validation for IP addresses will work much the same as validation for domain names, though validation will be restricted to the http-01 and tls-alpn-01 challenge types. The dns-01 challenge type will not be available because the DNS is not involved in validating IP addresses. Additionally, there is no mechanism to check CAA records for IP addresses.
Timeline
We expect to issue the first valid short-lived certificates to ourselves in February of this year. Around April we will enable short-lived certificates for a small set of early adopting subscribers. We hope to make short-lived certificates generally available by the end of 2025.
The earliest short-lived certificates we issue may not support IP addresses, but we intend to enable IP address support by the time short-lived certificates reach general availability.
How To Get Six-Day and IP Address Certificates
Once short-lived certificates are an option for you, you’ll need to use an ACME client that supports ACME certificate profiles and select the short-lived certificate profile (the name of which will be published at a later date).
Once IP address support is an option for you, requesting an IP address in a certificate will automatically select a short-lived certificate profile.
Looking Ahead
The best way to prepare to take advantage of short-lived certificates is to make sure your ACME client is reliably renewing certificates in an automated fashion. If that’s working well, there should be no cost to switching to short-lived certificates.
If you have questions or comments about our plans, feel free to let us know on our community forums.
We are excited to announce a new extension to Let’s Encrypt’s implementation of the ACME protocol that we are calling “profile selection.” This new feature will allow site operators and ACME clients to opt in to the next evolution of Let’s Encrypt.
As of today, the staging environment is advertising a new field in its directory resource:
GET /directory HTTP/1.1

HTTP/1.1 200 OK
Content-Type: application/json

{
  ...
  "meta": {
    "profiles": {
      "classic": "The same profile you're accustomed to",
      "tlsserver": "https://letsencrypt.org/2025/01/09/acme-profiles/"
    }
  }
}
Here, the keys are the names of new “profiles”, and the values are human-readable descriptions of those profiles. A profile describes a collection of attributes about the certificate that will be issued, such as what extensions it will contain, how long it will be valid for, and more.
For example, the “classic” profile is exactly what it sounds like: certificates issued under the classic profile will look exactly the same as those that we have always issued, valid for 90 days.
But certificates issued under the “tlsserver” profile will have a number of differences tailored specifically towards TLS server usage:
- No Common Name field (including a CN has been NOT RECOMMENDED by the Baseline Requirements for several years now)
- No Subject Key Identifier (including a SKID is NOT RECOMMENDED by the Baseline Requirements)
- No TLS Client Auth Extended Key Usage (root programs are moving towards requiring “single-purpose” issuance hierarchies, where every certificate has only a single EKU)
- No Key Encipherment Key Usage for certificates with RSA public keys (this KU was used by older RSA-based TLS cipher suites, but is fully unnecessary in TLS 1.3)
Additionally, in the near future we will offer a “shortlived” profile which will be identical to the “tlsserver” profile but with a validity period of only 6 days. This profile isn’t available in Staging just yet, so keep an eye out for further announcements regarding short-lived certificates and why we think they’re exciting.
An ACME client can supply a desired profile name in a new-order request:
POST /acme/new-order HTTP/1.1
Host: example.com
Content-Type: application/jose+json

{
  "protected": base64url(...),
  "payload": base64url({
    "profile": "tlsserver",
    "identifiers": [
      { "type": "dns", "value": "www.example.org" },
      { "type": "dns", "value": "example.org" }
    ]
  }),
  "signature": "H6ZXtGjTZyUnPeKn...wEA4TklBdh3e454g"
}
If the new-order request is accepted, then the selected profile name will be reflected in the Order object when it is returned, and the resulting certificate after finalization will be issued with the selected profile. If the new-order request does not specify a profile, then the server will select one for it.
Guidance for ACME clients and users
If you are an ACME client author, we encourage you to introduce support for this new field in your client. Start by taking a look at the draft specification in the IETF ACME Working Group. A simple implementation might allow the user to configure a static profile name and include that name in all new-order requests. For a better user experience, check the configured name against the list of profiles advertised in the directory, to ensure that changes to the available profiles don’t result in invalid new-order requests. For clients with a user interface, such as a control panel or interactive command line interface, an implementation could fetch the list of profiles and their descriptions to prompt the user to select one on first run. It could also use a notification mechanism to inform the user of changes to the list of available profiles. We’d also love to hear from you about your experience implementing and deploying this new extension.
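The "check the configured name against the directory" step suggested above can be sketched briefly. This is an assumption-laden example, not code from any real ACME client: `validateProfile` and the struct names are invented, and only the `meta.profiles` field from the directory response is modeled.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// directory models just the piece of the ACME directory this post
// introduces: the map of profile names to human-readable descriptions.
type directory struct {
	Meta struct {
		Profiles map[string]string `json:"profiles"`
	} `json:"meta"`
}

// validateProfile checks a client's configured profile name against the
// profiles the server advertises, so that changes to the available
// profiles don't result in invalid new-order requests.
func validateProfile(directoryJSON []byte, configured string) error {
	var d directory
	if err := json.Unmarshal(directoryJSON, &d); err != nil {
		return err
	}
	if _, ok := d.Meta.Profiles[configured]; !ok {
		return fmt.Errorf("profile %q not advertised by server", configured)
	}
	return nil
}

func main() {
	// A trimmed-down copy of the staging directory response shown earlier.
	body := []byte(`{"meta":{"profiles":{
		"classic":"The same profile you're accustomed to",
		"tlsserver":"https://letsencrypt.org/2025/01/09/acme-profiles/"}}}`)
	fmt.Println(validateProfile(body, "tlsserver")) // <nil>
	fmt.Println(validateProfile(body, "shortlived"))
}
```

A client with a user interface could iterate over the same map to present each profile's description when prompting the user to select one.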
If you are a site operator or ACME client user, we encourage you to keep an eye on your ACME client of choice to see when they adopt this new feature, and update your client when they do. We also encourage you to try out the modern “tlsserver” profile in Staging, and let us know what you think of the changes we’ve made to the certificates issued under that profile.
What’s next?
Obviously there is more work to be done here. The draft standard will go through multiple rounds of review and tweaks before becoming an IETF RFC, and our implementation will evolve along with it if necessary. Over the coming weeks and months we will also be providing more information about when we enable profile selection in our production environment, and what our production profile options will be.
Thank you for coming along with us on this journey into the future of the Web PKI. We look forward to your testing and feedback!

This letter was originally published in our 2024 Annual Report.
The past year at ISRG has been a great one and I couldn’t be more proud of our staff, community, funders, and other partners that made it happen. Let’s Encrypt continues to thrive, serving more websites around the world than ever before with excellent security and stability. Our understanding of what it will take to make privacy-preserving metrics more mainstream via our Divvi Up project is evolving in important ways.
Prossimo has made important investments in making software critical infrastructure safer, from TLS and DNS to the Linux kernel.
Next year is the 10th anniversary of the launch of Let’s Encrypt. Internally things have changed dramatically from what they looked like ten years ago, but outwardly our service hasn’t changed much since launch. That’s because the vision we had for how best to do our job remains as powerful today as it ever was: free 90-day TLS certificates via an automated API. Pretty much as many as you need. More than 500,000,000 websites benefit from this offering today, and the vast majority of the web is encrypted.
Our longstanding offering won’t fundamentally change next year, but we are going to introduce a new offering that’s a big shift from anything we’ve done before - short-lived certificates. Specifically, certificates with a lifetime of six days. This is a big upgrade for the security of the TLS ecosystem because it minimizes exposure time during a key compromise event.
Because we’ve done so much to encourage automation over the past decade, most of our subscribers aren’t going to have to do much in order to switch to shorter lived certificates. We, on the other hand, are going to have to think about the possibility that we will need to issue 20x as many certificates as we do now. It’s not inconceivable that at some point in our next decade we may need to be prepared to issue 100,000,000 certificates per day.
That sounds sort of nuts to me today, but issuing 5,000,000 certificates per day would have sounded crazy to me ten years ago. Here’s the thing though, and this is what I love about the combination of our staff, partners, and funders - whatever it is we need to do to doggedly pursue our mission, we’re going to get it done. It was hard to build Let’s Encrypt. It was difficult to scale it to serve half a billion websites. Getting our Divvi Up service up and running from scratch in three months to service exposure notification applications was not easy. Our Prossimo project was a primary contributor to the creation of a TLS library that provides memory safety while outperforming its peers - a heavy lift.
Charitable contributions from people like you and organizations around the world make this stuff possible. Since 2015, tens of thousands of people have donated. They’ve made a case for corporate sponsorship, given through their DAFs, or set up recurring donations, sometimes to give $3 a month. That’s all added up to millions of dollars that we’ve used to change the Internet for nearly everyone using it. I hope you’ll join these people and help lay the foundation for another great decade.
Josh Aas
Executive Director
Earlier this year we announced our intent to provide certificate revocation information exclusively via Certificate Revocation Lists (CRLs), ending support for providing certificate revocation information via the Online Certificate Status Protocol (OCSP). Today we are providing a timeline for ending OCSP services:
- January 30, 2025
  - OCSP Must Staple requests will fail, unless the requesting account has previously issued a certificate containing the OCSP Must Staple extension
- May 7, 2025
  - Prior to this date we will have added CRL URLs to certificates
  - On this date we will drop OCSP URLs from certificates
  - On this date all requests including the OCSP Must Staple extension will fail
- August 6, 2025
  - On this date we will turn off our OCSP responders
Additionally, a very small percentage of our subscribers request certificates with the OCSP Must Staple Extension. If you have manually configured your ACME client to request that extension, action is required before May 7. See “Must Staple” below for details.
OCSP and CRLs are both mechanisms by which CAs can communicate certificate revocation information, but CRLs have significant advantages over OCSP. Let’s Encrypt has been providing an OCSP responder since our launch nearly ten years ago. We added support for CRLs in 2022.
Websites and people who visit them will not be affected by this change, but some non-browser software might be.
We plan to end support for OCSP primarily because it represents a considerable risk to privacy on the Internet. When someone visits a website using a browser or other software that checks for certificate revocation via OCSP, the Certificate Authority (CA) operating the OCSP responder immediately becomes aware of which website is being visited from that visitor’s particular IP address. Even when a CA intentionally does not retain this information, as is the case with Let’s Encrypt, CAs could be legally compelled to collect it. CRLs do not have this issue.
We are also taking this step because keeping our CA infrastructure as simple as possible is critical for the continuity of compliance, reliability, and efficiency at Let’s Encrypt. For every year that we have existed, operating OCSP services has taken up considerable resources that can soon be better spent on other aspects of our operations. Now that we support CRLs, our OCSP service has become unnecessary.
We recommend that anyone relying on OCSP services today start the process of ending that reliance as soon as possible. If you use Let’s Encrypt certificates to secure non-browser communications such as a VPN, you should ensure that your software operates correctly if certificates contain no OCSP URL.
Must Staple
Because of the privacy issues with OCSP, browsers and servers implement a feature called “OCSP Stapling”, where the web server sends a copy of the appropriate OCSP response during the TLS handshake, and the browser skips making a request to the CA, thus better preserving privacy.
In addition to OCSP Stapling (a TLS feature negotiated at handshake time), there’s an extension that can be added to certificates at issuance time, colloquially called “OCSP Must Staple.” This tells browsers that, if they see that extension in a certificate, they should never contact the CA about it and should instead expect to see a stapled copy in the handshake. Failing that, browsers should refuse to connect. This was designed to solve some security problems with revocation.
Let’s Encrypt has supported OCSP Must Staple for a long time, because of the potential to improve both privacy and security. However, Must Staple has failed to get wide browser support after many years. And popular web servers still implement OCSP Stapling in ways that create serious risks of downtime.
As part of removing OCSP, we’ll also be removing support for OCSP Must Staple. CRLs have wide browser support and can provide privacy benefits to all sites, without requiring special web server configuration. Thanks to all our subscribers who have helped with the OCSP Must Staple experiment.
If you are not certain whether you are using OCSP Must Staple, you can check this list of hostnames and certificate serials (11.1 MB, .zip).
As of January 30, 2025, issuance requests that include the OCSP Must Staple extension will fail, unless the requesting account has previously issued a certificate containing the OCSP Must Staple extension.
As of May 7, all issuance requests that include the OCSP Must Staple extension will fail, including renewals. Please change your ACME client configuration to not request the extension.
Today we are announcing our intent to end Online Certificate Status Protocol (OCSP) support in favor of Certificate Revocation Lists (CRLs) as soon as possible. OCSP and CRLs are both mechanisms by which CAs can communicate certificate revocation information, but CRLs have significant advantages over OCSP. Let’s Encrypt has been providing an OCSP responder since our launch nearly ten years ago. We added support for CRLs in 2022.
Websites and people who visit them will not be affected by this change, but some non-browser software might be.
We plan to end support for OCSP primarily because it represents a considerable risk to privacy on the Internet. When someone visits a website using a browser or other software that checks for certificate revocation via OCSP, the Certificate Authority (CA) operating the OCSP responder immediately becomes aware of which website is being visited from that visitor’s particular IP address. Even when a CA intentionally does not retain this information, as is the case with Let’s Encrypt, CAs could be legally compelled to collect it. CRLs do not have this issue.
We are also taking this step because keeping our CA infrastructure as simple as possible is critical for the continuity of compliance, reliability, and efficiency at Let’s Encrypt. For every year that we have existed, operating OCSP services has taken up considerable resources that can soon be better spent on other aspects of our operations. Now that we support CRLs, our OCSP service has become unnecessary.
In August of 2023 the CA/Browser Forum passed a ballot to make providing OCSP services optional for publicly trusted CAs like Let’s Encrypt. With one exception, Microsoft, the root programs themselves no longer require OCSP. As soon as the Microsoft Root Program also makes OCSP optional, which we are optimistic will happen within the next six to twelve months, Let’s Encrypt intends to announce a specific and rapid timeline for shutting down our OCSP services. We hope to serve our last OCSP response between three and six months after that announcement. The best way to stay apprised of updates on these plans is to subscribe to our API Announcements category on Discourse.
We recommend that anyone relying on OCSP services today start the process of ending that reliance as soon as possible. If you use Let’s Encrypt certificates to secure non-browser communications such as a VPN, you should ensure that your software operates correctly if certificates contain no OCSP URL. Fortunately, most OCSP implementations “fail open” which means that an inability to fetch an OCSP response will not break the system.
Internet Security Research Group (ISRG) is the parent organization of Let’s Encrypt, Prossimo, and Divvi Up. ISRG is a 501(c)(3) nonprofit. If you’d like to support our work, please consider getting involved, donating, or encouraging your company to become a sponsor.
When we look at the general security posture of Let’s Encrypt, one of the things that worries us most is how much of the operating system and network infrastructure is written in unsafe languages like C and C++. The CA software itself is written in memory safe Golang, but from our server operating systems to our network equipment, lack of memory safety routinely leads to vulnerabilities that need patching.
Partially for the sake of Let’s Encrypt, and partially for the sake of the wider Internet, we started a new project called Prossimo in 2020. Prossimo’s goal is to make some of the most critical software infrastructure for the Internet memory safe. Since then we’ve invested in a range of software components including the Rustls TLS library, Hickory DNS, River reverse proxy, sudo-rs, Rust support for the Linux kernel, and ntpd-rs.
Let’s Encrypt has now taken a step that was a long time in the making: we’ve deployed ntpd-rs, the first piece of memory safe software from Prossimo that has made it into the Let’s Encrypt infrastructure.
Most operating systems use the Network Time Protocol (NTP) to accurately determine what time it is. Keeping track of time is a critical task for an operating system, and since it involves interacting with the Internet it’s important to make sure NTP implementations are secure.
In April of 2022, Prossimo started work on a memory safe and generally more secure NTP implementation called ntpd-rs. Since then, the implementation has matured and is now maintained by Project Pendulum. In April of 2024 ntpd-rs was deployed to the Let’s Encrypt staging environment, and as of now it’s in production.
Over the next few years we plan to continue replacing C or C++ software with memory safe alternatives in the Let’s Encrypt infrastructure: OpenSSL and its derivatives with Rustls, our DNS software with Hickory, Nginx with River, and sudo with sudo-rs. Memory safety is just part of the overall security equation, but it’s an important part and we’re glad to be able to make these improvements.
We depend on contributions from our community of users and supporters in order to provide our services. If your company or organization would like to sponsor Let’s Encrypt please email us at sponsor@letsencrypt.org. We ask that you make an individual contribution if it is within your means.
Mon, 24 Jun 2024 00:00:00 +0000