To play the devil's advocate: TLS on websites where you are not logged in is the greatest security hogwash of all time.
For example, the cookie consent dialog of the NYT lists:
- Store and/or access information on a device: 178 vendors
- Use limited data to select advertising: 111 vendors
- Create profiles for personalised advertising: 135 vendors
- Use profiles to select personalised advertising
- Understand audiences through statistics or combinations of data from different sources: 92 vendors
There is no way to escape any of this unless you spend several hours per week clicking through these dialogs and adjusting adblockers.
And even if you block all cookies, evercookies and fingerprinting, there are still Cloudflare, Amazon, GCP and Azure, who know your cross-site visits.
The NSA is no longer listening because there is TLS everywhere? Sure, and the earth is flat.
TLS is not just for encryption, but also for integrity: the content you are seeing is exactly as intended by the owner of the domain or web service (for whatever that is worth). There is no easy way to MITM or inject content along the way.
> The NSA is no longer listening because there is TLS everywhere? Sure, and the earth is flat.
I’d be very surprised if they haven’t had several of the root trust entities compromised from day one. I wouldn’t rely on TLS with any of the typical widely-deployed trust chains for any secrecy at all if your opponent is US intelligence.
"There is no way to escape any of this unless you spend several hours per week to click through these dialogs and to adjust adblockers."
I read the NYT with no cookies, no JavaScript and no images. Only the Host, User-Agent (googlebot) and Connection headers are sent. A TLS forward proxy, not the browser, sends the requests over the internet. No SNI. No meaningful "fingerprint" for advertising
This only requires accessing a single IP address used by NYT. No "vendors"
TLS is monitored on the network I own. By me
I inspect all TLS traffic. Otherwise connection fails
TLS is cool for stopping your ISP from MiTMing your traffic (usually to insert shitty banner ads or something).
Otherwise I find it a scourge, particularly when I want to run https over a private network, but browsers have a shitfit because I didn't publicly announce my internal hosts.
There's plenty of traffic that has no need to be encrypted, and where not much privacy is added since the DNS queries are already leaked (as well as what the site operator and their many "partners" can gather).
I'm glad you can get free certs from Let's Encrypt, but I hate that https has become mandatory.
Let's Encrypt did more for privacy than any other organization. Before Let's Encrypt, we'd usually deploy TLS certificates, but as somewhat of an afterthought, and leaving HTTP accessible. They were a pain to (very manually) rotate once a year, too.
It's hard to overstate just how much LE changed things. They made TLS the default, so much that you didn't have to keep unencrypted HTTP around any more. Kudos.
I think it was Snowden who made TLS the default. Let's Encrypt did great work, but basically having the NSA's spying made common knowledge (including revealing some things that were worse than we expected, like stealing the traffic between Google's data centers) created a consensus that unencrypted HTTP had to go, despite the objections of people like Roy Fielding.
> I think it was Snowden who made TLS the default.
Snowden's revelations were a convincing argument, but I would place more weight on Google in its "we are become Evil" phase (realistically, ever since they attained escape velocity to megacorphood and search monopoly status), who strove to amass all that juicy user data and not let the ISPs or whoever else have a peek, retaining exclusivity. A competition-thwarting move with nice side benefits, that is. That's not to say that ISPs would've known to use that data effectively, but somebody might, and why not eliminate a potential threat systemically if possible?
Reading this, it seems to me that ISPs missed a trick by not offering privacy features. These features were already baked into mobile wireless, so it probably wouldn't have been a huge deal for them to provide it. That's what happens when you treat your business as a source of rent.
Ironically, the inability to cache TLS on the edge of my network makes the Internet more surveillable since everything has to pass through the Room 641As of the world and subjects us all to more network behavior analysis. The TLS-everything world leaks so much more metadata. It's more secure but less private.
Yes, that's a real problem. Probably moving to a content-centric networking or named-data networking system would help with it, while also creating difficulties for censorship, and IPFS and Filecoin seem to be deploying such a thing in real life as an overlay network over the internet.
Redirection doesn't get the job done, without at least a mechanism so that browsers reliably stop visiting the HTTP site (HSTS) and ideally an HTTPS-everywhere feature which, in turn, was not deployable for ordinary people until almost every common site they visit is HTTPS enabled and works properly.
The problem is that there are active bad guys. Redirection means when there are no bad guys or only passive bad guys, the traffic is encrypted, but bad guys just ensure the redirect sends people to their site instead.
Users who go to http://mysite.example/ would be "redirected" to https://mysite.example/ but that redirection wasn't protected so instead the active bad guy ensures they're redirected to https://scam.example/mysite/ and look, it has the padlock symbol and it says mysite in the bar, what more do you want?
Snowden was definitely a coincidence in the sense that this wasn't a pull decision. Users didn't demand this as a result of Snowden. However, Snowden is why BCP #188 (RFC 7258) aka "Pervasive Monitoring is an Attack" happened, and certainly BCP #188 helped because it was shorthand for why the arguments against encryption everywhere were bogus. One or another advocate for some group who supposedly "need" to be able to snoop on you stands up, gives a twenty minute presentation about why although they think encryption is great, they do need to er, not have encryption, the response in one sentence is "BCP 188 says don't do this". Case closed, go away.
There are always people who insist they have a legitimate need to snoop. Right now in Europe they're pulling on people's "protect the children" heartstrings, but we already know, also in Europe, that the very moment they get a tiny crack for this narrative, in march giant corporations who demand they must snoop to ensure they get their money, and government spies who need to snoop on everybody to ensure they don't get out of line.
> Users who go to http://mysite.example/ would be "redirected" to https://mysite.example/ but that redirection wasn't protected so instead the active bad guy ensures they're redirected to https://scam.example/mysite/ and look, it has the padlock symbol and it says mysite in the bar, what more do you want?
You can do better than this. You can have your mitm proxy follow the SSL redirect itself, but still present plain HTTP to the client. So the client still sees the true "mysite.example" domain in the URL bar (albeit on plain http), and the server has a good SSL session, but the attacker gets to see all of the traffic.
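The downgrade described above can be sketched in a few lines; the core of an sslstrip-style attack is just rewriting secure links in the pages the proxy relays, so the victim's browser never tries HTTPS at all (illustrative code, not any real tool's API):

```python
import re

def strip_https_links(html: str) -> str:
    """Rewrite https:// URLs to http:// so the victim's browser keeps
    talking plaintext to the proxy, while the proxy itself speaks real
    TLS to the origin server (the sslstrip-style downgrade)."""
    return re.sub(r"https://", "http://", html)

page = '<a href="https://mysite.example/login">Log in</a>'
downgraded = strip_https_links(page)
# The victim still sees the true "mysite.example" domain, just over plain HTTP.
```

HSTS exists precisely to defeat this: once the browser has seen the header, it refuses to make the plain-HTTP request that the proxy needs to intercept.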
Yeah, I remember when HTTPS was a novelty, used mainly by e-commerce websites like PayPal. The sites that actually had secrets in their traffic that could be worth protecting, and were willing to pay tens of thousands to buy certificates and then pay the compute tax on the traffic encryption.
I remember deploying SSL on NetWare in the late 1990s and being given ... something that the US allowed to be exported as a munition!
I don't recall the exact details but it was basically buggered - short key length. Long enough to challenge an 80386 Beowulf cluster but no match for whatever was humming away in a very well funded machine room.
You could still play with all the other exciting dials and knobs, SANs and so on but in the end it was pretty worthless.
A few years ago a client of mine gave me a big-ish APC UPS. I recently got new batteries for it after the outage here in Portugal, and to turn on SSH I had to agree that I was not part of a terrorist organisation nor in a country to which encryption cannot be exported.
This protocol definitely made securing the web easier. Thanks to it, I don't need to renew certificates manually (it's now done automatically), which can be tedious...
So, the crucial thing ACME has that the other protocols do not is a hole (and some example ways to fill that hole for your purpose, though others are documented in newer RFCs) for the Proof of Control.
See, SCEP assumes that Bob trusts Alice to make certificates. Alice uses the SCEP server provided by Bob, but she can make any certificate that Bob allows. If she wants to make a certificate claiming she's the US Department of Education, or Hacker News, or Tesco supermarkets, she can do that. For your private Intranet that's probably fine, Alice is head of Cyber Security, she issues certificates according to local rules, OK.
But for the public web we have rules about who we should issue certificates to, and these ultimately boil down to: we want to issue certificates only to the people who actually control the name they're getting a certificate for. Historically this was extremely hardcore (in the mid-1990s when SSL was new), but a race to the bottom ensued and it became basically "Do you have working email for that domain?", and sometimes not even that.
So in parallel with Let's Encrypt, work happened to drag all the trusted certificate issuers to new rules called the "Ten Blessed Methods" which listed (initially ten) ways you could be sure that this subscriber is allowed a certificate for news.ycombinator.com and so if you want to do so you're allowed to issue that certificate.
Several ACME kinds of Proof of Control are actually directly reflected in the Ten Blessed Methods, and gradually the manual options have been deprecated and more stuff moves to ACME.
e.g. "3.2.2.4.19 Agreed‑Upon Change to Website ‑ ACME" is a specific method which is how your cheesiest "Let's Encrypt in a box" type software tends to work, where we prove we control www.some.example by literally just changing a page on www.some.example in a specific way when requested and that's part of the ACME specification so it can be done automatically without a human in the loop.
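A rough sketch of what that automated proof looks like under RFC 8555: the body served at /.well-known/acme-challenge/&lt;token&gt; is the token joined to a thumbprint of the account key. The JWK below is a made-up placeholder, and `json.dumps` with sorted keys only approximates the RFC 7638 canonicalization, which restricts the JWK to its required members:

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    """Unpadded base64url, as used throughout ACME."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def key_authorization(token: str, account_jwk: dict) -> str:
    """HTTP-01 response body: token + '.' + base64url(SHA-256(canonical JWK)).
    The CA fetches http://<domain>/.well-known/acme-challenge/<token>
    and expects exactly this string back."""
    canonical = json.dumps(account_jwk, sort_keys=True, separators=(",", ":"))
    thumbprint = b64url(hashlib.sha256(canonical.encode()).digest())
    return f"{token}.{thumbprint}"
```

Because both sides can compute this string mechanically, the whole proof runs without a human in the loop, which is exactly what makes "Let's Encrypt in a box" software possible.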
Can someone explain why letsencrypt certificates have to be 90 days expiry? I know there is automation available, but what is the rationale for 90 days?
Because companies can't be trusted to set up proper renewal procedures.
If a cert has to be renewed once every 3 years, plenty of companies will build an extremely complicated bureaucratic dance around the process.
In the past this has resulted in CAs saying "something went wrong, and we should revoke, but Bank X is in a Holiday Freeze and won't be able to rotate any time in the next two months, and they are Critical Infrastructure!". Similarly, companies have ended up trying to sue their CA to block an inconvenient revocation.
Most of those have luckily been due to small administrative errors, but it has painfully shown that the industry is institutionally incapable of setting up proper renewal processes.
The solution is automated renewal as you can't make that too complicated, and by shortening the cert validity they are trying to make manual renewal too painful to keep around. After all, you can't set up a two-months-long process if you need to renew every 30 days!
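As a sketch of how automation sidesteps the bureaucracy: clients in the certbot style renew well before expiry, conventionally once about a third of the lifetime remains. The function name and thresholds here are illustrative, not any particular client's API:

```python
from datetime import datetime, timedelta
from typing import Optional

def should_renew(not_after: datetime,
                 lifetime_days: int = 90,
                 now: Optional[datetime] = None) -> bool:
    """Renew once a third of the certificate's lifetime remains,
    e.g. 30 days before expiry for a 90-day certificate. Running this
    daily from a cron job leaves two renewal windows of slack before
    anything actually breaks."""
    now = now or datetime.utcnow()
    return (not_after - now) <= timedelta(days=lifetime_days / 3)
```

With that slack built in, a failed renewal is a warning you can fix at leisure rather than an outage, which is the whole point of short lifetimes plus automation.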
It's so annoying. Eventually we will get to the point that every connection will have its own unique certificate, and so any compromised CA will be able to be “tapped” for a particular target without anybody else being able to compare certs and figure it out.
Has anyone considered the possibility that a CA such as Let's Encrypt could be compromised or even run entirely by intelligence operatives? Of course, there are many other CAs that could be compromised and making money off of customers on top of that. But who knows... What could defend against this possibility? Multiple signatures on a certificate?
Even funnier, if one SIGINT team built a centralized "encryption everywhere" effort (before sites get encryption elsewhere), but that asset had to be need-to-know secret, so another SIGINT team of the same org, not knowing the org already owned "encryption everywhere", responded to the challenge by building a "DoS defense" service that bypasses the encryption, and started DoS driving every site of interest to that service.
(Seriously: I strongly suspect that Let's Encrypt's ISRG are the good guys. But a security mindset should make you question everything, and recognize when you're taking something on faith, or taking a risk, so that it's a conscious decision, and you can re-evaluate it when priorities change.)
Sounds like Cloudflare honestly. There are many issues with CA trust in the modern Internet. The most paranoid among us would do well to remove every trusted CA key from their OS and build a minimal set from scratch, I suppose. Browsers simply make it too easy to overlook CA-related issues, especially if you think a CA is compromised or malicious.
A signature on a certificate doesn't allow the CA to snoop. They need access to the private key for that, which ACME (and certificate signing protocols in general) doesn't share with the CA.
> They need access to the private key for that, which ACME (and other certificate signing protocols in general) doesn't share with the CA.
Modern TLS doesn't even rely on the privacy of the private key as much as it used to: nowadays with (perfect) forward secrecy it's mainly used to establish trust, after which the two parties generate transient session keys.
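That forward-secrecy point can be made concrete with a toy finite-field Diffie-Hellman exchange. Real TLS uses ECDHE (e.g. X25519) rather than numbers like these, and the certificate's key only signs the server's ephemeral share; it never touches the session secret:

```python
import secrets

# Toy Diffie-Hellman, purely illustrative: 2**127 - 1 is a Mersenne prime,
# far too small for real use.
P = 2**127 - 1
G = 3

a = secrets.randbelow(P - 2) + 2   # client's ephemeral secret, thrown away after the session
b = secrets.randbelow(P - 2) + 2   # server's ephemeral secret, likewise
A = pow(G, a, P)                   # sent in the clear
B = pow(G, b, P)                   # sent in the clear, signed with the certificate key

# Both sides derive the same session key; an eavesdropper who records A and B,
# and even later steals the certificate's private key, still learns nothing.
assert pow(B, a, P) == pow(A, b, P)
```

This is why compromising a CA lets an attacker impersonate a site going forward, but does not let them decrypt traffic they merely recorded.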
If the CA is somehow able to control the communication (usually they can't, but if they are run by intelligence operatives then maybe they have that capability, though they probably would not use it often, to reduce the chance of being detected), they could substitute a certificate with their own keys (and then communicate with the original server using the original keys to obtain the required information). However, this does not apply if both sides verify by an independent method that the key is correct (which would also allow detecting the substitution).
Adding multiple signatures to a certificate would be difficult because the extensions must be a part of the certificate which will be signed. (However, there are ways to do such thing as web of trust, and I had thought of ways to do this with X.509, although it does not normally do that. Another way would be an extension which is filled with null bytes when calculating the extra signatures and then being filled in with the extra signatures when calculating the normal signature.)
(Other X.509 extensions would also be helpful for various reasons, although the CAs might not allow that, due to various requirements (some of which are unnecessary).)
Another thing that helps is using X.509 client certificates for authentication in addition to server certificates. If you do this, then any MITM will not be able to authenticate (unless at least one side allows them to do so). X.509 client authentication has many other advantages as well.
In addition, it might be helpful to allow you to use those certificates to issue additional certificates (e.g. to subdomains); but, whoever verifies the certificate (usually the client, but it can also be the server in case of a client certificate) would then need to check the entire certificate chain to check the permissions allowed by the certificate.
There is also the possibility that certificate authorities will refuse to issue certificates to you for whatever reasons.
Even access to the private key doesn't permit a passive adversary to snoop on traffic that's using a ciphersuite that provides perfect forward secrecy, because the private key is only used to authenticate the session key negotiation protocol, which generates a session key that cannot be computed from the captured session traffic. Most SSL and TLS ciphersuites provide PFS nowadays.
An active adversary engaging in a man-in-the-middle attack on HTTPS can do it with the private key, as you suggest, but they can also do it with a completely separate private key that is signed by any CA the browser trusts. There are firewall vendors that openly do this to every single HTTPS connection through the firewall.
HPKP was a defense against this (https://en.wikipedia.org/wiki/HTTP_Public_Key_Pinning) but HPKP caused other, worse problems, and was deprecated in 02017 and later removed. CT logging is another, possibly weaker defense. (It only works for CAs that participate in CT, and it only detects attacks after the fact; it doesn't make them impossible.)
In fact knowing the private key for other people's certificate you issue is strictly forbidden for the publicly trusted CAs. That's what happened years back when a "reseller" company named Trustico literally sent the private keys for all their customers to the issuing CA apparently under the impression this would somehow result in refunding or re-issuing or something. The CA checked, went "These are real, WTF?" and revoked all the now useless certificates.
It is called a private key for a reason. Don't tell anybody. It's not a secret that you're supposed to share with somebody, it's private, tell nobody. Which in this case means - don't let your "reseller" choose the key, that's now their key, your key should be private which means you don't tell anybody what it is.
If you're thinking "But wait, if I don't tell anybody, how can that work?" then congratulations - this is tricky mathematics they didn't cover in school, it is called "Public key cryptography" and it was only invented in the 20th century. You don't need to understand how it works, but if you want to know, the easiest kind still used today is called the RSA Digital Signature so you can watch videos or read a tutorial about that.
If you're just wondering about Let's Encrypt, well, Let's Encrypt don't know or want to know anybody else's private keys either, the ACME software you use will, in entirely automated cases, pick random keys, not tell anybody, but store them for use by the server software and obtain suitable certificate for those keys, despite not telling anybody what the key is.
I know that. But presumably, Let's Encrypt could participate in a MITM attack since they can sign another key, so that even the visitor who knows that you use them as a CA can't tell there is a MITM. Checking multiple signatures on the same key could raise the bar for a MITM attack, requiring multiple CA's to participate. I can't be the first person to think of this. I'm not even a web security guy.
It might be interesting for ACME to be updated to support signing the same key with multiple CAs. Three sounds like a good number. You ought to be able to trust CAs enough to believe that there won't be three of them conspiring against you, but you never really know.
This problem was solved in the mid 2010s by Certificate Transparency. Every issued certificate that browsers trust must be logged to a public append-only certificate transparency log. As a result, you can scan the logs to see if any certs were issued for your domain for keys that you don't control (and many tools and companies exist to do this).
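A minimal sketch of the append-only property CT relies on: log entries hash into a Merkle tree head, loosely after RFC 6962 (the odd-leaf handling below duplicates the last node, a simplification; the RFC instead splits at the largest power of two). Any attempt to drop or rewrite an issued certificate changes the head that monitors have already seen:

```python
import hashlib

def leaf_hash(cert_der: bytes) -> bytes:
    # RFC 6962 domain-separates leaves (0x00) from interior nodes (0x01)
    return hashlib.sha256(b"\x00" + cert_der).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def tree_head(entries: list) -> bytes:
    """Merkle root over the log's entries. Appending an entry changes
    the head, so a log cannot silently alter its history."""
    level = [leaf_hash(e) for e in entries]
    while len(level) > 1:
        if len(level) % 2:            # simplified: carry odd node up by duplication
            level.append(level[-1])
        level = [node_hash(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Monitors watching the logs for your domain are what turn "the CA could misissue" into "the CA could misissue, once, and get caught".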
Having Chrome/Firefox asynchronously check the CT log 0.1% of the time would probably be enough to solve that.
CT logging is mandatory, and even a single missing cert is probably going to be an existential threat to any CA.
The fact that someone is checking is already enough of a deterrent to prevent large-scale attacks. And if you're worried about spearphishing-via-MitM, you should probably stick to Tor.
The signing keys used by the Certificate Authority to assert that the client (leaf) certificate is authentic through cryptographic signing differ from the private keys used to secure communication with the host(s) referenced in the x509 CN/SAN fields.
I know that. At issue is the fact that the signing keys can be used to sign a MITM key. If there were multiple signatures on the original key, it would (or could) be a lot harder to MITM (presumably). Do you trust any CA enough to never be involved in this kind of scandal? Certainly government CA's and corporate CA's MITM people all the time.
Edit: I'm gonna be rate limited, but let me just say now that Certificate Transparency sounds interesting. I need to look into that more, but it amounts to a 3rd party certificate verification service. Now, we have to figure out how to connect to that service securely lol... Thanks, you've given me something to go read about.
I mean, it doesn't help that the browser duopoly is making it harder and harder to use self-signed certificates these days. Why, if I were more paranoid, I might come to a similar conclusion.
it seems like all this infrastructure could be replaced by a DNS TXT record with a public key that browsers could use to check the cert sent from the web server. A web server would load a self-signed cert (or whatever cert they wanted), and put the cert's public key into a DNS record for that hostname. Every visit to a website would need two lookups, one for address and one for key. It puts control back into the hands of the domain owners and eliminates the need for letsencrypt.
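The check being proposed could be sketched as below. The `pin-sha256=` TXT format is invented here purely for illustration; DANE/TLSA (RFC 6698) is the standardized version of this idea, and as the reply notes, it is only as trustworthy as the DNS answer, which is why it depends on DNSSEC:

```python
import hashlib

def pin_matches(cert_pubkey_der: bytes, txt_record: str) -> bool:
    """Compare the SHA-256 of the server's public key against a pin
    published in a (hypothetical) DNS TXT record like 'pin-sha256=<hex>'.
    Without DNSSEC, whoever can spoof this record can also spoof the pin."""
    expected = txt_record.removeprefix("pin-sha256=")
    return hashlib.sha256(cert_pubkey_der).hexdigest() == expected
```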
I'm not sure what that would solve. You would still need some central entity to sign the DNS TXT record, to ensure that the HTTPS client does not use a tampered DNS TXT record.
> I inspect all TLS traffic. Otherwise connection fails

This has nothing to do with TLS’s security model. You still have to trust the site you’re connecting to.
And with that, kudos to Mozilla, EFF and the University of Michigan for founding Let's Encrypt for just that purpose.
(I do work at Mozilla now, but this predates me. Still think it's one of its most significant (and sadly often overlooked) contributions though.)
Yeah, massive massive contribution to the world. I can't think of many other nonprofits that have had such an impact for the betterment of humanity.
> browsers have a shitfit because I didn't publicly announce my internal hosts

You can do it if you're happy to deploy your CA to your network, can't you? Deploying CA certs sucks, though. I wish it was easier.
The article claims HTTP was kept around. My experience was that once you set up HTTPS, you just redirected HTTP, like today.
Snowden may have been a coincidence, too. We knew encryption was better, it was just too much of a hassle for most sites.
Thank you Let’s Encrypt, you changed the world and made it better.
Sorry to everyone else who was listening in on the wire. Come back with a warrant, I guess?!
Seriously, talk about impact. That one non-profit has almost single-handedly encrypted most of the web, 700 million sites now! Amazing work.
Sorry to a basket of my old devices which I would otherwise still use.
I'm glad it had that. If you were, say, a member of ISIS and used the UPS, they'd be able to successfully sue you for breach.
> I had to agree that I was not a terrorist organisation's nor in a country where encryption can not be exported to.
Don't forget when flying to the USA, ticking the box to say you won't try to overthrow the government.
I'm sure that clause has stopped many an invading army in their tracks.
Right, 40-bit export-grade SSL.
There are several other certificate provisioning protocols:
* https://en.wikipedia.org/wiki/Simple_Certificate_Enrollment_...
So, the crucial thing ACME has that the other protocols do not is a hole (and some example ways to fill that hole for your purpose, though others are documented in newer RFCs) for the Proof of Control.
See, SCEP assumes that Bob trusts Alice to make certificates. Alice uses the SCEP server provided by Bob, but she can make any certificate that Bob allows. If she wants to make a certificate claiming she's the US Department of Education, or Hacker News, or Tesco supermarkets, she can do that. For your private Intranet that's probably fine: Alice is head of Cyber Security, she issues certificates according to local rules, OK.
But for the public web we have rules about who we should issue certificates to, and these ultimately boil down to: we want to issue certificates only to the people who actually control the name they're getting a certificate for. Historically (in the mid-1990s when SSL was new) this was extremely hardcore, but a race to the bottom ensued and it eventually became basically "Do you have working email for that domain?", and sometimes not even that.
So in parallel with Let's Encrypt, work happened to drag all the trusted certificate issuers to new rules called the "Ten Blessed Methods" which listed (initially ten) ways you could be sure that this subscriber is allowed a certificate for news.ycombinator.com and so if you want to do so you're allowed to issue that certificate.
Several ACME kinds of Proof of Control are actually directly reflected in the Ten Blessed Methods, and gradually the manual options have been deprecated and more stuff moves to ACME.
e.g. "3.2.2.4.19 Agreed‑Upon Change to Website ‑ ACME" is a specific method which is how your cheesiest "Let's Encrypt in a box" type software tends to work, where we prove we control www.some.example by literally just changing a page on www.some.example in a specific way when requested and that's part of the ACME specification so it can be done automatically without a human in the loop.
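The mechanics of that method (the HTTP-01 challenge in RFC 8555) are simple enough to sketch. Below is a minimal, hypothetical responder: the token comes from the CA's challenge object, and the file content is the "key authorization", the token joined with a dot to the base64url-encoded SHA-256 thumbprint of your ACME account key (RFC 7638). The token and JWK values here are placeholders, not real challenge data.

```python
import base64
import hashlib
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder challenge data: a real ACME client receives the token from
# the CA and computes the thumbprint over its account key's canonical JWK.
TOKEN = "token-from-the-ca"
ACCOUNT_JWK_JSON = b'{"e":"AQAB","kty":"RSA","n":"..."}'  # placeholder JWK

def key_authorization(token: str, jwk_json: bytes) -> str:
    # RFC 8555: keyAuthorization = token || "." || base64url(SHA-256(JWK))
    digest = hashlib.sha256(jwk_json).digest()
    thumbprint = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return f"{token}.{thumbprint}"

class ChallengeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The CA fetches exactly this well-known path over plain HTTP.
        if self.path == f"/.well-known/acme-challenge/{TOKEN}":
            body = key_authorization(TOKEN, ACCOUNT_JWK_JSON).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

# To actually serve the challenge (port 80, so the CA can reach it):
# HTTPServer(("", 80), ChallengeHandler).serve_forever()
```

Real ACME clients like certbot do exactly this dance (or write the file into your webroot) and then tear it down once the CA has validated.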
"The challenge is based on device attestation and what’s new in this case is the arrival of a third party, the attestation server."
Can someone explain why letsencrypt certificates have to be 90 days expiry? I know there is automation available, but what is the rationale for 90 days?
Because companies can't be trusted to set up proper renewal procedures.
If a cert has to be renewed once every 3 years, plenty of companies will build an extremely complicated bureaucratic dance around the process.
In the past this has resulted in CAs saying "something went wrong, and we should revoke, but Bank X is in a Holiday Freeze and won't be able to rotate any time in the next two months, and they are Critical Infrastructure!". Similarly, companies have ended up trying to sue their CA to block an inconvenient revocation.
Most of those have luckily been due to small administrative errors, but it has painfully shown that the industry is institutionally incapable of setting up proper renewal processes.
The solution is automated renewal, since you can't wrap that in much bureaucracy, and by shortening the cert validity they are trying to make manual renewal too painful to keep around. After all, you can't set up a two-months-long process if you need to renew every 30 days!
Others have already given your answer, but heads up, LE is lowering the certificate lifetime to 45 days[0].
- [0] https://letsencrypt.org/2025/12/02/from-90-to-45
I’ve heard one rationale that it is short enough to force you to set up the automation, but don’t know if this was actually a consideration or not
You can just read their explanation: https://letsencrypt.org/2015/11/09/why-90-days
Tl;dr is to limit damage from leaked certs and to encourage automation.
Related recently:
Decreasing Certificate Lifetimes to 45 Days
https://news.ycombinator.com/item?id=46117126
The best computer possible on Earth today would need 91 days to crack it, in the best case.
It's so annoying. Eventually we will get to the point that every connection has its own unique certificate, and so any compromised CA can be "tapped" for a particular target without anybody else being able to compare certs and figure it out.
Thank you for your service
Has anyone considered the possibility that a CA such as Let's Encrypt could be compromised or even run entirely by intelligence operatives? Of course, there are many other CAs that could be compromised and making money off of customers on top of that. But who knows... What could defend against this possibility? Multiple signatures on a certificate?
Even funnier, if one SIGINT team built a centralized "encryption everywhere" effort (before sites get encryption elsewhere), but that asset had to be need-to-know secret, so another SIGINT team of the same org, not knowing the org already owned "encryption everywhere", responded to the challenge by building a "DoS defense" service that bypasses the encryption, and started DoS driving every site of interest to that service.
(Seriously: I strongly suspect that Let's Encrypt's ISRG are the good guys. But a security mindset should make you question everything, and recognize when you're taking something on faith, or taking a risk, so that it's a conscious decision, and you can re-evaluate it when priorities change.)
Sounds like Cloudflare honestly. There are many issues with CA trust in the modern Internet. The most paranoid among us would do well to remove every trusted CA key from their OS and build a minimal set from scratch, I suppose. Browsers simply make it too easy to overlook CA-related issues, especially if you think a CA is compromised or malicious.
A signature on a certificate doesn't allow the CA to snoop. They need access to the private key for that, which ACME (and certificate-signing protocols in general) doesn't share with the CA.
> They need access to the private key for that, which ACME (and other certificate signing protocols in general) doesn't share with the CA.
Modern TLS doesn't even rely on the privacy of the private key as much as it used to: nowadays, with (perfect) forward secrecy, it's mainly used to establish trust, after which the two parties generate transient session keys.
* https://en.wikipedia.org/wiki/Forward_secrecy
So even if the private key is compromised sometime in the future, past conversations cannot be decrypted.
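As a toy illustration of the idea (not real TLS, which uses X25519/ECDHE over vastly larger groups), here is ephemeral finite-field Diffie-Hellman in plain Python. The point is that both sides invent fresh secrets per connection, so the certificate's long-term private key never touches the session keys:

```python
import secrets

# Toy Diffie-Hellman, for illustration only. The prime is far too small
# for real use; TLS 1.3 uses X25519 or multi-thousand-bit groups.
P = 0xFFFFFFFFFFFFFFC5  # largest 64-bit prime
G = 5

def ephemeral_keypair():
    # A fresh secret exponent per connection; discarded after the session.
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# One "handshake": each side sends only its public value over the wire.
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()

# Both sides derive the same shared secret; an eavesdropper who later
# steals the server's certificate key learns nothing about it.
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
assert shared_a == shared_b
```

The certificate's private key only signs the handshake transcript so each side knows who it is talking to; the traffic keys come from the ephemeral exchange above.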
If the CA is somehow able to control the communication (usually they can't, but a CA run by intelligence operatives might have that capability, though they would probably use it sparingly to reduce the chance of being detected), they could substitute a certificate with their own keys, and then relay to the original server using the original certificate to obtain the information they want. However, this does not work if both sides verify by an independent method that the key is correct (which would also let them detect the substitution).
Adding multiple signatures to a certificate would be difficult because the extensions must be a part of the certificate that gets signed. (However, there are ways to do such things as web of trust, and I had thought of ways to do this with X.509, although it does not normally do that. Another way would be an extension which is filled with null bytes when calculating the extra signatures and then filled in with those signatures when calculating the normal signature.)
(Other X.509 extensions would also be helpful for various reasons, although the CAs might not allow that, due to various requirements (some of which are unnecessary).)
Another thing that helps is using X.509 client certificates for authentication in addition to server certificates. If you do this, then any MITM will not be able to authenticate (unless at least one side allows them to do so). X.509 client authentication has many other advantages as well.
In addition, it might be helpful to allow you to use those certificates to issue additional certificates (e.g. to subdomains); but, whoever verifies the certificate (usually the client, but it can also be the server in case of a client certificate) would then need to check the entire certificate chain to check the permissions allowed by the certificate.
There is also the possibility that certificate authorities will refuse to issue certificates to you for whatever reasons.
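For what it's worth, requiring client certificates is a one-flag change in most TLS stacks. A sketch with Python's ssl module, where the file paths are hypothetical placeholders for your own server certificate, key, and the CA you trust to vouch for clients:

```python
import ssl

def make_mtls_server_context(cert_file: str, key_file: str,
                             client_ca_file: str) -> ssl.SSLContext:
    """Build a server context that rejects any client that does not
    present a certificate signed by the given CA. Paths are placeholders."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(cert_file, key_file)
    # Trust only this CA to have issued client certificates.
    ctx.load_verify_locations(client_ca_file)
    # CERT_REQUIRED makes the handshake itself fail without a valid
    # client certificate, so a MITM cannot complete the connection.
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx
```

With this in place, an interloper who can forge the server side still cannot impersonate the client, because they hold no key the server's client-CA has signed.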
Even access to the private key doesn't permit a passive adversary to snoop on traffic that's using a ciphersuite that provides perfect forward secrecy, because the private key is only used to authenticate the session key negotiation protocol, which generates a session key that cannot be computed from the captured session traffic. Most SSL and TLS ciphersuites provide PFS nowadays.
An active adversary engaging in a man-in-the-middle attack on HTTPS can do it with the private key, as you suggest, but they can also do it with a completely separate private key that is signed by any CA the browser trusts. There are firewall vendors that openly do this to every single HTTPS connection through the firewall.
HPKP was a defense against this (https://en.wikipedia.org/wiki/HTTP_Public_Key_Pinning) but HPKP caused other, worse problems, and was deprecated in 02017 and later removed. CT logging is another, possibly weaker defense. (It only works for CAs that participate in CT, and it only detects attacks after the fact; it doesn't make them impossible.)
In fact, for publicly trusted CAs, knowing the private key for a certificate they issue to somebody else is strictly forbidden. That's what happened years back when a "reseller" company named Trustico literally sent the private keys for all their customers to the issuing CA, apparently under the impression this would somehow result in refunding or re-issuing or something. The CA checked, went "These are real, WTF?" and revoked all the now useless certificates.
It is called a private key for a reason. Don't tell anybody. It's not a secret that you're supposed to share with somebody, it's private, tell nobody. Which in this case means - don't let your "reseller" choose the key, that's now their key, your key should be private which means you don't tell anybody what it is.
If you're thinking "But wait, if I don't tell anybody, how can that work?" then congratulations - this is tricky mathematics they didn't cover in school, it is called "Public key cryptography" and it was only invented in the 20th century. You don't need to understand how it works, but if you want to know, the easiest kind still used today is called the RSA Digital Signature so you can watch videos or read a tutorial about that.
If you're just wondering about Let's Encrypt: well, Let's Encrypt don't know or want to know anybody else's private keys either. The ACME software you use will, in entirely automated cases, pick random keys, tell nobody, store them for use by the server software, and obtain a suitable certificate for those keys, despite never telling anybody what the key is.
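If you want the flavour of how signing with a never-shared private key works, here is the classic textbook RSA example. The numbers are tiny and there is no padding, so this is illustration only; real keys are 2048+ bits:

```python
# Classic textbook RSA parameters; utterly insecure at this size.
p, q = 61, 53
n = p * q                  # modulus, published: 3233
e = 17                     # public exponent, published
phi = (p - 1) * (q - 1)    # 3120, derived from the secret primes
d = pow(e, -1, phi)        # private exponent, told to nobody: 2753

message = 65               # stand-in for a hash of the real message
signature = pow(message, d, n)          # only the private-key holder can do this
assert pow(signature, e, n) == message  # anyone with (n, e) can verify it
```

The verifier never needs d, and nothing published lets them compute it (for real key sizes); that asymmetry is the whole trick the comment above is describing.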
I know that. But presumably, Let's Encrypt could participate in a MITM attack since they can sign another key, so that even a visitor who knows you use them as a CA can't tell there is a MITM. Checking multiple signatures on the same key could raise the bar for a MITM attack, requiring multiple CAs to participate. I can't be the first person to think of this. I'm not even a web security guy.
It might be interesting for ACME to be updated to support signing the same key with multiple CAs. Three sounds like a good number. You ought to be able to trust CAs enough to believe that there won't be three of them conspiring against you, but you never really know.
This problem was solved in the mid 2010s by Certificate Transparency. Every issued certificate that browsers trust must be logged to a public append-only certificate transparency log. As a result, you can scan the logs to see if any certs were issued for your domain for keys that you don't control (and many tools and companies exist to do this).
I wouldn’t consider it “solved” because most organizations and people don’t actually check the log.
And a malicious actor can abuse this fact.
Having Chrome/Firefox asynchronously check the CT log 0.1% of the time would probably be enough to solve that.
CT logging is mandatory, and even a single missing cert is probably going to be an existential threat to any CA.
The fact that someone is checking is already enough of a deterrent to prevent large-scale attacks. And if you're worried about spearphishing-via-MitM, you should probably stick to Tor.
The signing keys a Certificate Authority uses to assert that a client (leaf) certificate is authentic differ from the private keys used to secure communication with the host(s) named in the X.509 CN/SAN fields.
I know that. At issue is the fact that the signing keys can be used to sign a MITM key. If there were multiple signatures on the original key, it would (or could) be a lot harder to MITM (presumably). Do you trust any CA enough to never be involved in this kind of scandal? Certainly government CAs and corporate CAs MITM people all the time.
Edit: I'm gonna be rate limited, but let me just say now that Certificate Transparency sounds interesting. I need to look into that more, but it amounts to a 3rd party certificate verification service. Now, we have to figure out how to connect to that service securely lol... Thanks, you've given me something to go read about.
This is where Certificate Transparency -- and it being mandatory for browser trust -- comes in to save the day.
I mean, it doesn't help that the browser duopoly is making it harder and harder to use self-signed certificates these days. Why, if I were more paranoid, I might come to a similar conclusion.
It seems like all this infrastructure could be replaced by a DNS TXT record with a public key that browsers could use to check the cert sent from the web server. A web server would load a self-signed cert (or whatever cert it wanted), and put the cert's public key into a DNS record for that hostname. Every visit to a website would need two lookups: one for the address and one for the key. It puts control back into the hands of the domain owners and eliminates the need for Let's Encrypt.
E.g. DNS-Based Authentication of Named Entities? https://www.rfc-editor.org/rfc/rfc6698
There's a TLSA resource record for certificates instead of a TXT encoding.
As far as I know no major browser supports it, and adoption is hindered by DNSSEC adoption.
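For concreteness, the common DANE-EE form of such a record is just a hash of the certificate's SubjectPublicKeyInfo published under a service-specific name. A sketch, where the domain and the SPKI bytes are placeholders (real SPKI bytes come from parsing the server's certificate):

```python
import hashlib

def tlsa_3_1_1_data(spki_der: bytes) -> str:
    """Data field for a 'TLSA 3 1 1' record: usage=DANE-EE(3),
    selector=SubjectPublicKeyInfo(1), matching-type=SHA-256(1), per RFC 6698."""
    return hashlib.sha256(spki_der).hexdigest()

# Hypothetical record for HTTPS on example.com (placeholder key bytes):
record = (
    "_443._tcp.example.com. IN TLSA 3 1 1 "
    + tlsa_3_1_1_data(b"placeholder-spki-der-bytes")
)
```

A DANE-aware client would fetch this record (over DNSSEC) and compare the hash against the key in the presented certificate, with no CA involved.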
I'm not sure what that would solve. You would still need some central entity to sign the DNS TXT record, to ensure that the HTTPS client does not use a tampered DNS TXT record.
If someone can tamper with your DNS TXT records now they can get a certificate for your domain.
Not tamper with the record directly, but MitM it on the way to a target.
That should be prevented by dnssec no?
That's what DNSSEC is for.
Yes, but that's just PKI again, which is what the OP was trying to avoid.
That's already the case with dns-01 verification, no?
Besides, if someone has access to your TXT records then chances are they can also change A records, and you've lost already.
Ah but then how would nations spy on people by compromising the root certificate?
You're insinuating that the Let's Encrypt roots are compromised?
https://letsencrypt.org/repository/#isrg-legal-transparency-...
No, but it’s a well-established fact that some CAs are run by governments, some of which are publicly trusted by browsers.
I’m sorry, who the heck wrote this and why should I trust them? Very poorly written, also.
It’s bizarre. There is a photo at the top, no name, no site title. No about page. Extremely untrustworthy.
No! It's not bizarre.
Scroll down to the footer → click on "Homepage".
Then you will get to his homepage: https://www.brocas.org/
It certainly affected Wile E Coyote.
And plan9 users worldwide!
(There's dozens of us!)