Overthrow the Cert Mafia!
The certificate system is badly broken on a couple of levels, the most recent evidence being the revelation that TurkTrust accidentally issued two intermediate SSL CAs, which enabled the recipients to issue presumptively valid arbitrary certificates. This is just the latest compromise (probably only the latest for now, since this seems to happen a lot) in a disastrously flawed system that has already given us the DigiNotar and Comodo attacks. There are 650 root CAs that can issue certs, including some CAs operated by governments with potentially conflicting political interests or poor human rights records, and your browser probably trusts most or all of them completely by default.
It is useful to think about what we use SSL certs for:
- Establishing an encrypted link between our network client and a remote server to foil eavesdropping and surveillance.
- Verifying that the remote server is who we believe it to be.
Encryption is by far the more important of the two, so much more important than verification that verification is almost irrelevant, and the fundamental flaws in the current CA verification system make even trying to enforce verification almost pointless. Most users have no idea what any of the cryptic (no pun intended) and increasingly annoying alerts warning of “unvalidated certs” mean, or even what SSL is.
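To make the distinction concrete, here is a minimal Python sketch of the same TLS handshake doing its two separate jobs; the host and port are placeholders. The encryption-only context will happily talk to a self-signed cert and still defeats passive eavesdropping, while the default context refuses anything that doesn't chain to a trusted root CA.

```python
# Sketch only: HOST/PORT are hypothetical stand-ins for any SSL service.
import socket
import ssl

HOST, PORT = "mail.example.org", 995  # e.g. a personal POP3-over-SSL server

# 1) Encryption only: accept any cert (self-signed included). The link is
#    protected from eavesdropping, but the peer's identity is not checked.
enc_only = ssl.create_default_context()
enc_only.check_hostname = False
enc_only.verify_mode = ssl.CERT_NONE

# 2) Encryption plus verification: the default context rejects any cert
#    that does not chain to a root CA your system already trusts.
verified = ssl.create_default_context()

for name, ctx in (("encryption only", enc_only), ("CA-verified", verified)):
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                print(f"{name}: connected, cipher = {tls.cipher()}")
    except (ssl.SSLError, OSError) as err:
        print(f"{name}: failed ({err})")
```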
Google recently started rejecting self-signed certs when attempting to establish an SSL encrypted POP connection via Gmail, an idiotically counterproductive move that will only make the internet less secure by forcing individual mail servers to connect unencrypted. And this from the company whose cert management across their round-robin servers is such a nightmare that there’s no practical way to ever be sure whether a connection has been MITMed, as certs come randomly from any number of registrars and change constantly.
What I find most annoying is that the extraordinary protective value of SSL encrypted communication is systematically undermined by browsers like Firefox in an intrinsically useless effort to convince users to care about verification. I have never, not once, ever not clicked through SSL warnings. And even though I often access web sites from areas that are suspected of occasionally attempting to infiltrate dissident organizations with MITM attacks, I still have yet to see a legit MITM attack in the wild myself. But I do know for sure that without SSL encryption my passwords would be compromised. Encryption really matters and is really important to keeping communication secure; anything that adds friction to encryption should be rejected. Verification would be nice if it worked.
Self-signed certs and community-verified certs (like CAcert.org) should be accepted without any warnings that might slow a user down at all, so that all websites, even non-commercial or personal ones, have as little disincentive to adding encryption as possible. HTTPS Everywhere, damnit. Routers should be configured to block non-SSL traffic (and HTML email, but that’s another rant. Get off my lawn.)
Verification is unsolvable with SSL certs for a couple of reasons: some due to the current model, some due to reasonable human behavior, some due to relatively legitimate law-enforcement concerns, but mostly because absolute remote verification is probably an intractable problem.
Even at a well-run notary, human error is likely to occur, and because registrar certs are by default trusted globally, a simple typo can compromise anyone in the world. One simple mistake and everybody is at risk. Pinning does not actually reduce this risk, as the breaches so far have come from generally well-regarded notaries, though rapid response to discovered breaches can limit the damage. Tools like Convergence, Perspectives, and CrossBear could mitigate the problem, but only if they have sufficiently few false positives that people pay attention to the warnings, and only if they are built in by default.
But even if issuance were somehow fixed with teams of on-the-ground inspectors and biometrics and colonoscopies, it wouldn’t necessarily help. Most people would happily click through to www.bankomerica.com without thinking twice. Indeed, as companies may have purchased almost every spelling variation and pointed them all toward their “most reasonable” domain name, it isn’t unreasonable to do so. If bankomerica.com asked for a cert in Ubeki-beki-beki-stan-stan, would they (or even should they) be denied? No – valid green bar, invalid site. Even if misdirections were not SSL encrypted, it isn’t practical to typo-test every legit URL against every possible fake, and the vast majority of users would never notice if their usual bank site came up unencrypted one day via a DNS attack pointing to a site not even pretending to fake a cert (in fact, studies suggest that no users would notice). This user limitation fundamentally obviates the value of certs for identifying sites. But even a typo misdirection is assuming too much of the user – all of my phishing spam uses brand names in anchor text leading to completely random URLs, rarely even reflective of the cover story, and the volume of such spam suggests this is a perfectly viable attack. Verification attacks don’t even need to go to a vaguely similar domain, let alone go to all the trouble of attacking SSL.
One would hope that dissidents or political activists in democracy-challenged environments that may be subject to MITM attacks would actually pay attention to cert errors or use Perspectives, Convergence, or CrossBear. User education should help, but in the end you can’t really solve the stupid-user problem with technology. If people will send bank details to Nigeria so that a nationality-abandoned astronaut can expatriate his back pay, there is no way to educate them on the difference between https://www.bankofamerica.com and http://www.bankomerica.com. The only useful path is to SSL encrypt all sites and try to verify them via a distributed trust mechanism as implemented by GPG (explicit chain of trust), Perspectives (wisdom of the masses), or Convergence (consensus of representatives); all of these seem infinitely more reliable than trusting any certificate registry, whether national or commercial, and as a bonus they escape the cert mafia by obviating the need for a central authority and the overhead it entails; but this only works if these tools have more valid positives than false positives, which is currently far from the case.
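For what it’s worth, the core of the Perspectives/Convergence idea is simple enough to sketch in a few lines of Python: trust a site’s cert only if several independent observers report seeing the same one. The notary fingerprints here are hypothetical placeholders; a real tool queries notary servers over the network and handles caching, key rotation, and so on.

```python
# Sketch of a Perspectives-style consensus check; notary data is hypothetical.
import hashlib
import ssl

def observed_fingerprint(host: str, port: int = 443) -> str:
    """SHA-256 fingerprint of the leaf cert as seen from this vantage point."""
    pem = ssl.get_server_certificate((host, port))
    return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

def consensus_ok(host: str, notary_fingerprints: list[str], quorum: float = 0.75) -> bool:
    """True if at least `quorum` of the notaries saw the same cert we see now."""
    mine = observed_fingerprint(host)
    agree = sum(1 for fp in notary_fingerprints if fp.lower() == mine)
    return agree / max(len(notary_fingerprints), 1) >= quorum

# Usage (fingerprints standing in for real notary responses):
# consensus_ok("www.bankofamerica.com", ["ab12...", "ab12...", "cd34..."])
```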
Further, law enforcement makes plausible arguments for requiring invisible access to communication. Ignoring the problematic but understandable preference for push-button access without review, and presuming that sufficient legal barriers are in place to ensure such capabilities protect the innocent and are only used for good, it is not rational to believe that law enforcement will elect to give up on demanding lawful intercept capabilities wherever possible. Such intercept is currently enabled by law enforcement certificates, which permit authorized MITM attacks to capture encrypted data without tipping off the target of the investigation. Of course, if the US has the tool, every other country wants it too. Sooner or later, even with the best vetting, there is a regime change and control of such tools falls into nefarious hands (much like any data you entrust to a cloud service will sooner or later be sold off in an asset auction to whoever can scrape some residual value out of it under whatever terms suit them, but that too is a different rant). Thus it is not reasonable for activists in democracy-challenged environments to assume that SSL certs are a secure way to ensure their data is not being surveilled. Changing the model from intrinsic, automatic trust of authority to a web-of-trust model would substantially mitigate the risk of lawful intercept certs falling into the wrong hands, though it would also make such certs useless, or at least far harder to implement.
There is no perfect answer to verification because remote authentication is Really Hard. You have to trust someone as a proxy, and the current model is to trust all or most of the random, faceless, profit- or nefarious-motive-driven certificate authorities. Where verification cannot be quickly made and is essential to security, out-of-band verification, such as transmitting a hash or fingerprint of the target’s cryptographic certificate via voice or postal mail (or perhaps via public key cryptography), is the only effective mechanism.
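Checking such an out-of-band fingerprint takes only a few lines; in this sketch the host and the expected fingerprint are placeholders for whatever your correspondent read to you over the phone or mailed to you.

```python
# Compare the cert a server actually presents against a fingerprint obtained
# out of band; the hostname and fingerprint below are placeholders.
import hashlib
import ssl

def cert_matches_out_of_band(host: str, expected_sha256: str, port: int = 443) -> bool:
    pem = ssl.get_server_certificate((host, port))
    seen = hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()
    return seen == expected_sha256.replace(":", "").replace(" ", "").lower()

# cert_matches_out_of_band("activist.example.org", "9F:2A:...:C1")
```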
Sadly, the effort to prop up SSL as a verification mechanism has been made at the compromise of widespread, low friction encryption. False security is being promoted at the expense of real security.
That’s just stupid.
Category: Technology
Google APIs Suck
Off-Site scripts are annoying.
To explain: I use NoScript (as everyone should) with Firefox (it doesn’t work with Chrome; I might consider trusting Google’s browser for some mainstream websites when it does, but I don’t really like that Chrome logs every keystroke back to Google, and I’m not sure why anyone would tolerate that). NoScript lets me give per-site permission to execute scripts.
The best sites don’t need any scripts to give me the information I need. It is OK if the whizzy experience is degraded somewhat for security’s sake, as long as that is my choice. Offsite scripting can add useful functionality, but the visitor should be able to opt out.
Most sites use offsite scripting for privacy invasion – generally they have made a deal with some heinous data aggregator whose business model is to compile dossiers of every petty interest and quirk you might personally have and sell them to whoever can make money off them: advertisers, insurance companies, potential employers, national governments, anyone who can pay. In return for letting them scrounge your data off the site, they give the site operator some slick graphs (and who doesn’t love slick graphs). But you lose. Or you block Google Analytics with NoScript. This was easy: either block offsite scripts if you’re not using private browsing, or switch to private browsing (and Chrome’s private browsing mode is probably fine) and enjoy the fully scripted experience.
But I’ve noticed recently that a lot of sites are borrowing basic functionality from Google APIs. Simple things like uploading images, for which there are plenty of open source scripts, are being sold to site operators in an easy-to-integrate form in exchange for your personal information: in effect, the site is paying for its code with your privacy. And you either have to temporarily allow Google APIs to execute scripts in your browser and suck up your personal information, or you can’t use the site.
Category: Politics • Technology