(First part of this rant here.)
If there was ever a time to be skeptical about what software is doing in your machine, this would be it.
As referenced in our previous instalment, this happened.
Now, before anything else, it might be a good idea to untwist our panties. And prune some of the misinformation that has been circulating. Only after that can we try to really sort this mess out.
Here’s the gist of it.
First of all, the CIA is not able to crack the encryption algorithms of apps like WhatsApp, Telegram and Signal. Which is why they went for the easier route of cracking the phones’ OSes instead. This can be done remotely.
Second, the CIA has the ability to crack computers (Windows, macOS and Linux) and Samsung Smart TVs, although nothing I’ve read so far indicates that this can be done in a purely remote fashion. Apparently, physical access to the target device is required in one form or another (I’m still waiting to see more details, though).
Third, it looks like these operations were not of the dragnet type (like the ones referenced in the Snowden leaks); rather, they were targeted operations.
Fourth, the large number of vulnerabilities in the CIA’s catalog is surprising, considering that the three-letter folk are not supposed to hoard vulnerabilities and keep them hidden from US companies, as set forth by the Vulnerability Equities Process (VEP).
Fifth, a large number of IP addresses corresponding to US locations were under active CIA monitoring. This is surprising because, with few exceptions, the CIA has jurisdiction only over foreign territories.
So what does this mean for the average Joe? If said Joe is American, it means that Joe can now “initiate the debate” about the possible overreach of the agency’s activities. He may also question the agency’s attitude towards the VEP and the US tech industry. There are “laws” which these bodies are supposed to “follow”, and then the citizen, with the help of the courts, is supposed to hold them accountable.
If said Joe is not American… Tough shit. As an awesome band once said, “We’re all living in America”. But we are not all American, and there’s a big difference.
See Joe, if you are not American, the US of A has no responsibility towards you as an individual. None. You are a data point. And in this case, the more interesting the data point is, the worse off the data point is likely to be.
In both cases, we can all see the pointlessness of this particular discussion.
First, because the American taxpayer is paying the CIA to do exactly what it is doing. Whether the taxes are well spent or not is a totally different discussion. The CIA is an intelligence agency, and as such, collects intelligence. They are, in essence, doing their job and doing it in a very successful fashion.
Second, for citizens of other nations: what are you complaining about? You bought the computer/phone/toaster knowing it was American. It can come with a backdoor from the factory itself, for all you care. Or maybe the device is Korean, but the OS most likely is not. Redmondites and Cupertinites cannot complain about this: you put your hand in the lion’s mouth, voluntarily.
It’s your rented, filthy kitchen, and you can’t blame the mice.
So just sit back, relax, enjoy your DRM’d content. If you are being spied on by the CIA (you probably aren’t – your bulk data is going to the NSA instead and there is no budget to do targeted surveillance on everyone), just rejoice in the fact that you are effectively not an interesting person, and the CIA will find nothing on you because there is nothing to find.
So what does all this actually mean? It means a couple of bad things. The first bad thing is that your devices, where you store your personal thoughts and private conversations, can, under certain conditions, reveal those private bits of information to parties you do not necessarily trust.
You might personally trust the CIA or the NSA (or not), but most of us would agree that a stolen computer is, by definition, in the hands of an untrusted party. If you’re a journalist, a physician, a therapist or an attorney, this is obviously a big problem. And the key facts about the list of vulnerabilities on the CIA menu are that they exist and that they are exploitable.
Vulnerabilities are not particularly selective about who exploits them. The usual fallacy accompanying these debates is that “only the CIA knew about said holes”. Which is unprovable at best, and false if we are really unlucky.
The other bad thing is that you don’t really have anyone to fully trust right now. Your device is not giving you an adequate amount of privacy protection, and let’s not even talk about cloud services, for brevity’s sake. Spy agencies broke their promise of sharing the holes with the tech industry so they could be fixed, and the tech industry continues to ship shitty software because “OMG quarterly statement! New Seed Round! YAS!”
Your options right now are to refuse technology, which is extremely inconvenient, or to live with the risks (which, to be fair, are not all that bad if you are already following good practices, but are unnecessary and could be avoided). In particular, for nationals of countries other than the US, who have no good reason to trust the American spy agencies by default (due to the lack of a bond, be it legal, patriotic or otherwise), it raises the question of “what is my own country doing to protect me in this case?” And we all know what the answer to that question is.
This one is mostly for the software engineers, developers and programmers (sysadmins, you are in the clear, but only today).
Here is a simple fact of life for us: we are writing crap software. But not just any crap. Crap of the dangerously negligent persuasion. Don’t give me the usual “all software has bugs” discourse, because it is not entirely true. And worse than spreading a lie is spreading one which creates in software users the belief that it is OK for software to have bugs, because that’s just the way things are. One software bug due to an honest programming mistake is not the same as three compounded bugs which can be chained to get root on the machine, all of which can be traced back to errors your compiler should be able to catch by itself.
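To put a face on “errors your compiler should be able to catch”, here is a tiny, hedged illustration in Rust (the variable names are mine and the scenario is generic, not taken from any real codebase): the silent narrowing and signed/unsigned mixing that C accepts without complaint becomes a compile-time error, and you are forced to spell out what happens when a value doesn’t fit.

```rust
// A made-up example of a compiler refusing a latent bug instead of shipping it.
fn main() {
    let len: usize = 5;

    // In C, comparing a signed int against an unsigned size_t, or implicitly
    // truncating a 64-bit value into a smaller type, compiles silently and
    // becomes a bug waiting for the right input. Here, the same mistakes are
    // hard errors; uncommenting either line stops the build:
    //
    //     let smaller: u8 = len;        // error: mismatched types
    //     if -1 < len { /* ... */ }     // error: can't compare i32 with usize
    //
    // You have to write the conversion out and decide what happens when the
    // value does not fit:
    let smaller = u8::try_from(len).expect("length does not fit in a u8");
    println!("{smaller}");
}
```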
There are well-known ways of writing software which does not break (or at least breaks in a predictable, non-destructive way). There are, for instance, languages which check array boundaries automatically to prevent buffer overflows (and hence, arbitrary code execution). Many of them. There are languages which, upon compilation, can prove that a program or code unit is coherent with its own specification. Languages which have embedded continuous testing constructs. Languages which allow you to write a massively scaled web service that is sane, secure, and does not need to be restarted every 5 minutes (true story, google “ruby” and “Zed Shaw” to get started). Languages which allow you to interactively inspect everything the program is doing and why (and I was told there are machines that can do this too).
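To make the bounds-checking and built-in-testing points concrete, here is a minimal sketch in Rust (chosen purely as one illustration; any language with the same guarantees would do, and the function name is invented for this example). An out-of-range access is refused instead of quietly reading whatever happens to sit next to the buffer, and the test harness ships with the toolchain.

```rust
// Reading a byte the safe way: `get` makes the bounds check explicit in the
// types, so the "off the end of the buffer" case cannot be forgotten.
fn read_byte(buf: &[u8], index: usize) -> Option<u8> {
    buf.get(index).copied()
}

fn main() {
    let buf = [1u8, 2, 3, 4];

    // In range: we get the byte back.
    println!("{:?}", read_byte(&buf, 2)); // Some(3)

    // Out of range: we get None back, not a memory disclosure.
    println!("{:?}", read_byte(&buf, 42)); // None

    // Even direct indexing is checked at runtime; this would panic cleanly
    // instead of corrupting memory or leaking a neighbour's secrets:
    // let oops = buf[42];
}

// "Embedded continuous testing constructs": the test runner is part of the
// language tooling, so `cargo test` runs this with zero extra setup.
#[cfg(test)]
mod tests {
    use super::read_byte;

    #[test]
    fn out_of_range_reads_are_refused() {
        assert_eq!(read_byte(&[1, 2, 3], 10), None);
    }
}
```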
To top it all off, there are methods of all sorts to avoid software development pain, but the teaching institutions are terrible at passing on this knowledge because they lack real-world experience and, in general, are too slow to pick up on the changes. The industrial institutions, on the other hand, have the environment where this is needed and can be practiced, but because management is more interested in bottom lines than in the morale of their “warriors” (something I actually heard at one point), and because they see software development as just another kind of brick-laying, you are left to your own devices to figure out what real software engineering is about.
You read about these hotshots who can swap two memory locations without a temporary variable and implement quicksort by heart (why?), and pretty soon you are indoctrinated to think “garbage collectors are for pussies”, that all that overhead will make your program slow, and that programming in C is the shit. If your program is more than 5K SLOC of C, it probably is the shit. Just not in the way you think. The same applies to C++. A turd with class is still a turd. There’s a reason the suckless crowd write software that small, with such a limited feature set: using these types of tools to build software, particularly the ones which are hostile to modern testing practices by their very nature, requires that the entire spec of the program fit in one’s head while developing, lest the developer fall into a small distraction and create another Heartbleed or Cloudbleed. Sane design, as they define it, calls for a ruthless ability to refuse adding features.
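Since Heartbleed came up: at its core it was an attacker-supplied length that the code simply trusted. Here is a hedged sketch of that bug class (the names and structure are invented for illustration, not taken from OpenSSL), again in a bounds-checked language, where the over-read is refused instead of leaking whatever lives next to the buffer.

```rust
// A Heartbleed-shaped echo handler: the peer claims how many bytes it sent,
// and a naive implementation copies `claimed_len` bytes back out of the
// receive buffer without questioning the claim.
fn echo_payload(recv_buf: &[u8], claimed_len: usize) -> Result<Vec<u8>, &'static str> {
    // The check Heartbleed was missing, stated once, explicitly:
    if claimed_len > recv_buf.len() {
        return Err("claimed length exceeds what was actually received");
    }
    Ok(recv_buf[..claimed_len].to_vec())
}

fn main() {
    let received = b"hello";

    // Honest peer: gets back exactly what it sent.
    assert_eq!(echo_payload(received, 5).unwrap(), b"hello".to_vec());

    // Lying peer: asks for 64 KB back from a 5-byte buffer. In unchecked C
    // the memcpy walks off the end and returns whatever is adjacent in memory
    // (keys, passwords, other people's sessions); here it is rejected. Even
    // without the explicit check, the slice `&received[..65_535]` would panic
    // instead of leaking memory.
    assert!(echo_payload(received, 65_535).is_err());

    println!("no secrets were leaked in the making of this example");
}
```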
I’ll pause here a moment to semi-exclude from this the embedded developers and those who build real-time systems, who obviously don’t have many choices in terms of their tools. For now. Available memory and resources tend to go up, so this will not be an excuse in the future. And obviously, the fact that you are using C or C++ by necessity does not mean you are excused from producing correct code which either works as intended, or not at all. It merely excuses you from the choice of a better platform, because currently there isn’t one. The IoT security mess is still there to be cleaned up. And the mop has your name on it.
For everyone else, there are no excuses at all. RAM is cheap, CPU is cheap, disk is cheap and bandwidth is cheaper. We have all the elements needed to build sane systems that are uncomplicated for humans, even if they are complicated for the computer to deal with (that is why we build computers in the first place: to deal with complicated computations), and that do what the user intended and nothing more. They would perhaps lack a couple of “features”, but they wouldn’t put the users’ privacy at risk.
But of course, in the “real world”, we can’t have that, because the user wants content (or marketing says they do, anyway) and a red squiggly mark beneath their misspelled words. The content must be decrypted with a half-tested blob and toy ciphers that can’t be audited. The squiggly (which is in fact a useful feature) must be implemented as a “temporary” hack because the previous “engineers” (who are now VPs of Sales, go figure) fucked up the graphics implementation, as they “were on a tight deadline”. As a result, the API is terrible and the implementation is worse. And of course, the lump has no testing framework whatsoever, so you put off the much-needed refactoring because you fear it will just break everything. You are now forced to write your own little lumps on top of this big lump because of “backwards compatibility” (which is usually backwards only, and not compatible in any way).
And before the flag-bearers of the status quo come to ask, “Well, what are you going to do? Come and fix all the security issues in our software, then?”, let me remind you that such a proposition makes no sense. A civil engineer pointing out a flaw in the development process of a couple hundred buildings would never be asked to go in and fix the flaws themselves. They would probably be asked for input, and thanked (or possibly paid) for it instead. Handling the flaws in the process is still the responsibility of someone else (whoever signed off on the particular building). The security vulnerabilities introduced by your code are your responsibility. No one else’s.
The real cure for this is that we, as a profession, stop being spineless minions and learn to say no. I can’t count how many times I’ve heard some responsibility-dismissing comment along the lines of “management wants it like that” or “I just write the code”. Well, as we’ve seen from the leaks, management does not know (or does not care, which is worse) about the consequences, and the code you “just wrote” is now putting millions of people at risk.
Next time, refuse to write broken software from the start. That’s how messes like these eventually get cleaned up, because doing it the other way around is how we got into the mess in the first place. Especially when the code you write is used by millions of people every day.