Now the French and Germans are at it - wanting to ban true end to end encryption. See article.
It is not complicated - such things can never work as intended and can only harm normal people.
Please spend 4 minutes listening to me try, once again, to explain it - and tell people.
Adrian - Just imagine for a moment that instead of running A&A, you were working at GCHQ (or the French equivalent) and you had been tasked with writing surveillance software to help the authorities work out what the bad guys are doing and who they are talking to. You might well have access to terabytes of intercept data to sift through every day, so you write your software to find the interesting connections or people with a certain profile. With your excellent network technical knowledge, I imagine that you would actually be quite good at this. Imagine now that you have been doing this for some time and had actually been quite successful at finding useful information that directly led to lives being saved. In this position, wouldn't you be pushing for these powers too?
I spoke to someone who works on this stuff in government earlier this summer and he told me that current mass surveillance *IS* producing useful information for them that is helping them catch terrorists. He told me that it has directly led to terrorist atrocities being stopped, and this is why they are pushing for it to be put on a proper legal footing in parliament.
I think you are missing the point when you say that terrorists can use end to end encryption - Technically, you are right of course, but if iMessage could be decrypted by Apple and hence was no longer completely secure for terrorists to use, it would help the authorities: Either the terrorists are lazy and use iMessage et al anyway (in which case the authorities could ask the party in the middle to decrypt it) or by using their own end-to-end encryption, the terrorists would be demonstrating a new type of activity to help the authorities work out who they need to look at more closely. The manual paper encryption you refer to of course works, but you don't mention that this form of encryption is cumbersome and therefore not very practical to use for a high volume of routine day to day communications.
Most of the intelligence game is about working out who is talking to whom and who the people in the conspirator network are, not just what the content of the messages is.
I don't agree with you that changing the encryption so that a central service provider like Apple, Microsoft or Facebook could decrypt the traffic would meaningfully weaken security for everyday users. Yes, in theory, it is a bit weaker than end to end encryption, but it is very unlikely that the email spam criminals you refer to in your video would have access to the decryption keys, so in practice it would only be very slightly less secure than the end to end encryption we use today.
There would need to be careful design of the management of decryption keys by central parties so that no individual corrupt employee at one of these central parties could get hold of them, but designing a secure method for accessing the decryption key should not be too difficult for them.
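To make that concrete, here is a minimal sketch of the kind of provider-decryptable scheme being described (purely illustrative, using Python with the PyNaCl library; it is not how Apple, Microsoft or Facebook actually implement anything, and all keys are generated on the spot). The message is encrypted with a random content key, and that key is then wrapped twice: once for the recipient and once for the provider's escrow key, which is exactly the extra thing that would have to be protected:

import os
from nacl.public import PrivateKey, SealedBox
from nacl.secret import SecretBox

# Illustrative key pairs only: the recipient's own key and the
# provider's escrow key that the proposal would require.
recipient = PrivateKey.generate()
provider_escrow = PrivateKey.generate()

def send(plaintext):
    # Encrypt the message body with a fresh random content key.
    content_key = os.urandom(SecretBox.KEY_SIZE)
    body = SecretBox(content_key).encrypt(plaintext)
    # Wrap the same content key for the recipient AND for the provider.
    return (body,
            SealedBox(recipient.public_key).encrypt(content_key),
            SealedBox(provider_escrow.public_key).encrypt(content_key))

body, for_recipient, for_provider = send(b"hello")

# The recipient can read it...
key = SealedBox(recipient).decrypt(for_recipient)
assert SecretBox(key).decrypt(body) == b"hello"

# ...and so can anyone holding the provider's escrow key, which is why
# the security of the whole system now rests on how that key is guarded.
key = SealedBox(provider_escrow).decrypt(for_provider)
assert SecretBox(key).decrypt(body) == b"hello"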
The majority of the normal non-technical people I have spoken to want to keep using iPhones and secure messaging, but they also want the authorities to catch the terrorists and prevent their atrocities. They do not have an issue with mass surveillance where it is used under proper legal authority to help the authorities catch terrorists. They mostly feel that there is nothing terribly interesting in their private data and that the authorities could look there, if they want to and are suitably legally authorised to do so.
The authorities are under huge pressure to capture the terrorists that would quite happily kill all of us, given the opportunity to do so. Whilst I don't think we should give carte blanche to the authorities to look at anything anytime they want to (as this would be subject to too much abuse), I feel we do need to help them do their job to protect the country, with the right legal controls in place.
One of the issues is that huge pressure. Terrorists are criminals and need our attention, but they are statistically quite a minor threat to life compared to many other things. What is worse is, the name is a clue, they are trying to create "terror" and we seem to be falling for it. If terrorist deaths were reported at the same level as car related deaths then most of the country would not even know the word "terrorist". End to end encryption is an important and necessary tool against criminals and weakening that is bad. The other problem is that simple things like https are end to end, just that one end is a web server. You could easily use that protocol to talk to a phone and so make your app look completely innocent. There are many other ways to hide the encryption. If a terrorist group made an app and had any sense, the use of the app would be pretty much undetectable. You also have the huge hole in the proposals that there need not be an "over the top provider" to target with laws. If an app is open source, who exactly does the warrant requiring decryption get served on?
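To make the hiding-in-plain-sight point concrete, here is a minimal sketch (assuming Python with the PyNaCl and requests libraries, and a made-up https://example.com/drop endpoint) of an app doing its own end to end encryption and then shipping the ciphertext over an ordinary HTTPS POST, so on the wire it is indistinguishable from any other app talking to a web server:

import requests
from nacl.public import PrivateKey, Box

# For the sketch both key pairs are generated here; in practice each
# party keeps its own private key and only public keys are exchanged.
alice = PrivateKey.generate()
bob = PrivateKey.generate()

# Alice encrypts for Bob; only Bob's private key can decrypt.
ciphertext = Box(alice, bob.public_key).encrypt(b"meet at the usual place")

# The ciphertext rides over a perfectly normal HTTPS request, so the
# traffic looks like any other web browsing (the URL is made up).
requests.post("https://example.com/drop", data=bytes(ciphertext))

# The web server at the far end of the HTTPS connection only ever sees
# ciphertext; decrypting the TLS layer reveals nothing useful.
assert Box(bob, alice.public_key).decrypt(ciphertext) == b"meet at the usual place"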
Delete" spoke to someone who works on this stuff in government earlier this summer and he told me that current mass surveillance *IS* producing them useful information that is helping them catch terrorists. He told me that it has directly led to terrorist atrocities being stopped and this is why they are pushing for it to be put on a proper legal footing in parliament."
DeleteWell, they would say that, wouldn't they?
I mean, sure maybe everyone who has access to surveillance data at GCHQ or the NSA is a saint. But maybe they include some bad guys who just want the powers in order to gain personal power, and we cannot tell the difference based on the unsubstantiated claims they make. Whenever an agency like this has been forced to produce real evidence or back down, they've backed down or sent up a smokescreen of minor cases that could have been solved with honest police work. That is insufficient evidence on which to force the entire country to give up the use of opaque envelopes.
I've also spoken to people who work on this stuff in government (indeed, I considered a job working on it, but declined as the salaries on offer were laughable). They do believe that it *is* producing useful information, but they don't have proper metrics for it; they know how much of their output is used by anti-terror police, and that's about it for metrics.
So, there's no false positive or false negative rate; there's no attempt made to determine how often their data is necessary to an investigation (i.e. where without the mass surveillance data, the investigation would not have begun, or not have successfully concluded). They don't know what the precision, specificity, accuracy or recall of their models actually is against the wider population, because they don't track that, either - they're only tracking against persons of interest.
Now, it's possible that they lied to me about what they do track - but in that case, why should I trust their assertion that they're beneficial?
Taking the claims at face value, they don't know at least two things that I consider extremely important:
1) How often are they responsible for actually stopping someone bad? I'm conflating two things here - one is when they trigger an investigation that would not otherwise happen, the other is when they provide the "straw" of evidence that "breaks the camel's back" of the presumption of innocence.
2) How often do they flag up someone harmless as a terrorist, only to be filtered out downstream? It's great and all if you flag every terrorist - but I'm not sure it's beneficial if you also flag 10 million innocent people (a rough worked example is sketched below).
Neither of these is a question they could answer at all, even in vague terms, to someone they had security-cleared for interview.
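To put illustrative numbers on that second point, here is a back-of-envelope sketch in Python. The population figure, number of genuine threats, recall and false positive rate are all made-up assumptions, chosen only to show how the base rate dominates: even a remarkably accurate model flags far more innocent people than real suspects.

# All numbers below are illustrative assumptions, not real figures.
population = 60_000_000      # roughly the UK population
real_threats = 3_000         # assumed number of genuine persons of interest
recall = 0.99                # assumed: fraction of real threats correctly flagged
false_positive_rate = 0.001  # assumed: 0.1% of innocent people wrongly flagged

true_positives = real_threats * recall
false_positives = (population - real_threats) * false_positive_rate
precision = true_positives / (true_positives + false_positives)

print(f"real threats flagged:    {true_positives:,.0f}")    # ~2,970
print(f"innocent people flagged: {false_positives:,.0f}")   # ~60,000
print(f"precision: {precision:.1%}")  # ~4.7%: most people flagged are innocent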
It doesn't really matter whether the app is open source or not - clearly, if the authorities can't find someone to serve a warrant on to decrypt the traffic, then they won't get it decrypted. If the app is using a central server, the IP address of that server would still provide useful information for the authorities to search for in the IP traffic streams of other users. What the authorities need to find is activity markers in the data streams that help them identify the people they need to look at more closely.
At present, it is much easier for the terrorists to make themselves look like any other normal user and to hide their activities amongst all the traffic going to Facebook / Twitter / iMessage etc. If this option became unsafe for the terrorists to use (because it could be decrypted by the likes of Facebook / Twitter / Apple etc. by law) then they would be forced to use other means of communication which would inevitably be different to what ordinary members of the general public are using. As mentioned previously, this would not in reality make those services significantly less safe for ordinary users than they are today.
You seem to suggest that we should just ignore the terrorists, because the number of deaths they cause is too small to matter. Given that ISIS want a worldwide caliphate, such an approach would surely lead them to ultimately succeed. The authorities are already stopping lots of attacks and the new surveillance powers are about enabling them to continue with that success and thus keep successful terrorist attacks to a minimum.
With regard to car-related deaths, the authorities have been continually trying to deal with those by making cars and roads progressively safer - We have lots of law for this mandating safety features in cars, outlawing drink driving, banning use of mobiles while driving etc. There were 1,713 road-related deaths in the UK in 2013 and this is down from 3,400 in 2000. By way of comparison to terrorists, the 9/11 attack in the USA resulted in 2,996 deaths, so this demonstrates that terrorist activity is already at levels comparable to road-related deaths and of course this would be much higher if the authorities were not actively working to prevent terrorist attacks.
We all have to give up a certain amount of privacy to safely live in a civilised country. We are subject to the rule of law and have to disclose information to the authorities when necessary (e.g. our income for tax collection purposes). If the authorities need mass surveillance capabilities to help them hunt down terrorists, is it really reasonable to deny them this capability?
At the end of the day, the volume of the data they collect will mean that it can only realistically be crunched by computers, so your personal data will remain very unlikely to be looked at in detail by a human working for the government, unless your on-line profile has all the hallmarks of a terrorist, in which case they will apply for suitable authorisation and then have lots of people looking over your personal data in detail.
Are the proposed powers really much different to carrying out covert surveillance of a house and then getting a search warrant to enter and search the property, once suspicion has been raised sufficiently?
Your stats are skewed a bit: one year of UK road deaths is close to the total terrorist deaths in the US over 20 years (and that total is largely down to one event), which in turn is similar to US road deaths in a single month. And I did not say ignore terrorists. They do not, however, need the level of attention, invasion of privacy and "terror" they get. It will be interesting when Apple makes access to iMessage by apps easier, as that will allow apps to do end to end encryption and then use iMessage as the transmission medium, so again, hiding in the noise - only if you ordered a decryption of iMessage would you then see that you cannot actually read the message. There are also plenty of ways to convey encrypted messages without a central server for that purpose, and ways to make it hard to see who is accessing messages (i.e. to track the metadata).
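As a rough sketch of that layering idea (assuming Python with the cryptography library and a key the two apps have already agreed between themselves; the carrier could be iMessage or anything else), the app seals the payload itself before handing it to the carrier, so even a lawful decryption of the carrier's own layer only ever yields an opaque blob:

import base64, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustrative pre-shared key agreed between the two apps out of band.
key = AESGCM.generate_key(bit_length=256)

def wrap(plaintext):
    # Encrypt before the carrier ever sees the message.
    nonce = os.urandom(12)
    sealed = AESGCM(key).encrypt(nonce, plaintext, None)
    # Base64 so it can be dropped into any ordinary message body.
    return base64.b64encode(nonce + sealed).decode()

def unwrap(body):
    raw = base64.b64decode(body)
    return AESGCM(key).decrypt(raw[:12], raw[12:], None)

# This blob is all the carrier (iMessage, email, whatever) transports,
# and all that a compelled decryption of the carrier layer would reveal.
blob = wrap(b"hello")
print(blob)
assert unwrap(blob) == b"hello"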
If I were a terrorist, tell me why the following method of communication wouldn't avoid detection, even without any encryption:
I set up an email client, such as Microsoft Outlook, on my computer and then allow my terrorist associates to remotely access my computer. All messages are then simply placed in the outbox or draft messages of my email client. Provided everyone else uses a secure VPN connection when remotely logging in to my computer to read or leave a message, there should be nothing incriminating in any ICRs that the ISP may have been ordered to retain, and no messages are ever sent over the internet. How will the stupid IP Bill prevent that or similar subterfuge?
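For what it's worth, a closely related variant of that dead-drop idea can be done over IMAP rather than remote desktop access: both parties log in to a single shared mailbox and leave messages in the Drafts folder, so nothing is ever actually sent as an email. A minimal sketch using only Python's standard library (the host and credentials are made-up placeholders):

import imaplib
from email import message_from_bytes
from email.message import EmailMessage

# Hypothetical shared account, used purely for illustration.
HOST, USER, PASSWORD = "mail.example.com", "dropbox@example.com", "secret"

def leave_note(text):
    # Append a message straight into Drafts; it is never sent anywhere.
    msg = EmailMessage()
    msg["Subject"] = "shopping list"
    msg.set_content(text)
    imap = imaplib.IMAP4_SSL(HOST)
    imap.login(USER, PASSWORD)
    imap.append("Drafts", None, None, msg.as_bytes())
    imap.logout()

def read_notes():
    # The other party logs in (over a VPN, say) and reads the drafts.
    imap = imaplib.IMAP4_SSL(HOST)
    imap.login(USER, PASSWORD)
    imap.select("Drafts")
    _, data = imap.search(None, "ALL")
    notes = []
    for num in data[0].split():
        _, fetched = imap.fetch(num, "(RFC822)")
        notes.append(message_from_bytes(fetched[0][1]).get_payload())
    imap.logout()
    return notes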