IETF Security director Stephen Farrell offers a report card on evolving defences
FEATURE After three years of work on making the Internet more secure, the Internet Engineering Task Force (IETF) still faces bottlenecks: ordinary people’s perception of risk, sysadmins worried about how to manage encrypted networks, and – even more than state snooping – an advertising-heavy ‘net business model that relies on collecting as much information as possible.
In a wide-ranging 45-minute, 4,000-word interview (full transcript in this PDF), IETF Security Area Director Stephen Farrell offered a report card on what’s happened since the Internet Architecture Board declared that “pervasive monitoring is an attack” in RFC 7258. Much of the discussion drew on Farrell’s presentation to the NORDUnet conference in September (slides here).
Let’s boil the ocean, so we can cook an elephant. And eat it.
Given the sheer scale of the effort involved – the IETF’s list of RFCs passed the 8,000 mark in November – nobody expected the world to get a private Internet quickly, but Farrell told The Register some of the key in-IETF efforts have progressed well: its UTA (Using TLS in Applications), DPRIVE (DNS Privacy), and TCPINC (TCP INCreased security, which among other things is working to revive the tcpcrypt proposal rejected earlier in the decade).
UTA: The idea is to get rid of the nasty surprises that happen when someone realises a standard (and therefore code written to that standard) still references a “laggard” protocol – so that, for example, nobody gets burned complying with a standard that happens to reference a deprecated SSL or TLS version.
“The UTA working group produced RFC 7525 (Recommendations for Secure Use of Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS), https://tools.ietf.org/html/rfc7525). The last time I looked, there were something like 50 RFCs that are referencing that [The Register checked this list, provided by Farrell – it seems to be close to 70 already].”
The idea of UTA is that a protocol written 10 or 15 years ago should be updated so it no longer references the then-current version of TLS, he said.
“That’s being used in order to provide a common reference: as people update their implementations, they’ll reference a more modern version of TLS, currently TLS 1.2, and as TLS 1.3 is finished, we have an automated-ish way of getting those updates percolating through to the documentation sets.
“That’s quite successful, I think, because it normalises and updates and modernises a bunch of recommendations.”
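To make that concrete, here is a minimal sketch of what following RFC 7525-style recommendations looks like in application code, using Python’s standard ssl module (Python 3.7 or later assumed; the host name is a placeholder): certificate verification stays on, and anything older than TLS 1.2 is refused.

```python
# A minimal sketch of RFC 7525-style client settings using Python's
# standard-library ssl module (3.7+). "example.org" is a placeholder.
import socket
import ssl

ctx = ssl.create_default_context()             # verification on, sane defaults
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse SSLv3 / TLS 1.0 / 1.1

with socket.create_connection(("example.org", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.org") as tls:
        print(tls.version())                   # e.g. 'TLSv1.2' or 'TLSv1.3'
```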
DPRIVE: Readers will recall that IETF 97 was the venue for the launch of Stubby, a demonstrator for securing DNS queries from the user to their DNS resolver.
That, Farrell said, is a good example of where DPRIVE is at – on the user side, it’s ready for experimental code to go into service.
“DNS privacy is something that is ready to experiment with. The current work in DPRIVE was how to [secure] the hop between you and the next DNS provider you talk to.
“That’s an easy problem to tackle – you talk to that DNS resolver a lot, and you have some shared space, so the overhead of doing the crypto stuff is nowhere.”
Getting upstream to where DNS queries become recursive – your ISP can’t answer, so they pass the query upwards – is much harder, he said.
“Assuming that [the ISP] needs to find ‘where is theregister.co.uk?’, he’ll eventually talk to the UK ccTLD, and then he’ll go talk to .co.uk and then he’ll go talk to theregister.co.uk – it’s forking the communications a lot more, and it’s a little harder to see how to efficiently amortise the crypto.
“The DPRIVE working group are now examining whether they think they can produce some technology that will work for that part of the problem.”
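The hop Stubby secures is the one DPRIVE has already standardised as DNS-over-TLS (RFC 7858). A minimal sketch of that wire format in Python – the resolver address and certificate name below are assumptions, standing in for any DNS-over-TLS-capable resolver – sends an ordinary DNS query over TLS to port 853, with the two-byte length prefix that DNS-over-TCP framing requires:

```python
# A sketch of the stub-to-resolver hop DPRIVE standardised as
# DNS-over-TLS (RFC 7858): a plain DNS query, framed with a two-byte
# length prefix (RFC 1035 TCP framing), sent over TLS to port 853.
# The resolver (9.9.9.9 / dns.quad9.net) is an assumption; substitute
# any DoT-capable resolver.
import socket
import ssl
import struct

def build_query(name: str, qtype: int = 1) -> bytes:
    """Encode a minimal DNS query for `name` (qtype 1 = A record)."""
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)  # RD=1, 1 question
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)         # class IN

query = build_query("theregister.co.uk")
ctx = ssl.create_default_context()

with socket.create_connection(("9.9.9.9", 853)) as sock:
    with ctx.wrap_socket(sock, server_hostname="dns.quad9.net") as tls:
        tls.sendall(struct.pack("!H", len(query)) + query)
        (length,) = struct.unpack("!H", tls.recv(2))
        print(f"resolver answered with {length} bytes of DNS response")
```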
TCPINC: Some of the questions in this working group may never be seen by ordinary Internet users, but they’re still important, Farrell said.
“I think we’re close to having some tcpcrypt-based RFCs issued; there’s been code for that all along. Whether or not we’ll get much deployment of that, we’ll see.”
“I think there are a bunch of applications that maybe wouldn’t be visible to the general public. Let’s say you have an application server that has to run over a socket – an application that runs on top of the Linux kernel, say, where you have to use the kernel because of the interfaces involved, and you can’t provide the security above the kernel because you need it inside.
“That’s where TCPINC fits in. Storage – they have really complex interfaces between the network-available storage server and the kernel, and there’s lots of complex distributed processing going on.”
That’s important to “the likes of NetApp and EMC and so on”, he said: “For some of those folks, being able to slot in security inside the kernel, with TCPINC, is attractive. Some, I might expect, will adopt that sort of thing – but it may never be seen on the public Internet.”
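A sketch to illustrate the argument rather than any particular API (the host, port, and one-line wire protocol below are invented for illustration): this is ordinary kernel-socket code with no TLS library in sight. The TCPINC pitch is that on a tcpcrypt-capable kernel, exactly this code would gain opportunistic TCP-layer encryption without the application changing at all.

```python
# Ordinary kernel-socket code, no TLS layer. TCPINC's point is that
# in-kernel, tcpcrypt-style encryption would protect this traffic with
# no change to the code below. Host, port, and the one-line protocol
# are invented for illustration.
import socket

with socket.create_connection(("storage.example.internal", 7000)) as sock:
    sock.sendall(b"READ block-42\n")   # application protocol, unchanged
    print(sock.recv(4096))             # any TCP-layer crypto happens in-kernel
```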
Security and the end-to-end model
Farrell said more encryption is changing the Internet in ways the general public probably doesn’t think about – but which they’ll appreciate.
The old end-to-end model – the “neutral Internet” – has been under both overt and covert attack for years: carriers want to be more than passive bit-pipes, so they look for ways that traffic management can become a revenue stream; while advertisers want access to traffic in transit so they can capture information and inject advertisements.
Ubiquitous encryption changes both of these models, by re-empowering the endpoints.

Along the way, perhaps surprisingly, Farrell sees this as something that can make innovation on the Internet more democratic.
He cited HTTP/2 and QUIC as important non-IETF examples: “there’s a whole bunch of people motivated to use TLS almost ubiquitously, not only because they care about privacy, but because of performance: it moves the point of control back towards the endpoint, not the middle of the network.
“One of the interesting and fun things of trying to improve the security properties and privacy properties of the network is that it changes who controls what.
“If you encrypt a session, nobody in the middle can do something like inject advertising.
“It reasserts the end-to-end argument in a pretty strong way. If you do the crypto right, then the middlebox can’t jump in and modify things – at least not without being detectable.”
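That “detectable” property is what authenticated encryption buys. A minimal sketch using the third-party Python cryptography package: flip a single bit of an AES-GCM ciphertext, as a meddling middlebox would have to, and decryption fails outright rather than quietly yielding modified data.

```python
# Why on-path modification is detectable when the crypto is done right:
# AES-GCM authenticates the ciphertext, so a single flipped bit makes
# decryption raise InvalidTag instead of returning altered plaintext.
# Uses the third-party `cryptography` package.
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
aead = AESGCM(key)

ct = aead.encrypt(nonce, b"original page, no injected ads", None)
tampered = ct[:-1] + bytes([ct[-1] ^ 0x01])   # "middlebox" flips one bit

try:
    aead.decrypt(nonce, tampered, None)
except InvalidTag:
    print("modification detected: decryption refused")
```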
He argues that carriers’ and network operators’ “middleboxes” became an innovation roadblock.
“The real downside of having middleboxes doing things is that they kind of freeze what you’re doing, and prevent you innovating.
“One of the reasons people did HTTP2 implementations, that only ever talk ciphertext, is because they found a lot of middleboxes would break the connection if they saw anything that wasn’t HTTP 1.1.
“In other words, the cleartext had the effect that the middleboxes, that were frozen in time, would prevent the edges from innovating. Once they encrypted the HTTP2 traffic, the middleboxes were willing to say ‘it’s TLS so I won’t go near it’, and the innovation can kick off again at the edges.”
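That negotiation happens via ALPN, the TLS extension HTTP/2 implementations use to agree on a protocol during the handshake. A minimal sketch with Python’s standard ssl module (the host is a placeholder for any HTTP/2-speaking server): once “h2” is selected, everything that follows is ciphertext a frozen-in-time middlebox will steer clear of.

```python
# How HTTP/2 endpoints agree on a protocol inside TLS: offer "h2" via
# ALPN during the handshake; all subsequent HTTP/2 frames travel as
# ciphertext, so HTTP/1.1-only middleboxes leave them alone.
# "example.org" is a placeholder.
import socket
import ssl

ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])   # prefer HTTP/2

with socket.create_connection(("example.org", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.org") as tls:
        print("negotiated:", tls.selected_alpn_protocol())  # 'h2' if supported
```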
Won’t somebody think of the sysadmin?
Systems administrators – in enterprises as well as in carriers – are less in love with crypto.
“Network management people have been used to managing cleartext networks,” he said.
For more than 20 years, for perfectly legitimate reasons – and without betraying their users – sysadmins would look into packets, see what they contained, and when sensible do something about them.
“Not for nefarious reasons – in order to detect attacks, in order to optimise traffic, and so on. We’re changing that, and that also means the technology they’re using will be undergoing change, to deal with much more ciphertext than plaintext.
“We need to learn better ways of how to fulfil those same functions on the network,” he said.
“If you had some security mechanism in your network for detecting some malware attack traffic, instead of being able to operate that from the middle of the network, it pushes a requirement on you to move that to the edge.”
Commercial services are starting to understand how this can work, he said: “If you look at some of the commercial instant messaging providers, that have introduced end-to-end encryption of their messaging – they have found they can move those functions in their networks to new places to do what they need to do.
“It means change, but it doesn’t make network management impossible.”
Advertising models will change
Companies collaborating to collect advertising data remains a big challenge, he said.

That’s likely to change – “there’s no reason why a particular business model has to last forever” – but in the meantime, “it’s hard to see how we make a dramatic improvement in privacy.
“We can make some improvements, but how we make it dramatically better – it’s hard. The incentives are aligned to make all the service providers want to be privacy-unfriendly, from the point of view of ‘me’, but not perhaps from the point of view of the 99 per cent of people who use the Internet, and seem happy enough with it.”
Breaches and leaks are frightening the service providers, which helps, because providers “realise that storing everything, forever, is toxic, and in the end they’ll get caught by it.”
About the cough NSA cough
The Register also asked: what protects future standards against security organisations polluting them, as they did with the Dual_EC_DRBG random number generator?
“As an open organisation, we need to be open to technical contributions from anywhere,” Farrell said, “be that an employee of the NSA, or be that – as we’ve had in one case – a teenager from the Ukraine who was commenting on RFCs five or six years ago.”
It has to be handled socially, rather than by process, he argued, citing the creation of the Crypto Forum Research Group (CFRG, run under the IETF’s research arm, the IRTF), chaired by Alexey Melnikov and Kenny Paterson and designed to bring together IETF standards authors and the academic crypto community.
He described it as a “lightweight process” designed to assess crypto proposals – have they been reviewed? Is the proposal novel and maybe not ready for prime time?
“The number of NSA employees that attend IETF [meetings] – I don’t think it’s a useful metric at all. I think how well people’s contributions are examined is a much more useful metric, and there, things like having the CFRG, having academic cryptographers interacting much more with the standards community – those are more effective ways of doing that.
“We’ve set up a thing called the Applied Networking Research Prize, which is a prize for already-published academic work. It pays for the academic to come to an IETF meeting, give us a talk, and it gets them involved” (Paterson first became involved in the CFRG as an invited academic who won the prize).
Spooks want to monitor everyone because they believe everyone might be guilty, he added, and that’s a mistake.
“We should not think people are guilty by association. That’s a fallacy – if you believe that NSA employees are not allowed to contribute, you’re making the same mistake they’re making.” ®
