
Tag: Internet Engineering Task Force (IETF)

TLS over HTTP? Yes please, says every sysadmin, netizen
The Internet Engineering Task Force (IETF) has just put out a new draft for a standard that would enable folks to effectively bypass surveillance equipment on their networks to maintain secure con...
Proposal offers proper authentication, verification and over-the-air delivery
A trio of ARM engineers have devoted some of their free time* to working up an architecture to address the problem of delivering software updates to internet-connected things...
Proposal to extend Error 451
After a long campaign, the Internet Engineering Task Force (IETF) has decided that users deserve to know why pages were blocked and created HTTP error 451. Now the body will consider a proposal to extend it to give users mo...
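For readers who haven't met it: status 451 ("Unavailable For Legal Reasons", RFC 7725) also defines a machine-readable "blocked-by" link relation naming the entity implementing the block, which is the sort of detail the extension proposal builds on. A minimal Python sketch, with an invented blocking-authority URL and wording:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class BlockedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 451 Unavailable For Legal Reasons, plus the "blocked-by" link
        # relation defined alongside it in RFC 7725. The URL below is a
        # placeholder, not a real blocking authority.
        self.send_response(451, "Unavailable For Legal Reasons")
        self.send_header("Link", '<https://authority.example>; rel="blocked-by"')
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Blocked by legal demand; see the Link header.\n")

HTTPServer(("127.0.0.1", 8451), BlockedHandler).serve_forever()
```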
Open Shortest Path First (OSPF) protocol implementations may improperly determine Link State Advertisement (LSA) recency for LSAs with MaxSequenceNumber.

Attackers with the ability to transmit messages from a routing domain router may send specially crafted OSPF messages to poison routing tables within the domain.
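As context for that advisory, here is a rough Python sketch of the instance-recency comparison RFC 2328 (section 13.1) prescribes; field names are illustrative, not taken from any real router. The advisory concerns implementations that take shortcuts in this logic when an LSA carries MaxSequenceNumber, letting a crafted instance masquerade as one the domain has already seen:

```python
from dataclasses import dataclass

MAX_SEQUENCE_NUMBER = 0x7FFFFFFF  # highest LS sequence number (RFC 2328)
MAX_AGE = 3600                    # seconds; an LSA this old is being flushed
MAX_AGE_DIFF = 900                # seconds; tolerance for age comparisons

@dataclass
class LSAHeader:
    ls_seq: int       # signed 32-bit LS sequence number
    ls_checksum: int  # Fletcher checksum of the LSA contents
    ls_age: int       # seconds since the LSA was originated

def newer(a: LSAHeader, b: LSAHeader) -> int:
    """Return 1 if a is more recent, -1 if b is, 0 if the same instance."""
    if a.ls_seq != b.ls_seq:
        return 1 if a.ls_seq > b.ls_seq else -1
    # Same sequence number: the checksum must break the tie. Skipping this
    # step for MAX_SEQUENCE_NUMBER instances is exactly the kind of shortcut
    # the advisory warns about: a forged payload could then pass as a
    # duplicate of the legitimate LSA.
    if a.ls_checksum != b.ls_checksum:
        return 1 if a.ls_checksum > b.ls_checksum else -1
    if (a.ls_age == MAX_AGE) != (b.ls_age == MAX_AGE):
        return 1 if a.ls_age == MAX_AGE else -1
    if abs(a.ls_age - b.ls_age) > MAX_AGE_DIFF:
        return 1 if a.ls_age < b.ls_age else -1
    return 0  # considered the same instance
```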
Draft blends asymmetric public/private key encryption and one-time pad analogs
In case someone manages to make a general-purpose quantum computer one day, a group of IETF authors have put forward a proposal to harden Internet key exchange.…
Mozilla engineer spots a gap in online security, reaches for the patch kit
Amid the rise of HTTPS, there are still many spots where content that travels the web encrypted is ultimately stored in wide-open plain text, so a Mozilla engineer wants to close one of those gaps.…
Network time lords decide we don't need IP address swaps
The Internet Engineering Task Force has taken another small step in protecting everybody's privacy – this time, in making the Network Time Protocol a bit less spaffy.…
The IBM Lotus Domino server IMAP service contains a stack-based buffer overflow vulnerability in IMAP commands that refer to a mailbox name.

This can allow a remote, authenticated attacker to execute arbitrary code with the privileges of the Domino server.
IETF Security director Stephen Farrell offers a report card on evolving defences

FEATURE After three years of work on making the Internet more secure, the Internet Engineering Task Force (IETF) still faces bottlenecks: ordinary people's perception of risk, sysadmins worried about how to manage encrypted networks, and – more even than state snooping – an advertising-heavy 'net business model that relies on collecting as much information as possible.

In a wide-ranging 45-minute, 4,000-word interview (full transcript in this PDF), IETF Security Area Director Stephen Farrell gave a report card of what's happened since the Internet Architecture Board declared that "pervasive monitoring is an attack" in RFC 7258. Much of the discussion drew on Farrell's presentation to the NORDUnet conference in September; the slides are here.

Let's boil the ocean, so we can cook an elephant. And eat it.

Given the sheer scale of the effort involved – the IETF's list of RFCs passed the 8,000 mark in November – nobody expected the world to get a private Internet quickly, but Farrell told The Register some of the key in-IETF efforts have progressed well: UTA (Using TLS in Applications), DPRIVE (DNS Privacy), and TCPINC (TCP INCreased security, which among other things is working to revive the tcpcrypt proposal rejected earlier in the decade).

UTA: The idea is to get rid of the nasty surprises that happen when someone realises a standard (and therefore code written to that standard) still references a "laggard" protocol – so, for example, nobody gets burned complying with a standard that happens to reference a deprecated SSL or TLS version. "The UTA working group produced RFC 7525 (Recommendations for Secure Use of Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS), https://tools.ietf.org/html/rfc7525).
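To make the UTA idea concrete: rather than an implementation hard-wiring whatever TLS version was current when it was written, it pins a floor that tracks the current recommendation. A minimal Python sketch of the RFC 7525 posture (no SSLv3, no early TLS), assuming Python 3.7+; the hostname is just an example:

```python
import socket
import ssl

def connect_tls(host: str, port: int = 443) -> ssl.SSLSocket:
    ctx = ssl.create_default_context()            # verifies certs and hostnames
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # RFC 7525: refuse SSLv3/TLS 1.0/1.1
    sock = socket.create_connection((host, port))
    return ctx.wrap_socket(sock, server_hostname=host)

tls = connect_tls("theregister.co.uk")
print(tls.version())  # e.g. 'TLSv1.2' or 'TLSv1.3'
tls.close()
```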

"The last time I looked, there were something like 50 RFCs that are referencing that [The Register checked this list, provided by Farrell – it seems to be close to 70 already]."

The idea of UTA, he said, is that a protocol written 10 or 15 years ago should be updated so it no longer references the TLS version that was current back then. "That's being used in order to provide a common reference: as people update their implementations, they'll reference a more modern version of TLS, currently TLS 1.2, and as TLS 1.3 is finished, we have an automated-ish way of getting those updates percolating through to the documentation sets.

"That's quite successful, I think, because it normalises and updates and modernises a bunch of recommendations."

DPRIVE: Readers will recall that IETF 97 was the venue for the launch of Stubby, a demonstrator for securing DNS queries from the user to their DNS responder. That, Farrell said, is a good example of where DPRIVE is at – on the user side, it's ready for experimental code to go into service. "DNS privacy is something that is ready to experiment with.

"The current work in DPRIVE was how to [secure] the hop between [you] and the next DNS provider you talk to. That's an easy problem to tackle – you talk to that DNS resolver a lot, and you have some shared space, so the overhead of doing the crypto stuff is nowhere."

Getting upstream to where DNS queries become recursive – your ISP can't answer, so they pass the query upwards – is much harder, he said. "Assuming that [the ISP] needs to find 'where is theregister.co.uk?', he'll eventually talk to the UK ccTLD, and then he'll go talk to .co.uk and then he'll go talk to theregister.co.uk – it's forking the communications a lot more, and it's a little harder to see how to efficiently amortise the crypto.

"The DPRIVE working group are now examining whether they think they can produce some technology that will work for that part of the problem."
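Here is a rough sketch of that already-tractable stub-to-resolver hop in practice: an ordinary DNS query carried over TLS to port 853 (the RFC 7858 mechanism Stubby demonstrates) instead of cleartext UDP. The resolver address is an assumption; substitute any DNS-over-TLS-capable resolver whose certificate matches the name you connect to.

```python
import socket
import ssl
import struct

def build_query(name: str, qtype: int = 1) -> bytes:
    """A minimal DNS query: header plus one question (QTYPE 1 = A record)."""
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)  # ID, RD flag, 1 question
    question = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00" + struct.pack("!HH", qtype, 1)  # root label, QTYPE, QCLASS IN
    return header + question

ctx = ssl.create_default_context()
with socket.create_connection(("1.1.1.1", 853)) as sock:          # example resolver
    with ctx.wrap_socket(sock, server_hostname="1.1.1.1") as tls:
        query = build_query("theregister.co.uk")
        tls.sendall(struct.pack("!H", len(query)) + query)  # 2-byte length prefix, per TCP framing
        length = struct.unpack("!H", tls.recv(2))[0]
        print(f"resolver answered with {length} bytes of DNS response")
```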
TCPINC: Some of the questions in this working group may never be seen by ordinary Internet users, but they're still important, Farrell said. "I think we're close to having some tcpcrypt-based RFCs issued; there's been code for that all along. Whether or not we'll get much deployment of that, we'll see.

"I think there are a bunch of applications that maybe wouldn't be visible to the general public. Let's say you have an application server that has to run over a socket – an application that runs on top of the Linux kernel, say, where you have to use the kernel because of the interfaces involved, and you can't provide the security above the kernel because you need it inside. That's where TCPINC fits in.

"Storage – they have really complex interfaces between the network-available storage server and the kernel, and there's lots of complex distributed processing going on." That's important to "the likes of NetApp and EMC and so on", he said: "For some of those folks, being able to slot in security inside the kernel, with TCPINC, is attractive. Some, I might expect, will adopt that sort of thing – but it may never be seen on the public Internet."

Security and the end-to-end model

Farrell said more encryption is changing the Internet in ways the general public probably doesn't think about – but which they'll appreciate. The old end-to-end model – the "neutral Internet" – has been under both overt and covert attack for years: carriers want to be more than passive bit-pipes, so they look for ways that traffic management can become a revenue stream, while advertisers want access to traffic in transit so they can capture information and inject advertisements. Ubiquitous encryption changes both of these models, by re-empowering the endpoints.

Along the way, perhaps surprisingly, Farrell sees this as something that can make innovation on the Internet more democratic. He cited HTTP/2 and QUIC as important non-IETF examples: "there's a whole bunch of people motivated to use TLS almost ubiquitously, not only because they care about privacy, but because of performance: it moves the point of control back towards the endpoint, not the middle of the network.

"One of the interesting and fun things of trying to improve the security properties and privacy properties of the network is that it changes who controls what. If you encrypt a session, nobody in the middle can do something like inject advertising. It reasserts the end-to-end argument in a pretty strong way. If you do the crypto right, then the middlebox can't jump in and modify things – at least not without being detectable."

He argues that carriers' and network operators' "middleboxes" became an innovation roadblock. "The real downside of having middleboxes doing things is that they kind of freeze what you're doing, and prevent you innovating.

"One of the reasons people did HTTP/2 implementations that only ever talk ciphertext is because they found a lot of middleboxes would break the connection if they saw anything that wasn't HTTP 1.1. In other words, the cleartext had the effect that the middleboxes, that were frozen in time, would prevent the edges from innovating. Once they encrypted the HTTP/2 traffic, the middleboxes were willing to say 'it's TLS so I won't go near it', and the innovation can kick off again at the edges."

Won't somebody think of the sysadmin?

Systems administrators – in enterprises as well as in carriers – are less in love with crypto. "Network management people have been used to managing cleartext networks," he said. For more than 20 years, for perfectly legitimate reasons – and without betraying their users – sysadmins would look into packets, see what they contained, and when sensible do something about them. "Not for nefarious reasons – in order to detect attacks, in order to optimise traffic, and so on. We're changing that, and that also means the technology they're using will be undergoing change, to deal with much more ciphertext than plaintext.

"We need to learn better ways of how to fulfil those same functions on the network," he said. "If you had some security mechanism in your network for detecting some malware attack traffic, instead of being able to operate that from the middle of the network, it pushes a requirement on you to move that to the edge."

Commercial services are starting to understand how this can work, he said: "If you look at some of the commercial instant messaging providers that have introduced end-to-end encryption of their messaging – they have found they can move those functions in their networks to new places to do what they need to do. It means change, but it doesn't make network management impossible."

Advertising models will change

Companies collaborating to collect advertising data remains a big challenge, he said.

That's likely to change – "there's no reason why a particular business model has to last forever" – but in the meantime, "it's hard to see how we make a dramatic improvement in privacy. We can make some improvements, but how we make it dramatically better – it's hard.

"The incentives are aligned to make all the service providers want to be privacy-unfriendly, from the point of view of 'me', but not perhaps the point of view of 99 per cent of people who use the Internet, and seem happy enough with it." Breaches and leaks are frightening the service providers, which helps, because providers "realise that storing everything, forever, is toxic, and in the end they'll get caught by it."

About the *cough* NSA *cough*

The Register also asked: what protects future standards against security organisations polluting standards, as they did with DUAL-EC? "As an open organisation, we need to be open to technical contributions from anywhere," Farrell said, "be that an employee of the NSA, or be that – as we've had in one case – a teenager from the Ukraine who was commenting on RFCs five or six years ago."

It has to be handled socially, rather than by process, he argued, citing the IETF's creation of the Crypto Forum Research Group (CFRG), chaired by Alexey Melnikov and Kenny Paterson and designed to bring together IETF standards authors and the academic crypto community. He described it as a "lightweight process" designed to assess crypto proposals – have they been reviewed? Is the proposal novel and maybe not ready for prime time?

"The number of NSA employees that attend IETF [meetings] – I don't think it's a useful metric at all. I think how well peoples' contributions are examined is a much more useful metric, and there, things like having the CFRG, having academic cryptographers interacting much more with the standards community – those are more effective ways of doing that.

"We've set up a thing called the Applied Networking Research Prize, which is a prize for already-published academic work. It pays for the academic to come to an IETF meeting, give us a talk, get them involved" (Paterson first became involved in the CFRG as an invited academic who won the prize).

Spooks want to monitor everyone because they believe everyone might be guilty, he added, and that's a mistake. "We should not think people are guilty by association.

"That's a fallacy – if you believe that NSA employees are not allowed to contribute, you're making the same mistake they're making." ®

Time is running out for NTP

There are two types of open source projects: those with corporate sponsorship and those that fall under the “labor of love” category.

Actually, there's a third variety: projects that get some support but have to keep looking for the next sponsor.

Some open source projects are so widely used that if anything goes wrong, everyone feels the ripple effects. OpenSSL is one such project; when the Heartbleed flaw was discovered in the open source cryptography library, organizations scrambled to identify and fix all their vulnerable networking devices and software. Network Time Protocol (NTP) arguably plays as critical a role in modern computing, if not more so; the open source protocol is used to synchronize clocks on servers and devices to make sure they all have the same time.

Yet the fact remains that NTP is woefully underfunded and undersupported. NTP is more than 30 years old—it may be the oldest codebase running on the internet.

Despite some hiccups, it continues to work well.
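For a sense of what that venerable codebase does all day, here is a minimal SNTP-style client sketch in Python: a toy, not a substitute for ntpd, and the pool hostname is just an example.

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between the 1900 (NTP) and 1970 (Unix) epochs

def ntp_time(server: str = "pool.ntp.org") -> float:
    packet = bytearray(48)
    packet[0] = 0x1B  # LI=0, version=3, mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5)
        sock.sendto(packet, (server, 123))
        reply, _ = sock.recvfrom(48)
    # Transmit timestamp: seconds since 1900, in bytes 40-43 of the reply
    seconds = struct.unpack("!I", reply[40:44])[0]
    return seconds - NTP_EPOCH_OFFSET

print(time.ctime(ntp_time()))  # should closely match a well-synced local clock
```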

But the project's future is uncertain because the number of volunteer contributors has shrunk, and there's too much work for one person—principal maintainer Harlan Stenn—to handle. When support is limited, the project has to pick and choose which tasks it can afford to complete, which slows down maintenance and stifles innovation.

"NTF's NTP project remains severely underfunded," the project team wrote in a recent security advisory. "Google was unable to sponsor us this year, and currently, the Linux Foundation's Core Infrastructure Initiative only supports Harlan for about 25 percent of his hours per week and is restricted to NTP development only."

Last year, the Linux Foundation renewed its financial commitment to NTP for another year via the Core Infrastructure Initiative, but it isn't enough. The absence of a sponsor has a direct impact on the project: one of the vulnerabilities addressed in the recently released ntp-4.2.8p9 update was originally reported to the project back in June.
In September, the researcher who discovered the flaw, which could be exploited with a single, malformed packet, asked for a status update because 80 days had passed since his initial report.

As the vulnerability had already existed for more than 100 days, Magnus Stubman was concerned that further delays would give "people with bad intentions" more chances to find it too.

Stenn's response was blunt. "Reality bites—we remain severely under-resourced for the work that needs to be done. You can yell at us about it, and/or you can work to help us, and/or you can work to get others to help us," he wrote. Researchers are reporting security issues, but there aren't enough developers to help Stenn fix them, test the patches, and document the changes.

The Linux Foundation's CII support doesn't cover work on new initiatives, such as Network Time Security (NTS) and the General Timestamp API, or on the standards and best-practices work currently underway.

The initial support from CII covers "support for developers as well as infrastructure support." NTS, currently an Internet Engineering Task Force (IETF) draft, would give administrators a way to add security to NTP by authenticating the time-synchronization exchange.

The mechanism uses Datagram Transport Layer Security (DTLS) to provide cryptographic security for NTP.

The General Timestamp API project would develop a new timestamp format that carries more information than just the date and time, making it more useful to applications.

The goal is also to develop an efficient and portable library API for working with those timestamps.

Open source projects and initiatives struggle to keep going when there isn't enough support, sponsorship, financial aid, and manpower.

This is why open source security projects frequently struggle to gain traction among organizations. Organizations don’t want to wind up relying on a project when future support is uncertain.
In a perfect world, open source projects that are critical parts of core infrastructure would have permanent funding. NTP is buried so deeply in the infrastructure that practically everyone reaps the project's benefits for free.

NTP needs more than codebase maintenance, bug fixes, and incremental improvements. Without help, the future of the project remains uncertain. NTP—or the Network Time Foundation, established to run the project—should not have to struggle to find corporate sponsors and donors.

"If accurate, secure time is important to you or your organization, help us help you: Donate today or become a member," NTP's project team wrote.
Nothing to see here. Move along

As an ostensible courtesy to internet customers now facing 1TB monthly data caps, Comcast has begun notifying those approaching their quotas through popup browser windows. But the way it delivers those messages – injecting web code into the customer's browsing session – undermines online security, said iOS developer Chris Dzombak in a blog post on Tuesday.

In November, Comcast expanded the areas in the US where it implements data caps for internet customers to 28 states, a practice it has been experimenting with for several years. The notifications provide a practical way for Comcast to keep customers apprised of dwindling data rations, and have previously been used for malware warnings.

Dzombak points out that Comcast described its injection technique in an informational RFC (6108) to the IETF in 2011. He suggests that Comcast submitted the RFC to legitimize its practice, which he likens to a man-in-the-middle attack.

"This practice will train customers to expect that their ISP sends them critical messages by injecting them into random webpages as they browse," said Dzombak. "Moreover, these notifications can plausibly contain important calls to action which involve logging into the customer's Comcast account and which might ask for financial information."

Dzombak argues that Comcast's notification format could easily be co-opted and spoofed by an online attacker. Comcast customers, accustomed to interacting with such popup windows, would presumably be more trusting of such interaction and thus more susceptible to social engineering.

"Unfortunately, when such a notification appears on a non-Comcast web page, it's very difficult for an internet user to ascertain whether the notification is legitimately from Comcast," said Dzombak in an email.

In response to a query about the practice, a Comcast spokesperson told The Register via email, "This has come up in the past and is not new."

@cdzombak FWIW we've done a lot of those over the years, mostly 2 warn of poss. malware infection. But ur points are fair. (see next tweet)
— Jason Livingood (@jlivingood) October 7, 2016

Last month, Jason Livingood, Comcast's VP of tech policy and standards and one of the coauthors of the RFC, offered a less dismissive response. In a Twitter thread, he acknowledged Dzombak's concerns, saying "[your] points are fair."

In any event, Comcast's days of injecting web content appear to be numbered. As Dzombak observes, content injection doesn't work with HTTPS websites and, thanks to Google, Mozilla, and other tech companies, more and more websites are supporting HTTPS. ®
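To see concretely why HTTPS shuts this down, here is a toy Python illustration (emphatically not Comcast's actual system) of RFC 6108-style injection: anyone on the path of a cleartext HTTP response can splice markup into the page before it reaches the browser. The notice text and markup are invented.

```python
# Toy illustration of in-path content injection into cleartext HTTP.
# A real middlebox would operate at the packet level and would also have
# to fix up the Content-Length header after modifying the body.

NOTICE = (b'<div style="position:fixed;top:0;background:#ffc">'
          b'You have used 90% of your monthly data.</div>')

def inject_notice(http_body: bytes) -> bytes:
    """Splice a notification into an HTML body observed in cleartext."""
    return http_body.replace(b"</body>", NOTICE + b"</body>", 1)

page = b"<html><body><h1>Hello</h1></body></html>"
print(inject_notice(page).decode())

# Over HTTPS the same bytes are ciphertext: the middlebox can't find the
# </body> tag, and any blind tampering breaks the TLS record integrity
# check, so the browser aborts instead of rendering a doctored page.
```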