
Tag: Internet Engineering Task Force (IETF)

Let’s harden Internet crypto so quantum computers can’t crack it

Draft blends asymmetric public/private key encryption and one-time pad analogs
In case someone manages to make a general-purpose quantum computer one day, a group of IETF authors have put forward a proposal to harden Internet key exchange.…

Idea to encrypt Web traffic at rest hits the IETF’s Standard...

Mozilla engineer spots a gap in Web security, reaches for the patch kit
In spite of the rise of HTTPS, there are still spots where content originating on the Web can remain unencrypted, so a Mozilla engineer wants to close one of those gaps.…

Idea to encrypt stuff on the web at rest hits the...

Mozilla engineer spots a gap in online security, reaches for the patch kit
Amid the rise of HTTPS, there are still many spots where content that is shifted encrypted across the web is ultimately stored in wide-open plain text, so a Mozilla engineer wants to close one of those gaps.…

Network Time Protocol updated to spook-harden user comms

Network time lords decide we don't need IP address swaps
The Internet Engineering Task Force has taken another small step in protecting everybody's privacy – this time, in making the Network Time Protocol a bit less spaffy.…

VU#676632: IBM Lotus Domino server mailbox name stack buffer overflow

The IBM Lotus Domino server IMAP service contains a stack-based buffer overflow vulnerability in IMAP commands that refer to a mailbox name.

This can allow a remote, authenticated attacker to execute arbitrary code with the privileges of the Domino server.
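For context on the bug class, the sketch below shows roughly what an IMAP command referring to a mailbox name looks like and how an overlong name is the kind of input that exercises a stack-based overflow in a parser that copies the name into a fixed-size buffer. It is a generic, hedged illustration in Python, not Domino code and not an exploit; the command tag and the 4096-byte length are placeholders.

```python
# Illustrative only: the shape of an IMAP command whose mailbox-name argument
# could overflow a fixed-size stack buffer in a vulnerable parser.
# The tag values and the 4096-byte length are hypothetical placeholders.

def build_select_command(tag: str, mailbox: str) -> bytes:
    """Build an IMAP SELECT command that references a mailbox name."""
    return f'{tag} SELECT "{mailbox}"\r\n'.encode()

# A benign request uses a short, expected mailbox name...
normal = build_select_command("a001", "INBOX")

# ...while a name far longer than any fixed on-server buffer is the sort of
# input that triggers the overflow class described in VU#676632.
oversized = build_select_command("a002", "A" * 4096)

print(len(normal), len(oversized))
```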

In the three years since IETF said pervasive monitoring is an...

IETF Security director Stephen Farrell offers a report card on evolving defences
FEATURE After three years of work on making the Internet more secure, the Internet Engineering Task Force (IETF) still faces bottlenecks: ordinary people's perception of risk, sysadmins worried about how to manage encrypted networks, and – more even than state snooping – an advertising-heavy 'net business model that relies on collecting as much information as possible.
In a wide-ranging 45-minute, 4,000-word interview (full transcript in this PDF), IETF Security Area Director Stephen Farrell gave a report card on what's happened since the Internet Architecture Board declared that “pervasive monitoring is an attack”, in RFC 7258. Much of the discussion used Farrell's presentation to the NORDUnet conference in September, and the slides are here.
Let's boil the ocean, so we can cook an elephant.

And eat it. Given the sheer scale of the effort involved – the IETF's list of RFCs passed the 8,000 mark in November – nobody expected the world to get a private Internet quickly, but Farrell told The Register some of the key in-IETF efforts have progressed well: its UTA (Using TLS in Applications), DPRIVE (DNS Privacy), and TCPINC (TCP INCreased security, which among other things is working to revive the tcpcrypt proposal rejected earlier in the decade).
UTA: The idea is to get rid of the nasty surprises that happen when someone realises a standard (and therefore code written to that standard) still references a “laggard” protocol – so, for example, nobody gets burned complying with a standard that happens to reference a deprecated SSL or TLS standard. “The UTA working group produced RFC 7525 (Recommendations for Secure Use of Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS), https://tools.ietf.org/html/rfc7525).
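To make the UTA goal concrete, here is a minimal sketch, assuming Python's standard ssl module purely as an example environment, of the kind of client-side policy RFC 7525-style recommendations push implementations toward: refusing deprecated SSL and early-TLS versions rather than silently accepting whatever an older specification happened to reference.

```python
import ssl

# A minimal client-side TLS policy in the spirit of RFC 7525: refuse
# SSLv3/TLS 1.0/TLS 1.1 instead of inheriting whatever an old spec cited.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject deprecated protocol versions
context.check_hostname = True                     # library default, shown for clarity
context.verify_mode = ssl.CERT_REQUIRED           # library default, shown for clarity

# Any socket wrapped with this context will now fail the handshake if the
# peer cannot negotiate at least TLS 1.2.
print(context.minimum_version)
```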

The last time I looked, there were something like 50 RFCs that are referencing that [The Register checked this list, provided by Farrell – it seems to be close to 70 already].” The idea of UTA is that a protocol written 10 or 15 years ago should be updated so it no longer references the then-current version of TLS, he said. “That's being used in order to provide a common reference: as people update their implementations, they'll reference a more modern version of TLS, currently TLS 1.2, and as TLS 1.3 is finished, we have an automated-ish way of getting those updates percolating through to the documentation sets. “That's quite successful, I think, because it normalises and updates and modernises a bunch of recommendations.”
DPRIVE: Readers will recall that IETF 97 was the venue for the launch of Stubby, a demonstrator for securing DNS queries from the user to their DNS responder. That, Farrell said, is a good example of where DPRIVE is at – on the user side, it's ready for experimental code to go into service. “DNS privacy is something that is ready to experiment with.

The current work in DPRIVE was how to [secure] the hop between [you] and the next DNS provider you talk to. “That's an easy problem to tackle – you talk to that DNS resolver a lot, and you have some shared space, so the overhead of doing the crypto stuff is nowhere.” Getting upstream to where DNS queries become recursive – your ISP can't answer, so they pass the query upwards – is much harder, he said. “Assuming that [the ISP] needs to find “where is theregister.co.uk?”, he'll eventually talk to the UK ccTLD, and then he'll go talk to .co.uk and then he'll go talk to theregister.co.uk – it's forking the communications a lot more, and it's a little harder to see how to efficiently amortise the crypto. “The DPRIVE working group are now examining whether they think they can produce some technology that will work for that part of the problem.”
TCPINC: Some of the questions in this working group may never be seen by ordinary Internet users, but they're still important, Farrell said. “I think we're close to having some tcpcrypt-based RFCs issued, there's been code for that all along. Whether or not we'll get much deployment of that, we'll see.” “I think there are a bunch of applications that maybe wouldn't be visible to the general public. Let's say you have an application server that has to run over a socket – an application that runs on top of the Linux kernel, say, where you have to use the kernel because of the interfaces involved, and you can't provide the security above the kernel because you need it inside. “That's where TCPINC fits in.
Storage – they have really complex interfaces between the network-available storage server and the kernel, and there's lots of complex distributed processing going on.” That's important to “the likes of NetApp and EMC and so on”, he said: “For some of those folks, being able to slot in security inside the kernel, with TCPINC, is attractive.
Some, I might expect, will adopt that sort of thing – but it may never be seen on the public Internet.”
Security and the end-to-end model
Farrell said more encryption is changing the Internet in ways the general public probably doesn't think about – but which they'll appreciate. The old end-to-end model – the “neutral Internet” – has been under both overt and covert attack for years: carriers want to be more than passive bit-pipes, so they look for ways that traffic management can become a revenue stream; while advertisers want access to traffic in transit so they can capture information and inject advertisements. Ubiquitous encryption changes both of these models, by re-empowering the endpoints.

Along the way, perhaps surprisingly, Farrell sees this as something that can make innovation on the Internet more democratic. He cited HTTP/2 and QUIC as important non-IETF examples: “there's a whole bunch of people motivated to use TLS almost ubiquitously, not only because they care about privacy, but because of performance: it moves the point of control back towards the endpoint, not the middle of the network. “One of the interesting and fun things of trying to improve the security properties and privacy properties of the network is that it changes who controls what. “If you encrypt a session, nobody in the middle can do something like inject advertising. “It reasserts the end-to-end argument in a pretty strong way.
If you do the crypto right, then the middlebox can't jump in and modify things – at least not without being detectable.” He argues that carriers' and network operators' “middleboxes” became an innovation roadblock. “The real downside of having middleboxes doing things is that they kind of freeze what you're doing, and prevent you innovating. “One of the reasons people did HTTP2 implementations, that only ever talk ciphertext, is because they found a lot of middleboxes would break the connection if they saw anything that wasn't HTTP 1.1. “In other words, the cleartext had the effect that the middleboxes, that were frozen in time, would prevent the edges from innovating. Once they encrypted the HTTP2 traffic, the middleboxes were willing to say 'it's TLS so I won't go near it', and the innovation can kick off again at the edges.”
Won't somebody think of the sysadmin?
Systems administrators – in enterprises as well as in carriers – are less in love with crypto. “Network management people have been used to managing cleartext networks,” he said. For more than 20 years, for perfectly legitimate reasons – and without betraying their users – sysadmins would look into packets, see what they contained, and when sensible do something about them. “Not for nefarious reasons – in order to detect attacks, in order to optimise traffic, and so on. We're changing that, and that also means the technology they're using will be undergoing change, to deal with much more ciphertext than plaintext. “We need to learn better ways of how to fulfil those same functions on the network,” he said. “If you had some security mechanism in your network for detecting some malware attack traffic, instead of being able to operate that from the middle of the network, it pushes a requirement on you to move that to the edge.” Commercial services are starting to understand how this can work, he said: “If you look at some of the commercial instant messaging providers, that have introduced end-to-end encryption of their messaging – they have found they can move those functions in their networks to new places to do what they need to do. “It means change, but it doesn't make network management impossible.”
Advertising models will change
Companies collaborating to collect advertising data remains a big challenge, he said.

That's likely to change – “there's no reason why a particular business model has to last forever”, but in the meantime, “it's hard to see how we make a dramatic improvement in privacy. “We can make some improvements, but how we make it dramatically better – it's hard.

“The incentives are aligned to make all the service providers want to be privacy-unfriendly, from the point of “me”, but not perhaps the point of view of 99 per cent of people who use the Internet, and seem happy enough with it.” Breaches and leaks are frightening the service providers, which helps, because providers “realise that storing everything, forever, is toxic, and in the end they'll get caught by it.”
About the cough NSA cough
The Register also asked: what protects future standards against security organisations polluting standards, as they did with DUAL-EC? “As an open organisation, we need to be open to technical contributions from anywhere,” Farrell said, “be that an employee of the NSA, or be that – as we've had in one case – a teenager from the Ukraine who was commenting on RFCs five or six years ago.” It has to be handled socially, rather than by process, he argued, citing the IETF's creation of the Crypto Forum Research Group, chaired by Alexey Melnikov and Kenny Paterson and designed to bring together IETF standards authors and the academic crypto community. He described it as a “lightweight process” designed to assess crypto proposals – have they been reviewed? Is the proposal novel and maybe not ready for prime time? “The number of NSA employees that attend IETF [meetings] – I don't think it's a useful metric at all.
I think how well people's contributions are examined is a much more useful metric, and there, things like having the CFRG, having academic cryptographers interacting much more with the standards community – those are more effective ways of doing that. “We've set up a thing called the Applied Networking Research Prize, which is a prize for already-published academic work.
It pays for the academic to come to an IETF meeting, give us a talk, get them involved” (Paterson first became involved in the CFRG as an invited academic who won the prize). Spooks want to monitor everyone because they believe everyone might be guilty, he added, and that's a mistake. “We should not think people are guilty by association.

That's a fallacy – if you believe that NSA employees are not allowed to contribute, you're making the same mistake they're making.” ®

Time is running out for NTP

There are two types of open source projects: those with corporate sponsorship and those that fall under the “labor of love” category.

Actually, there’s a third variety: projects that get some support but have to keep looking ahead for the next sponsor. Some open source projects are so widely used that if anything goes wrong, everyone feels the ripple effects. OpenSSL is one such project; when the Heartbleed flaw was discovered in the open source cryptography library, organizations scrambled to identify and fix all their vulnerable networking devices and software. Network Time Protocol (NTP) arguably plays as critical a role in modern computing, if not more; the open source protocol is used to synchronize clocks on servers and devices to make sure they all have the same time. Yet, the fact remains that NTP is woefully underfunded and undersupported. NTP is more than 30 years old—it may be the oldest codebase running on the internet.
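As a reminder of what the protocol actually does on the wire, here is a minimal sketch of an SNTP-style client query, assuming Python and the public pool.ntp.org servers purely for illustration: it sends a 48-byte mode-3 packet and converts the server's transmit timestamp from the NTP epoch (1900) to the Unix epoch.

```python
import socket
import struct

# Minimal SNTP-style query: an illustration only, not a replacement for ntpd.
NTP_SERVER = "pool.ntp.org"      # illustrative public server pool
NTP_EPOCH_OFFSET = 2208988800    # seconds between 1900-01-01 and 1970-01-01

# 48-byte request; first byte 0x1B = leap indicator 0, version 3, mode 3 (client).
request = b"\x1b" + 47 * b"\0"

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(request, (NTP_SERVER, 123))
    response, _ = sock.recvfrom(512)

# The transmit timestamp's seconds field lives at bytes 40-43 of the reply.
ntp_seconds = struct.unpack("!I", response[40:44])[0]
unix_seconds = ntp_seconds - NTP_EPOCH_OFFSET
print("server time (unix seconds):", unix_seconds)
```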

Despite some hiccups, it continues to work well.

But the project’s future is uncertain because the number of volunteer contributors has shrunk, and there’s too much work for one person—principal maintainer Harlan Stenn—to handle. When there is limited support, the project has to pick and choose what tasks it can afford to complete, which slows down maintenance and stifles innovation. “NTF’s NTP project remains severely underfunded,” the project team wrote in a recent security advisory. “Google was unable to sponsor us this year, and currently, the Linux Foundation’s Core Infrastructure Initiative only supports Harlan for about 25 percent of his hours per week and is restricted to NTP development only.”
Last year, the Linux Foundation renewed its financial commitment to NTP for another year via the Core Infrastructure Initiative, but it isn’t enough. The absence of a sponsor has a direct impact on the project. One of the vulnerabilities addressed in the recently released ntp-4.2.8p9 update was originally reported to the project back in June.
In September, the researcher who discovered the flaw, which could be exploited with a single, malformed packet, asked for a status update because 80 days had passed since his initial report.

As the vulnerability had already existed for more than 100 days, Magnus Stubman was concerned that more delays gave “people with bad intentions” more chances to also find it. Stenn’s response was blunt. “Reality bites—we remain severely under-resourced for the work that needs to be done. You can yell at us about it, and/or you can work to help us, and/or you can work to get others to help us,” he wrote. Researchers are reporting security issues, but there aren’t enough developers to help Stenn fix them, test the patches, and document the changes.

The Linux Foundation’s CII support doesn’t cover the work on new initiatives, such as Network Time Security (NTS) and the General Timestamp API, or on the standards and best-practices work currently underway.

The initial support from CII covers “support for developers as well as infrastructure support.” NTS, currently in draft form at the Internet Engineering Task Force (IETF), would give administrators a way to add security to NTP by protecting time synchronization itself.

The mechanism uses Datagram Transport Layer Security (DTLS) to provide cryptographic security for NTP.

The General Timestamp API effort would develop a new timestamp format that carries more information than just the date and time, making it more useful.

The goal is to also develop an efficient and portable library API to use those time stamps.
Open source projects and initiatives struggle to keep going when there isn’t enough support, sponsorship, financial aid, and manpower.

This is why open source security projects frequently struggle to gain traction among organizations. Organizations don’t want to wind up relying on a project when future support is uncertain.
In a perfect world, open source projects that are critical parts of core infrastructure should have permanent funding. NTP is buried so deeply in the infrastructure that practically everyone reaps the project’s benefits for free. NTP needs more than simply having its codebase maintained, its bugs fixed, and its software improved. Without help, the future of the project remains uncertain. NTP—or the Network Time Foundation established to run the project—should not have to struggle to find corporate sponsors and donors. “If accurate, secure time is important to you or your organization, help us help you: Donate today or become a member,” NTP’s project team wrote.

Comcast shrugs off critique of injected notifications

Nothing to see here. Move along
As an ostensible courtesy to internet customers now facing 1TB monthly data caps, Comcast has begun notifying those approaching their quotas through popup browser windows. But the way it delivers those messages – injecting web code into the customer's browsing session – undermines online security, said iOS developer Chris Dzombak in a blog post on Tuesday.
In November, Comcast expanded the areas in the US where it implements data caps for internet customers to 28 states, a practice it has been experimenting with for several years. The notifications provide a practical way for Comcast to keep customers apprised of dwindling data rations, and have previously been used for malware warnings.
Dzombak points out that Comcast described its injection technique in an informational RFC (6108) to the IETF in 2011. He suggests that Comcast submitted the RFC to legitimize its practice, which he likens to a man-in-the-middle attack. "This practice will train customers to expect that their ISP sends them critical messages by injecting them into random webpages as they browse," said Dzombak. "Moreover, these notifications can plausibly contain important calls to action which involve logging into the customer's Comcast account and which might ask for financial information."
Dzombak argues that Comcast's notification format could easily be co-opted and spoofed by an online attacker. Comcast customers, accustomed to interacting with such popup windows, would presumably be more trusting of such interaction and thus more susceptible to social engineering. "Unfortunately, when such a notification appears on a non-Comcast web page, it's very difficult for an internet user to ascertain whether the notification is legitimately from Comcast," said Dzombak in an email.
In response to a query about the practice, a Comcast spokesperson via email told The Register, "This has come up in the past and is not new."
@cdzombak FWIW we've done a lot of those over the years, mostly 2 warn of poss. malware infection. But ur points are fair. (see next tweet) — Jason Livingood (@jlivingood) October 7, 2016
Last month, Jason Livingood, Comcast's VP of tech policy and standards and one of the coauthors of the RFC, offered a less dismissive response. In a Twitter thread, he acknowledged Dzombak's concerns, saying "[your] points are fair." In any event, Comcast's days of injecting web content appear to be numbered. As Dzombak observes, content injection doesn't work with HTTPS websites and, thanks to Google, Mozilla, and other tech companies, more and more websites are supporting HTTPS. ®

IETF plants privacy test inside DNS

'Stubby' aims to protect your metadata from snoopers
The Internet Engineering Task Force's (IETF's) years-long effort to protect Internet users has taken a small step forward, with one option for better Domain Name System (DNS) privacy reaching the test stage. "Stubby", created by software developer and DNS privacy advocate Sara Dickinson, lets users test encrypted DNS queries. The idea isn't to flick the switch to encryption in one big hit, but rather, to provide a resolver that can accept connections and return responses over Transport Layer Security (TLS) at the user-side. The demonstrator's only dependency is OpenSSL version 1.0.2 or better, so Stubby can authenticate hostnames.
The problem Dickinson and others are trying to address is that DNS requests are extremely revealing about user activity, since they identify every server you seek out on the public Internet. As the American Civil Liberties Union's Daniel Kahn Gillmor told the IETF in a joint presentation with Dickinson at last week's IETF 97 in Seoul, DNS metadata leakage allows individuals to be identified. While DNS data is public, Gillmor argued, individual DNS transactions should not be, and stub-to-resolver encryption is arguably the best place to start.
Which brings us back to Stubby: since fixing the whole DNS in one shot is something of a Mars mission, one of the steps along the way is to privacy-enable a “stub resolver” (for example, the DNS resolver in a home router counts as a stub, and if it were supported, it could encrypt its communications with an ISP's DNS). On a Linux or MacOS box, Stubby provides the local stub, sending outgoing DNS requests from 127.0.0.1 to a DNS Privacy Server (there are currently four), using the “strict” or “opportunistic” profiles defined in the IETF draft at https://tools.ietf.org/html/draft-ietf-dprive-dtls-and-tls-profiles-07. Dickinson's company, Sinodun, offers privacy servers at dnsovertls.sinodun.com and dnsovertls1.sinodun.com, and there are two others at getdnsapi.net and dns.cmrg.net.
The work falls into the context of the Internet Architecture Board's declaration in 2014 that “pervasive monitoring is an attack” that standards developers need to unite to defeat. ®
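For a feel of what stub-to-resolver encryption looks like on the wire, here is a minimal sketch of a DNS-over-TLS query in Python using only the standard library. It assumes getdnsapi.net, one of the privacy servers named above, is still answering on the standard DNS-over-TLS port 853; the query itself is ordinary DNS wire format with the two-byte length prefix used over stream transports.

```python
import socket
import ssl
import struct

# Minimal DNS-over-TLS sketch: a single A-record query for theregister.co.uk.
# Assumes getdnsapi.net (one of the privacy servers named above) still answers
# on port 853; any DNS-over-TLS resolver you trust works the same way.
SERVER = "getdnsapi.net"
QNAME = "theregister.co.uk"

def build_query(name: str) -> bytes:
    """Build a plain DNS query (A record, class IN) in wire format."""
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)  # RD flag set
    qname = b"".join(struct.pack("!B", len(p)) + p.encode() for p in name.split("."))
    return header + qname + b"\x00" + struct.pack("!HH", 1, 1)   # QTYPE=A, QCLASS=IN

def recv_exact(sock, n: int) -> bytes:
    """Read exactly n bytes from a stream socket."""
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("connection closed early")
        data += chunk
    return data

query = build_query(QNAME)
context = ssl.create_default_context()

with socket.create_connection((SERVER, 853), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=SERVER) as tls:
        # Stream transports prefix each DNS message with a two-byte length.
        tls.sendall(struct.pack("!H", len(query)) + query)
        (length,) = struct.unpack("!H", recv_exact(tls, 2))
        answer = recv_exact(tls, length)

print("received", len(answer), "bytes; answers in reply:", struct.unpack("!H", answer[6:8])[0])
```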

Researchers tag new brace of bugs in NTP, but they’re fixable

Party like it's 1985 1955 2015 WHAT DATE IS IT ANYWAY?
Back in January, Cisco dropped a bunch of NTP (network time protocol) patches; now, it's emerged that the research behind that round of fixes also turned up other bugs that haven't yet been fixed. This week, Ciscoans Matt Gundy and Jonathan Gardner teamed up with Boston University's Aanchal Malhotra, Mayank Varia, Haydn Kennedy and Sharon Goldberg to show off a bunch of possible attacks against NTP's datagram protocol. The bad news: the group reckons millions of IP addresses are currently vulnerable. The good news? The protocol is fixable, and the researchers urge the IETF to adopt a cryptographic model for better client/server NTP protocols.
Fooling around with NTP is a handy attack vector, since you can spoil cryptographic calculations, “roll back time”, or cause denial-of-service attacks. A lot of attacks against NTP are man-in-the-middle attacks; what the Cisco / Boston University team demonstrates are three off-path attacks (one of which, CVE-2015-8138, was fixed by Cisco in January, and has also been fixed in later versions of the NTP daemon). The vulnerabilities exist because RFC 5905, which defines NTP, has a fundamental problem: “client/server mode and symmetric mode have conflicting security requirements; meanwhile, RFC5905 suggests identical processing for incoming packets of both modes”.
Vulnerabilities discussed in the paper include:
A low-rate denial-of-service attack against the NTP daemon's “interleaved mode” (supposed to make timestamps more accurate); and
Timeshifting attacks that haven't yet been fixed.
However, because these are protocol vulnerabilities, the researchers say fixing NTP itself is more important.

They propose replacing the current model with one that uses more cryptography. While the 'net's druids contemplate that proposal, the group reminds sysadmins: “Finally, we suggest the firewalls and ntpd clients block all incoming NTP control queries from unwanted IPs”. ®

NSA could put undetectable “trapdoors” in millions of crypto keys

Researchers have devised a way to place undetectable backdoors in the cryptographic keys that protect websites, virtual private networks, and Internet servers.

The feat allows hackers to passively decrypt hundreds of millions of encrypted communications as well as cryptographically impersonate key owners. The technique is notable because it puts a backdoor—or in the parlance of cryptographers, a "trapdoor"—in 1,024-bit keys used in the Diffie-Hellman key exchange.

Diffie-Hellman significantly raises the burden on eavesdroppers because it regularly changes the encryption key protecting an ongoing communication.

Attackers who are aware of the trapdoor have everything they need to decrypt Diffie-Hellman-protected communications over extended periods of time, often measured in years. Knowledgeable attackers can also forge cryptographic signatures that are based on the widely used digital signature algorithm. As with all public key encryption, the security of the Diffie-Hellman protocol is based on number-theoretic computations involving prime numbers so large that the problems are prohibitively hard for attackers to solve.

The parties are able to conceal secrets within the results of these computations.
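To see why the choice of prime matters, here is a toy Diffie-Hellman exchange, a minimal sketch in Python with a deliberately small, insecure prime chosen only for illustration: both sides publish g^a mod p and g^b mod p, and an eavesdropper who could take discrete logarithms modulo p would recover a private exponent and with it the shared secret.

```python
import secrets

# Toy Diffie-Hellman exchange. Real deployments use primes of 1,024 bits and
# up (the article's focus); this 64-bit prime is for illustration only and
# offers no security whatsoever.
p = 0xFFFFFFFFFFFFFFC5   # a small prime standing in for a real group modulus
g = 2                    # commonly used base

# Each party picks a private exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 1
b = secrets.randbelow(p - 2) + 1
A = pow(g, a, p)         # Alice's public value
B = pow(g, b, p)         # Bob's public value

# Both sides arrive at the same shared secret without ever sending a or b.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob

# An attacker sees only p, g, A and B; recovering a from A = g^a mod p is the
# discrete logarithm problem. A trapdoored p makes that problem far easier
# for whoever planted the trapdoor.
print(hex(shared_alice))
```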

A special prime devised by the researchers, however, contains certain invisible properties that make the secret parameters unusually susceptible to discovery.

The researchers were able to break one of these weakened 1,024-bit primes in slightly more than two months using an academic computing cluster of 2,000 to 3,000 CPUs.
Backdooring crypto standards—"completely feasible"
To the holder, a key with a trapdoored prime looks like any other 1,024-bit key.

To attackers with knowledge of the weakness, however, the discrete logarithm problem that underpins its security is about 10,000 times easier to solve.

This efficiency makes keys with a trapdoored prime ideal for the type of campaign former National Security Agency contractor Edward Snowden exposed in 2013, which aims to decode vast swaths of the encrypted Internet. "The Snowden documents have raised some serious questions about backdoors in public key cryptography standards," Nadia Heninger, one of the University of Pennsylvania researchers who participated in the project, told Ars. "We are showing that trapdoored primes that would allow an adversary to efficiently break 1,024-bit keys are completely feasible."
While NIST—short for the National Institute of Standards and Technology—has recommended minimum key sizes of 2,048 bits since 2010, keys of half that size remain abundant on the Internet.

As of last month, a survey performed by the SSL Pulse service found that 22 percent of the top 200,000 HTTPS-protected websites performed key exchanges with 1,024-bit keys.

A belief that 1,024-bit keys can only be broken at great cost by nation-sponsored adversaries is one reason for the wide use. Other reasons include implementation and compatibility difficulties. Java version 8, released in 2014, for instance, didn't support Diffie-Hellman or DSA keys larger than 1,024 bits.

And, to this day, the DNSSEC specification for securing the Internet's domain name system limits keys to a maximum of 1,024 bits.
Poisoning the well
Solving a key's discrete logarithm problem is significant in the Diffie-Hellman arena. Why? Because a handful of primes are frequently standardized and used by a large number of applications. If the NSA or another adversary succeeded in getting one or more trapdoored primes adopted as a mainstream specification, the agency would have a way to eavesdrop on the encrypted communications of millions, possibly hundreds of millions or billions, of end users over the life of the primes.
So far, the researchers have found no evidence of trapdoored primes in widely used applications.

But that doesn't mean such primes haven't managed to slip by unnoticed. In 2008, the Internet Engineering Task Force published a series of recommended prime numbers for use in a variety of highly sensitive applications, including the transport layer security protocol protecting websites and e-mail servers, the secure shell protocol for remotely administering servers, the Internet key exchange for securing connections, and the secure/multipurpose Internet mail extensions standard for e-mail. Had the primes contained the type of trapdoor the researchers created, there would be virtually no way for outsiders to know, short of solving mathematical problems that would take centuries of processor time. Similarly, Heninger said, there's no way for the world at large to know that crucial 1,024-bit primes used by the Apache Web server aren't similarly backdoored.
In an e-mail, she wrote: We show that we are never going to be able to detect primes that have been properly trapdoored.

But we know exactly how the trapdoor works, and [we] can quantify the massive advantage it gives to the attacker.
So people should start asking pointed questions about how the opaque primes in some implementations and standards were generated. Why should the primes in RFC 5114 be trusted without proof that they have not been trapdoored? How were they generated in the first place? Why were they standardized and pretty widely implemented by VPNs without proof that they were generated with verifiable randomness? Unlike prime numbers in RSA keys, which are always supposed to be unique, certain Diffie-Hellman primes are extremely common.
If the NSA or another adversary managed to get a trapdoored prime adopted as a real or de facto standard, it would be a coup.

From then on, the adversary would have possession of the shared secret that two parties used to generate ephemeral keys during a Diffie-Hellman-encrypted conversation. Remember Dual_EC_DRBG? Such a scenario, assuming it happened, wouldn't be the first time the NSA intentionally weakened standards so it could more easily defeat cryptographic protections.
In 2007, for example, NIST backed an NSA-developed algorithm for generating random numbers.

Almost from the start, the so-called Dual_EC_DRBG was suspected of containing a deliberately designed weakness that allowed the agency to quickly derive the cryptographic keys that relied on the algorithm for crucial randomness.
In 2013, some six years later, Snowden-leaked documents all but confirmed the suspicions. RSA Security, at the time owned by the publicly traded corporation EMC, responded by warning customers to stop using Dual_EC_DRBG.

At the time, Dual_EC_DRBG was the default random number generator in RSA's BSAFE and Data Protection Manager programs. Early this year, Juniper Networks also removed the NSA-developed number generator from its NetScreen line of firewalls after researchers determined it was one of two backdoors allowing attackers to surreptitiously decrypt VPN traffic.
In contrast to 1,024-bit keys, keys with a trapdoored prime of 2,048 bits take 16 million times longer to crack, or about 6.4 × 10⁹ core-years, compared with the 400 core-years it took for the researchers to crack their trapdoored 1,024-bit prime. While even the 6.4 × 10⁹ core-year threshold is considered too low for most security experts, the researchers—from the University of Pennsylvania and France's National Institute for Research in Computer Science and Control at the University of Lorraine—said their research still underscores the importance of retiring 1,024-bit keys as soon as possible. "The discrete logarithm computation for our backdoored prime was only feasible because of the 1,024-bit size, and the most effective protection against any backdoor of this type has always been to use key sizes for which any computation is infeasible," they wrote in a research paper published last week. "NIST recommended transitioning away from 1,024-bit key sizes for DSA, RSA, and Diffie-Hellman in 2010. Unfortunately, such key sizes remain in wide use in practice."
In addition to using sizes of 2,048 bits or bigger, the researchers said, keys must also be generated in a way that holders can verify the randomness of the underlying primes. One way to do this is to generate primes where most of the bits come from what cryptographers call "a 'nothing up my sleeve' number such as pi or e." Another method is for standardized primes to include the seed values used to ensure their randomness.
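To make "verifiable randomness" concrete, here is a minimal sketch, assuming Python with hashlib from the standard library and the third-party sympy package for the primality search, of deriving prime candidate bits deterministically from a published seed so anyone can re-run the derivation and confirm nothing was hidden. It illustrates the idea only and is not the generation procedure of any particular standard.

```python
import hashlib

import sympy  # third-party dependency, used here only for its primality search

# Sketch of seed-derived ("verifiably random") prime generation: anyone holding
# the published seed can repeat the derivation and confirm nothing was hidden.
# The seed string and SHA-256 counter-mode expansion are illustrative choices.
SEED = b"example public seed, published alongside the prime"
BITS = 1024

def candidate_from_seed(seed: bytes, bits: int) -> int:
    """Expand a public seed into a bits-long odd integer with its top bit set."""
    out = b""
    counter = 0
    while len(out) * 8 < bits:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    value = int.from_bytes(out, "big") >> (len(out) * 8 - bits)
    return value | (1 << (bits - 1)) | 1

candidate = candidate_from_seed(SEED, BITS)
prime = sympy.nextprime(candidate)  # first prime strictly above the candidate
print(int(prime).bit_length(), "bit prime derived from the published seed")
```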
Sadly, such verifications are missing from a wide range of regularly used 1,024-bit primes. While the Federal Information Processing Standards imposed on US government agencies and contractors recommends a seed be published along with the primes they generated, the recommendation is marked as optional. The only widely used primes the researchers have seen come with such assurances are those generated using the Oakley key determination protocol, the negotiated Finite Field Diffie-Hellman Ephemeral Parameters for TLS version 1.3, and the Java Development Kit. Cracking crypto keys most often involves the use of what's known as the number field sieve algorithm to solve, depending on the key type, either its discrete logarithm or factorization problem.

To date, the biggest prime known to have had its discrete logarithm problem solved was a 768-bit prime, a record set last year.

The feat took about 5,000 core years.

By contrast, solving the discrete logarithm problem for the researcher's 1,024-bit key with the trapdoored prime required about a tenth of the computation. "More distressing" Since the early 1990s, researchers have known that certain composite integers are especially susceptible to being factored by NFS.

They also know that primes with certain properties allow for easier computation of discrete logarithms.

This special set of primes can be broken much more quickly than regular primes using NFS.

For some 25 years, researchers believed the trapdoored primes weren't a threat because they were easy to spot.

The new research provided novel insights into the special number field sieve that proved these assumptions wrong. Heninger wrote: The condition for being able to use the faster form of the algorithm (the "special" in the special number field sieve) is that the prime has a particular property.

For some primes that's easy to see, for example if a prime is very close to a power of 2. We found some implementations using primes like this, which are clearly vulnerable. We did discrete log computations for a couple of them, described in Section 6.2 of the paper. But there are also primes for which this is impossible to detect. (Or, more precisely, would be as much work to detect as it is to just do the discrete log computation the hard way.) This is more distressing, since there's no way for any user to tell that a prime someone gives them has this special property or not, since it just looks like a large prime. We discuss in the paper how to construct primes that have this special property but the property is undetectable unless you know the trapdoor secret. It's possible to give assurance that a prime does not contain a trapdoor like this. One way is to generate primes where most of the bits come from a "nothing up my sleeve" number like e or pi.
Some standards do this.

Another way is to give the seeds used for a verifiable random generation algorithm.
With the current batch of existing 1,024-bit primes already well past their, well, prime, the time has come to retire them to make way for 2,048-bit or even 4,096-bit replacements.

Those 1,024-bit primes that can't be verified as truly random should be viewed with special suspicion and banished from accepted standards as soon as possible.

CloudFlare Looks to Jump-Start TLS 1.3 Adoption by Supporting Draft

CloudFlare aims to jump-start adoption of the next generation of internet encryption by supporting a draft standard. The Transport Layer Security 1.3 specification is not yet a finalized Internet Engineering Task Force (IETF) official standard, but that's not stopping content delivery network provider CloudFlare from implementing it.

CloudFlare announced on Sept. 20 that it is now supporting several advanced encryption technologies on its platform, including TLS 1.3, Opportunistic Encryption and HTTPS Rewrites. TLS 1.3 is the latest incarnation of the standard for encrypting data in motion across the internet that originally was known as Secure Sockets Layer (SSL).

Following SSL 3.0, which is no longer considered to be safe, TLS became its successor in 1999 with the TLS 1.0 specification.

The most recent formal version of TLS is the 1.2 specification that was defined in 2008.
"CloudFlare supports the latest draft of the TLS 1.3 specification, which is very close to the final version of the protocol," Nick Sullivan, head of cryptography at CloudFlare, told eWEEK. "We expect this draft to be standardized soon."
Both the Mozilla Firefox and Google Chrome web browsers support the latest draft of TLS 1.3 as well. Sullivan noted that anyone using Firefox or Chrome with TLS 1.3 will automatically connect to CloudFlare sites with TLS 1.3. "With about 4 million CloudFlare customers today, this will encourage browser vendors to enable TLS 1.3, and we hope that this is a call for action to make that happen," he said.
Among the promises of TLS 1.3 is that it can enable encrypted traffic to be as fast as nonencrypted traffic. Historically, one of the most cited reasons why organizations have not deployed SSL/TLS is because of the performance impact that it has on traffic.
"TLS 1.3 decreases connection time compared to previous versions of TLS, which has remained the same since the beginning of SSL," Sullivan said. In addition, TLS 1.3 builds on top of the next-generation HTTP/2 web standard for even faster page loads.
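A quick way to see which protocol version a given site negotiates is to open a connection and ask the TLS library, as in this minimal Python sketch; whether TLS 1.3 is offered depends on the local OpenSSL build and on the server, and the host below is only an example.

```python
import socket
import ssl

# Minimal sketch: report the TLS version negotiated with a server.
# A Python/OpenSSL build without TLS 1.3 support will top out at 'TLSv1.2'.
hostname = "www.cloudflare.com"  # example host; any HTTPS site works

context = ssl.create_default_context()
with socket.create_connection((hostname, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("negotiated protocol:", tls.version())  # e.g. 'TLSv1.2' or 'TLSv1.3'
        print("cipher suite:", tls.cipher()[0])
```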

The HTTP/2 standard was declared by the IETF to be final on Feb. 18, 2015, providing improved web traffic prioritization, control and security capabilities.
Sullivan added that encrypted sites are already faster than unencrypted sites today as a result of CloudFlare's launching support for HTTP/2 back in 2015. While support for TLS 1.3 is helpful for encouraging the use of encryption, CloudFlare is also taking additional measures, including support for HTTPS Rewrites and Opportunistic Encryption.
Sullivan said the HTTPS Rewrite technology was developed by CloudFlare security experts in collaboration with technologists from the Electronic Frontier Foundation (EFF) who manage the HTTPS Everywhere project. "The main difference between the two is that with HTTPS Rewrites we rewrite links on your page, and with Opportunistic Encryption we tell the browser that the site is available over an encrypted connection via an HTTP header," Sullivan explained. "Rewriting links helps fix mixed content on all browsers, while Opportunistic Encryption only works with Firefox."
The reason HTTPS Rewrites and Opportunistic Encryption are needed is that many websites will still mix non-HTTPS content, including images, links and videos, with HTTPS pages.
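As a conceptual illustration of what link rewriting involves (not CloudFlare's actual implementation, which only rewrites resources known to be reachable over HTTPS), the sketch below upgrades http:// references in an HTML fragment to https://.

```python
import re

# Conceptual sketch of HTTPS link rewriting: upgrade http:// references in
# src/href attributes to https://. Real deployments (CloudFlare's Automatic
# HTTPS Rewrites, EFF's HTTPS Everywhere rules) rewrite only resources known
# to exist over HTTPS; this toy version rewrites unconditionally.
ATTR_RE = re.compile(r'(?P<attr>\b(?:src|href)=["\'])http://', re.IGNORECASE)

def rewrite_to_https(html: str) -> str:
    """Return the HTML with http:// src/href attribute URLs upgraded to https://."""
    return ATTR_RE.sub(lambda m: m.group("attr") + "https://", html)

page = '<img src="http://example.com/logo.png"> <a href="http://example.com/">home</a>'
print(rewrite_to_https(page))
```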
Sullivan said that CloudFlare's Automatic HTTPS Rewrites solves the problem of mixed content errors, which occur when content is loaded using unencrypted HTTP on an HTTPS site. "These errors result in a warning message or the removal of the green lock icon in the address bar," Sullivan said. "With Automatic HTTPS Rewrites, images or content that use HTTP will automatically be secured using HTTPS whenever possible."
Overall, CloudFlare is working to make encryption as simple and as accessible as possible, he said. "We believe online services should be available using encryption, and that encryption should be enabled by default," Sullivan said. "These three features make it easier and more appealing than ever for customers to make encryption their default. However, the choice is ultimately up to our customers.

That's why we created these features—to make the decision to encrypt a no-brainer."
Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com.

Follow him on Twitter @TechJournalist.