
Tag: Waterloo

Atomic clocks and solid walls: New tools in the search for...

As searches come up empty, some are thinking of new ways to look for dark matter.

IBM Watson steps into real-world cybersecurity

Watson is done with school -- for now -- and is ready to try out what it has learned in the real world. IBM has launched the Watson for Cyber Security beta program to encourage companies to include Watson in their current security environments.

Wow. What a shock. The FBI will get its bonus hacking...

Rule 41 makes life easier for Feds, cops to target Tor, VPN users, and malware victims

Three last-ditch legislative efforts to block the changes to Rule 41 of the Federal Rules of Criminal Procedure have failed, and from tomorrow the Feds will find hacking your PC a lot less of a hassle.

The rule change was introduced by the Supreme Court in April. It will allow the FBI and police to apply for a warrant from a nearby US judge to hack any suspect who is using Tor, a VPN, or some other anonymizing software to hide their whereabouts, in order to find the target's true location.

Normally, if agents want to hack a suspect's PC, they have to ask a judge for a warrant in the jurisdiction where the machine is located. This is tricky if the location is obscured by technology. With the changes to Rule 41 in place, investigators can get a warrant from any handy judge to deploy malware to find out where the suspect is based – which could be anywhere in America or the world.

Also, when agents are investigating a crime that spans five or more judicial districts in the US, the new Rule 41 will allow them to go to just one judge for a warrant, rather than all the courts in all the involved jurisdictions. And it allows the Feds, with a search warrant, to poke around in people's malware-infected computers to, in the words of the US Department of Justice, "liberate" devices.

This extension of law enforcement hacking powers has occurred with no Congressional debate or vote, simply by an administrative change. But some lawmakers have been fighting to stop the change – today was their Waterloo, and sadly they got Napoleon's role. Shortly after the April decision, Senators Ron Wyden (D-OR) and Rand Paul (R-KY) introduced the Stopping Mass Hacking (SMH) Act, but it remained stalled in Congress. Wyden made a last plea for the Senate to act on Wednesday, but it was rejected.
"By sitting here and doing nothing, the Senate has given consent to this expansion of government hacking and surveillance," Wyden said. "Law-abiding Americans are going to ask 'what were you guys thinking?' when the FBI starts hacking victims of a botnet hack. Or when a mass hack goes awry and breaks their device, or an entire hospital system, and puts lives at risk."

Next it was the turn of Senator Chris Coons (D-DE) to ask for unanimous consent to pass his Review the Rule Act, which would have extended the deadline for the rule change by six months. This was denied. "These changes to Rule 41 will go into effect tomorrow without any hearing or markup to consider and evaluate the impact of the changes," he said. "While the proposed changes are not necessarily bad or good, they are serious, and they present significant privacy concerns that warrant careful consideration and debate."

Lastly, Wyden tried again, asking Congress to sign off on his Stalling Mass Damaging Hacking Act, which would have extended the deadline by just three months. Republican leaders refused to support the bill, and so as of tomorrow the rules come into effect. ®

SHA3-256 hash is quantum-proof, should last BEELLIONS of years, say boffins

Ye Olde asymmetric encryption looks like it can beat the coming of the quantum cats

While it's reasonable to assume that a world with real quantum computers will ruin traditional asymmetric encryption, perhaps surprisingly, hash functions might survive. That's the conclusion of a group of boffins led by Matthew Amy of Canada's University of Waterloo, in a paper published with the International Association for Cryptologic Research.

The research – which included contributions from the Perimeter Institute for Theoretical Physics and the Canadian Institute for Advanced Research – looked at attacks on SHA-2 and SHA-3 using Grover's algorithm (a quantum algorithm for searching "black boxes"). They reckon both SHA-256 and SHA3-256 need around 2^166 "logical qubit cycles" to crack.

Perhaps counter-intuitively, the paper says the problem isn't in the quantum computers, but in the classical processors needed to manage them. The paper notes: "The main difficulty is that the coherence time of physical qubits is finite. Noise in the physical system will eventually corrupt the state of any long computation. Preserving the state of a logical qubit is an active process that requires periodic evaluation of an error detection and correction routine."

If the quantum correction is handled by ASICs running at a few million hashes per second (and if Vulture South's spreadsheet is right), Grover's algorithm would need about 10^32 years to crack SHA-256 or SHA3-256. That's considerably longer than the mere 14 billion years the universe has existed, although less than the estimated 10^100 years until the heat death of the universe.

Even if you didn't care about the circuit footprint and used a billion-hash-per-second Bitcoin-mining ASIC, the calculation still seems to be in the order of 10^29 years. ®
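The back-of-envelope arithmetic behind figures like these can be sketched in a few lines. The 2^166 cycle count comes from the paper; the processing rate and the single-machine assumption are illustrative, not the paper's exact model, so the result only shows the order of magnitude involved.

```python
# Rough estimate of wall-clock time for a Grover's-algorithm preimage
# attack, given a total cycle budget and an aggregate processing rate.
# The cycle count is the paper's figure; the rates are assumptions.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def years_to_crack(logical_cycles: float, cycles_per_second: float,
                   parallel_units: int = 1) -> float:
    """Years to exhaust the cycle budget at the given aggregate rate."""
    return logical_cycles / (cycles_per_second * parallel_units) / SECONDS_PER_YEAR

cycles = 2.0 ** 166   # paper's estimate for SHA-256 / SHA3-256
rate = 5e6            # "a few million hashes per second" per ASIC (assumption)

print(f"{years_to_crack(cycles, rate):.1e} years")
```

Whatever reasonable rate you plug in, the answer dwarfs the roughly 1.4 x 10^10 years the universe has existed.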

Crypto needs more transparency, researchers warn

Publish primes with seeds, so we know there are no backdoors

Researchers at the French Institute for Research in Computer Science and Automation (INRIA) and the University of Pennsylvania have called for security standards-setters to publish the seeds for the prime numbers on which their standards rely.

The boffins also demonstrated again that 1,024-bit primes can no longer be considered secure, by publishing an attack using "special number field sieve" (SNFS) mathematics to show that an attacker could create a prime that looks secure, but isn't. Since the research is bound to get conspiracists over-excited, it's worth noting: their paper doesn't claim that any of the cryptographic primes it mentions have been back-doored, only that they can no longer be considered secure. "There are opaque, standardised 1024-bit and 2048-bit primes in wide use today that cannot be properly verified", the paper states.

Joshua Fried and Nadia Heninger (University of Pennsylvania) worked with Pierrick Gaudry and Emmanuel Thomé (INRIA at the University of Lorraine) on the paper. They call for 2,048-bit keys to be based on "standardised primes" using published seeds, because too many crypto schemes don't provide any way to verify that the seeds aren't somehow back-doored.

Examples of re-used primes in the paper include:

- Many TLS implementations use some form of default, and as a result, "in May 2015, 56 per cent of HTTPS hosts selected one of the 10 most common 1024-bit groups when negotiating ephemeral Diffie-Hellman key exchange";
- In IPSec, "66 per cent of IKE responder hosts preferred the 1024-bit Oakley Group 2 over other choices" for their Diffie-Hellman exchange; and
- OpenSSH implementations favour "a pre-generated list that is generally shipped with the software package".
If any of the "hard-coded" primes were maliciously produced – something that's happened before, for those who remember RSA's NSA-funded Dual EC Deterministic Random Bit Generator – it would be hard to spot by looking at the numbers, but factorisation would be feasible.

It might not necessarily be easy, however: the paper describing the SNFS computation notes it needed "a little over two months of calendar time on an academic cluster" (using between 500 and 3,000 cores in different phases of the operation – a total of around 400 core-years). Their experiments ran on France's Grid'5000 testbed, the University of Pennsylvania's Cisco UCS cluster, the University of Waterloo's CrySP RIPPLE facility, and Technische Universiteit Eindhoven's Saber cluster.

Earlier this year, INRIA researchers turned up the Sweet32 birthday attack against old Blowfish and Triple DES ciphers, and in January the group warned the world that the zombie MD5 and SHA1 hash protocols live on in too many TLS, IKE and SSH implementations. ®
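The verifiable-seed idea the researchers call for can be sketched simply: derive prime candidates by hashing a published seed together with a counter, so anyone holding the seed can re-run the derivation and confirm the prime wasn't cherry-picked. This is an illustrative sketch at toy 256-bit size, not the paper's construction; the seed string and derivation details are made up for the example.

```python
import hashlib

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin with fixed bases, so the check itself is reproducible."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in range(2, 2 + rounds):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def seeded_prime(seed: bytes) -> int:
    """Derive a 256-bit prime from SHA-256(seed || counter).

    Anyone holding the published seed can repeat this loop and verify
    that the standardised prime really came from it."""
    counter = 0
    while True:
        h = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        candidate = int.from_bytes(h, "big") | (1 << 255) | 1  # force size and oddness
        if is_probable_prime(candidate):
            return candidate
        counter += 1
```

A real standard would use 2,048-bit primes and a precisely specified hash expansion; the point is only that publishing the seed makes the choice auditable.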

Why quantum computing has the cybersecurity world white-knuckled

As quantum computers inch closer to reality, experts are sweating over their potential to render many of today's cybersecurity technologies useless. Earlier this year the U.S. National Institute of Standards and Technology issued a call for help on the matter, and this week the Global Risk Institute added its voice to the mix.

Because of quantum computing, there's a one-in-seven chance that fundamental public-key cryptography tools used today will be broken by 2026, warned Michele Mosca, co-founder of the University of Waterloo's Institute for Quantum Computing and special advisor on cybersecurity to the Global Risk Institute. By 2031, that chance jumps to 50 percent, Mosca wrote in a report published Monday. "Although the quantum attacks are not happening yet, critical decisions need to be taken today in order to be able to respond to these threats in the future," he added.

Such threats stem from the fact that quantum computers work in a fundamentally different way than traditional computers do. In traditional computing, numbers are represented by either 0s or 1s, but quantum computing relies on atomic-scale units called quantum bits, or "qubits," that can be simultaneously 0 and 1 through a state known as superposition. Far greater performance and efficiency are among the benefits, but there's also a downside. "One unintended consequence of quantum computation is breaking some of the cryptographic tools currently underpinning cybersecurity," Mosca wrote.

Encryption, for example, often relies on the challenge of factoring large numbers, but researchers recently demonstrated what they said is the first five-atom quantum computer capable of cracking such encryption schemes. "When the cryptographic foundations upon which a cyber system is built are fundamentally broken, unless a failover replacement (which generally takes years to develop) is in place, the system will crumble with no quick fixes," Mosca wrote.
"Right now, our cyber immune system is not ready for the quantum threat. There is a pending lethal attack, and the clock is ticking to design and deploy the cure before the threat is realized."

In the short term, work needs to be done to design systems that are "cryptographically agile," Mosca said, and can quickly swap one cryptographic tool for another. In the longer run, we'll need "quantum-safe" cryptography tools, he said, including protocols that can run on conventional technologies and resist quantum attacks. Part of NIST's effort will be a competition in which members of the public will devise and test promising new cryptographic methods. Meanwhile, private security firms are working on the problem as well. KryptAll, for example, recently launched an independent effort of its own, with the goal of having a product available by 2021.
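Cryptographic agility of the kind Mosca describes boils down to selecting algorithms by name from a registry, so a deployment can switch to a replacement (say, a future quantum-safe hash) through configuration rather than a code rewrite. A minimal sketch, using Python's standard hashlib as the registry; the approved-set policy is an invented example, not any standard's:

```python
import hashlib

# Registry of currently approved algorithm names. Swapping an entry
# here is a configuration change, not a code rewrite.
APPROVED = {"sha256", "sha3_256", "sha512"}

def agile_digest(data: bytes, algorithm: str = "sha3_256") -> bytes:
    """Hash `data` with an algorithm chosen by name at runtime."""
    if algorithm not in APPROVED:
        raise ValueError(f"algorithm {algorithm!r} not in approved set")
    return hashlib.new(algorithm, data).digest()
```

Code written against `agile_digest` never hard-codes an algorithm, which is exactly the failover property Mosca says takes years to retrofit.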

BlackBerry enters a new era, files 105-page patent lawsuit against Avaya

BlackBerry has filed a patent lawsuit (PDF) against internet telephony firm Avaya. The dispute marks a turning point for BlackBerry, which pushed into the Android market last year but has been struggling. In making its case that Avaya should pay royalties, BlackBerry's focus is squarely on its rear-view mirror.

The firm argues that it should be paid for its history of innovation going back nearly 20 years. "BlackBerry revolutionized the mobile industry," the company's lawyers wrote in their complaint. "BlackBerry... has invented a broad array of new technologies that cover everything from enhanced security and cryptographic techniques, to mobile device user interfaces, to communication servers, and many other areas."

Out of a vast portfolio, BlackBerry claims Avaya infringes eight US patents:

- Nos. 9,143,801 and 8,964,849, relating to "significance maps" for coding video data;
- No. 8,116,739, describing methods of displaying messages;
- No. 8,886,212, describing tracking the location of mobile devices;
- No. 8,688,439, relating to speech decoding and compression;
- No. 7,440,561, describing integrating wireless phones into a PBX network;
- No. 8,554,218, describing call routing methods; and
- No. 7,372,961, a method of generating a cryptographic public key.

The patents have various original filing dates, ranging from 2011 back to 1998. Accused products include Avaya's video conferencing systems, Avaya Communicator for iPad, a product that connects mobile users to IP Office systems, and various IP desk phones.

The '961 cryptography patent is allegedly infringed by a whole series of products that "include OpenSSL and OpenSSL elliptic curve cryptography," including the Avaya CMS and conferencing systems. The BlackBerry complaint states that the company notified Avaya of its alleged infringement of those specific patents in a letter dated December 17, 2015. BlackBerry, which is based in Waterloo, Ontario, filed the complaint in the Northern District of Texas, where Avaya does business and maintains a two-story office. To prosecute the suit, BlackBerry has hired Quinn Emanuel, an experienced California-based law firm that's no stranger to high-profile tech cases.

The firm defended Samsung in the high-profile Apple v. Samsung case and has taken on various cases for Google. The lawsuit was filed last week and first reported by the IAM Blog on Tuesday. This won't be the first time a large networking company pays BlackBerry for its patents.

A patent cross-license that BlackBerry executed last year involved Cisco paying a "license fee," although the amount was confidential.
In May, BlackBerry CEO John Chen told investors on an earnings call that he was in "patent licensing mode," eager to monetize his company's 38,000 patents.

There are limits to 2FA and it can be near-crippling to...

A video demonstration of the vulnerability, using a temporary password. Kapil Haresh

This piece first appeared on Medium and is republished here with the permission of the author. It reveals a limitation in the way Apple approaches 2FA, which is most likely a deliberate decision. Apple engineers probably recognize that someone who loses their phone won't be able to wipe data if 2FA is enforced, and this story is a good reminder of the pitfalls.

As a graduate student studying cryptography, security and privacy (CrySP), software engineering and human-computer interaction, I've learned a thing or two about security. Yet a couple of days back, I watched my entire digital life get violated and nearly wiped off the face of the Earth. That sounds like a bit of an exaggeration, but honestly it pretty much felt like that. Here's the timeline of a cyber-attack I faced on Sunday, July 24, 2016 (all times are Eastern):

That's a pretty incidence matrix. Kapil Haresh

3:36pm—I was scribbling out an incidence matrix for a perfect hash family table on the whiteboard, explaining to my friends how the incidence matrix should be built. Ironically, this was a cryptography assignment for multicast encryption. Everything seemed fine until a rather odd sound started playing on my iPhone. I was pretty sure it was on silent, but I was quite surprised to see that it said "Find My iPhone Alert" on the lock screen. That was odd.

3:37pm—My iPhone's lock screen changes. The screen dims, with the following message: "Hey why did you lock my iPhone haha. Call me at (123) 456–7890." This was when I realized what exactly was happening. My Apple ID had been compromised, and the dimwit on the other end was probably trying to wipe all my Apple devices. Clearly he/she wasn't very smart (to my benefit), and the adversary had decided to play the sound and kick the iPhone into Lost Mode before attempting to run the remote erase.
When you throw a device into Lost Mode, it immediately attempts to get the physical location of the device and shows it to the adversary. Sound familiar? Of course, this was exactly what happened in August of 2012 with Mat Honan's massive hack. In his case it happened in a slightly different way, but the end goal was the same—wipe the devices and destroy the data.

3:36pm, first Find My iPhone alert. Apple e-mails you back every time you make a change with Find My iPhone. Kapil Haresh

3:37pm, second Find My iPhone alert. Kapil Haresh

3:37pm, Lost Mode enabled. Kapil Haresh

3:38pm—Naturally, I go into lockdown mode and immediately take all my devices offline to stop whatever else the adversary was planning to do. When I knew I was being targeted in the same way as the Mat Honan attack, I expected they would soon try to wipe my devices. True enough, I was able to confirm that they indeed attempted to wipe my iPhone and my Mac as well.

3:37pm, adversary now knows where I am right at that moment. Cool. Kapil Haresh

3:50pm, I get back into iCloud and notice the pending erase request. Kapil Haresh

Because I managed to take all my devices offline, I was able to make sure none of them got their erase requests from the server. But this could have been worse, way worse. After the Honan attack back in 2012, I decided to get two-factor authentication (2FA) turned on for my Apple ID to act as a safeguard. 2FA was my friend here to some extent, as in the case of iCloud: 2FA blocks any user attempting to log in to your account, not allowing them to go any further than logging in and accessing Find My iPhone, Apple Pay, and Apple Watch settings — I don't have Apple Pay or an Apple Watch for now, so I am not sure as to the extent of access for those two. But this form of 2FA doesn't protect Find My iPhone. This was kind of understood — if you lose your iPhone, you can't get the second factor of authentication to get in to lock your iPhone.
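A second factor of the kind described here is typically a short-lived numeric code. Apple delivers its codes through its own push mechanism, but the widely used TOTP standard (RFC 6238) shows the general mechanics; this sketch is illustrative and is not Apple's implementation:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter,
    dynamically truncated (per RFC 4226) to a short numeric code."""
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble picks the window
    number = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)
```

Because the code changes every 30 seconds and depends on a shared secret, a stolen password alone is not enough to get past it.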
One of the benefits of having 2FA was that things like my Mail, Contacts, Calendar and other documents were locked away. Without my 2FA code, which comes either through my trusted device (via the Find My iPhone service) or via a text message, there wasn't any way to get at those unless the trusted device or the device that received the text message was compromised as well. Additionally, there was no way for the adversary to reset the password without getting the second authentication code.

2FA via iCloud Kapil Haresh

I was able to lock my account with a new password and got all the erase requests cancelled. But herein lie the problems which, if addressed, could have prevented this attack or at least limited the potential damage. Put simply: the lack of 2FA for Find My iPhone and the lack of pattern monitoring on Apple's servers were the two main reasons this attack took place.

Legitimate login by me, on my (then new) MacBook Kapil Haresh

The adversary's login — I did get an e-mail detailing the login attempt Kapil Haresh

Pattern monitoring

One of the things I did notice was that the login notification e-mails generally originate from the country you log in from, especially in this day and age when Apple has a local division in most large, if not all, countries. I noticed this when checking the older login notification e-mails I received while living in Australia. In that case all of my notifications were addressed from Apple Pty Ltd, while my logins from Canada were addressed from Apple Canada Inc. In this case, the adversary's login attempt resulted in a login e-mail from Ireland instead, which led me to suspect they clearly were not in North America, at least. Of course this could have been spoofed with the help of a VPN, but the location change could have been detected as it would be an outlier from my regular logins from Canada.
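The pattern monitoring argued for here amounts to flagging logins that are outliers against the account's own history. A toy sketch of that idea, with made-up record fields (a real system would weigh many more signals, such as device, OS, and travel velocity, and score them rather than hard-flag):

```python
def is_anomalous_login(login: dict, history: list) -> bool:
    """Flag a login from a country never before seen on this account.

    `login` and each entry of `history` are hypothetical records like
    {"country": "CA"}; the field names are invented for illustration."""
    seen_countries = {past["country"] for past in history}
    return login["country"] not in seen_countries

# Example history: regular logins from Canada, older ones from Australia.
history = [{"country": "CA"}, {"country": "CA"}, {"country": "AU"}]
print(is_anomalous_login({"country": "IE"}, history))  # Ireland: outlier
```

An outlier would then trigger a confirmation step through a secondary channel rather than a silent e-mail after the fact.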
The other, clearer differentiation of the pattern was the part where the login was done on a Windows computer instead of a Mac. In my case, this would have been quite an outlier, as I normally use a Mac and can probably count the number of times I have logged in using a Windows computer. Ideally, at this point, it would have been reasonable for Apple to check if this was a legitimate login — for example, using one of the secondary accounts nominated in the Apple ID. Microsoft actually does this if you attempt to use your Microsoft account on a new device or a device that isn't normally used; the company locks the account and gets you to confirm the login through your secondary accounts.

E-mail notification when I tried to log in after the attack, once I had it contained Kapil Haresh

Microsoft got this bit right. I got this when I signed in on a new Mac. Kapil Haresh

Lack of 2FA for Find My iPhone

When you sign up for 2FA, Apple disables the secret questions/answers used to reset the password — you need the recovery key instead to regain access if you forget the password. I can see why Apple decided against using the same 2FA authentication for Find My iPhone. Ideally, you'd only use Find My iPhone when you lose your device, hence you'd not be able to access your text and on-device authentication. But for there to be no 2FA for Find My iPhone at all doesn't quite add up.

I can imagine how this could be fixed. Instead of having a one-time code for Find My iPhone, it might be better to have a second layer of authentication in the form of a secret question/answer when accessing Find My iPhone if 2FA was on. The legitimate user would know the answer to the question, just like in the case of a forgotten password. By nominating a number of question/answer pairs, it can be randomized, too.
If such a thing existed, the adversary in this case would not have been able to go further than looking up the location, and ideally he/she wouldn't have been able to play the alert sound or even conduct the remote erase.

Ramona Leitao

What happens next?

To be fair, I have not had bad experiences with Apple's security in the last 10 years of using their products, hence I would say I'm still pretty confident using them. At the same time, the viability of such an attack is quite scary, considering Apple is moving (like many others) to a cloud-focused future. My experience in this case wasn't as bad as it could have been. I knew what to do and how to contain, and subsequently neutralize, the attack, as I know how Find My iPhone and iCloud work. But to the general population—a large proportion of Apple's user base—this would have been a very different story.

I've never revealed this password, and the password itself is pretty random, with capital letters, small letters and numbers. I've also never accidentally signed into a dodgy site with it. I'm going on the basis that the adversary successfully guessed the password somehow, but the important thing here is to reduce the damage should a password be obtained by the adversary. I believe this is a genuine concern, and I think Apple should address it as soon as possible. I can't imagine having my iPhone randomly wipe itself while I'm on the road with CarPlay giving me driving directions or HomeKit controlling my home (especially considering that in the next couple of years, we're likely to see stronger CarPlay and HomeKit integration).

To the hackers — please take grammar classes. That was quite a pathetic Lost Mode message. Not as bad as the Oleg Pliss attack message in 2014, though interestingly, that attack could have been prevented as well if there was a second factor of authentication for Lost Mode.
Back then, just like today, the 2FA that everyone suggested turning on doesn't protect Find My iPhone.

Kapil Haresh (LinkedIn) is a current CS grad student and TA at the University of Waterloo, where he does cryptography, security and privacy (CrySP), software engineering and human-computer interaction. As a CS undergrad at the University of Wollongong, he specialized in digital security and software engineering, making it into the limited-admission Dean's Scholar program. His cryptography and network security subjects had pretty decent (90 percent) averages on their own, but even he's not immune to 2FA's shortcomings. (Luckily, as you've read, he was able to minimize the damage.) He can be contacted at khvignes (at) uwaterloo (dot) ca.

Listing image by Ramona Leitao