A federal appeals court has ruled against a criminal defendant who challenged the warrantless use of a stingray to locate him. The Wednesday decision marks the first time that questions about the proper use of stingrays, also known as cell-site simulators, have reached the federal appellate level. In United States v. Patrick, suspect Damian Patrick had an outstanding warrant for a probation violation and was found via a stingray in Milwaukee in 2013. Stingrays determine a phone’s location by spoofing a cell tower.
In some cases, they can intercept calls and text messages. Once deployed, the devices intercept data from a target phone along with information from other phones within the vicinity. In recent years, the use of stingrays has come under increased public scrutiny. Last year, the Department of Homeland Security and the Department of Justice, which oversees the FBI, said that they would have new policies requiring a warrant. But, as the 7th US Circuit Court of Appeals ruled on Wednesday, the fact that law enforcement didn’t have a warrant to use the stingray against Patrick is immaterial. As the court concluded: A person wanted on probable cause (and an arrest warrant) who is taken into custody in a public place, where he had no legitimate expectation of privacy, cannot complain about how the police learned his location. … Probable cause to arrest Patrick predated the effort to locate him.

From his perspective, it is all the same whether a paid informant, a jilted lover, police with binoculars, a bartender, a member of a rival gang, a spy trailing his car after it left his driveway, the phone company’s cell towers, or a device pretending to be a cell tower, provided location information.

A fugitive cannot be picky about how he is run to ground.
So it would be inappropriate to use the exclusionary rule, even if the police should have told the judge that they planned to use a cell-site simulator to execute the location warrant.

Deus ex machina

As Ars reported previously, the case dates back to October 2013, when Damian Patrick was sitting in the passenger seat of a rented white Chevy Malibu parked behind a house in northern Milwaukee. Unbeknownst to him, he was being watched not only by several local police officers but by the FBI as well. According to a police report, two Milwaukee Police Department (MPD) officers blocked the Chevy with their own car and conducted a traffic stop.

The officers ordered Patrick and the driver of the car, Terrell Newman, to show their hands.

The uniformed cops then opened the car and ordered the two men out.

As Patrick exited the car, Officer Thomas Multhauf noticed a semi-automatic Smith & Wesson on the floor where Patrick was sitting. The officers checked their records and found that Patrick had an outstanding arrest warrant for violating his probation. He was promptly taken into custody. How did the Milwaukee Police Department and the FBI magically descend upon Patrick’s location? The arrest reports are vague, making references only to an "unknown source" and "prior knowledge." The report says, "We [police] obtained information" to the effect that Patrick, wanted on a felony probation violation, happened to be in that parking spot. As the case moved forward, defense attorneys tried to challenge what they argued was an unlawful seizure of Patrick’s gun.

Eventually, Patrick conditionally pleaded guilty to the single count but reserved the right to appeal.

Questions abound

In a lengthy dissent, Chief Judge Diane Wood lambasted the secrecy surrounding the device. “We know very little about the device, thanks mostly to the government’s refusal to divulge any information about it,” she wrote. “Until recently, the government has gone so far as to dismiss cases and withdraw evidence rather than reveal that the technology was used.” As she concluded: It is time for the stingray to come out of the shadows, so that its use can be subject to the same kind of scrutiny as other mechanisms, such as thermal imaging devices, GPS trackers, pen registers, beepers, and the like.
Its capabilities go far beyond any of those, and cases such as Riley indicate that the Supreme Court might take a dim view of indiscriminate use of something that can read texts and emails, listen to conversations, and perhaps intercept other application data housed not just on the target’s phone, but also on those of countless innocent third parties. Patrick’s attorney, Chris Donovan, did not immediately respond to Ars’ request for comment.

Donovan could ask the full 7th Circuit to rehear the case en banc, or he could appeal to the Supreme Court.
The Linux kernel today faces an unprecedented safety crisis. Much as when Ralph Nader famously told the American public in 1965 that their cars were "unsafe at any speed," numerous security developers told the 2016 Linux Security Summit in Toronto that the operating system needs a total rethink to keep it fit for purpose. No longer the niche concern of years past, Linux today underpins the server farms that run the cloud and more than a billion Android phones, not to mention the coming tsunami of grossly insecure devices that will be hitched to the Internet of Things.

Today's world runs on Linux, and the security of its kernel is a single point of failure that will affect the safety and well-being of almost every human being on the planet in one way or another. "Cars were designed to run but not to fail," Kees Cook, head of the Linux Kernel Self Protection Project, and a Google employee working on the future of IoT security, said at the summit. "Very comfortable while you're going down the road, but as soon as you crashed, everybody died."
[Image: A crash test between a 1959 Chevrolet Bel Air and a 2009 Chevrolet Malibu.]

The Linux kernel is more like the car on the right. "That's not acceptable anymore," he added, "and in a similar fashion the Linux kernel needs to deal with attacks in a manner where it actually is expecting them and actually handles gracefully in some fashion the fact that it's being attacked." Jeffrey Vander Stoep, a software engineer on the Android security team at Google, echoed Cook’s message: "This kind of hearkens back to last year's keynote speech when [Konstantin “Kai” Ryabitsev] compared computer safety with the car industry years ago. We need more and we need better safety features, and with it in mind this may cause inconvenience for developers, we still need them." For his part, Kai, a senior systems administrator at the Linux Foundation, who was unable to attend this year’s summit, is pleased that this car safety analogy is finding traction. “We approach security today as though we are still living in the world of the 1990s and 2000s, computers in a data centre managed by knowledgeable people,” he told Ars.

But, he pointed out, most computers today—laptops, smartphones, IoT devices—are not managed and secured by IT professionals. “For the cases where computers are not well protected in the hands of end-users who are not IT professionals, and who do not have any recourse to IT professional help, we need to design systems that proactively protect them,” he said. “We have to change the way we approach this dramatically, the same way the vehicle manufacturers in the 1970s did.” This is, however, easier said than done.

Killing bug classes, not political dissidents

The clear consensus at the Linux Security Summit was that squashing bugs one at a time is a losing strategy. Many deployed devices running Linux will never receive security updates, and patching a security hole in the upstream kernel does nothing to ensure the safety of an IoT device that could be in use for a decade and may forever be ignored by the manufacturer. Even devices that do receive patches may see long gaps between public bug discovery and a patch being applied.

Cook gave the example of an Internet-connected door lock that an end-user might well use for 15 years or more.
Such devices are likely to receive security patches sporadically, if at all. Worse, the lifetime of a critical security bug in the Linux kernel, from its introduction in a code commit to public discovery and a patch being issued, averages three years or more.

According to Cook’s analysis, critical- and high-severity security bugs in the upstream kernel have lifespans of 3.3 to 6.4 years between commit and discovery.

[Chart: red = critical-severity bugs; orange = high; blue = medium; black = low. The X axis is the total number of security bugs; the Y axis shows the kernel versions in which each bug was present, so the length of a bar shows how long that bug lurked in the kernel. Credit: Kees Cook]

"The question I get a lot is, 'Well, isn't this just theoretical? No one's actually finding these bugs to begin with, so there's no window of opportunity.' And that's demonstrably false," he said. Nation-state attackers are watching every commit, looking for an opening, he said, and "people are finding these bugs sometimes immediately when they're introduced." He went on: "This seems to be a big thing that people for some reason just can't accept mentally. You know, like 'well, I have no open bugs in my bug tracker, everything's fine.'" How, then, can the kernel proactively defend itself against bugs that have not yet been reported, or even introduced? The answer, said Cook, could be a matter of life and death for some people: "If you're a dissident, an activist somewhere in the world, and you're getting spied on, your life is literally at risk because of these bugs.

As we move forward, we need devices that can protect themselves."

[Chart: a closer look at the lifespan of critical- and high-severity security bugs in the upstream Linux kernel. The X axis is the number of security bugs; the Y axis shows the kernel versions in which each bug was present. Credit: Kees Cook]

Protecting a world in which critical infrastructure runs Linux—not to mention protecting journalists and political dissidents—begins with protecting the kernel.

The way to do that is to focus on squashing entire classes of bugs, so that a single undiscovered bug would not be exploitable, even on a future device running an ancient kernel. Further, since successful attacks today often require chaining multiple exploits together, finding ways to break the exploit chain is a critical goal.

Kernel drivers suck

That is hard to do, however, when the vast majority of kernel bugs come from vendor drivers, not the upstream Linux kernel, Vander Stoep said. "Android does in fact inherit bugs from the upstream kernel," he said, "but our data shows that most of Android's kernel security vulnerabilities live in device drivers."

[Slide from Vander Stoep's presentation at the Security Summit. Credit: Jeffrey Vander Stoep]

And, he explained, many more are introduced by manufacturers, so the challenge becomes securing the Linux kernel against bugs in code over which upstream has no control. "[Kernel] maintainers say 'bugs you didn't inherit from upstream are not upstream's problem,' but I think the reality is that this is what most Linux systems look like, and it's not limited to Android devices," he said. "Kernel defence will protect both code that comes from upstream as well as out-of-tree vulnerabilities.

That's a really important point." He was quick to add that he was not calling out any particular vendor for poor security practices.

As he put it, to audience chuckles, "they're really all doing poorly."

The bug stops here

While the technical challenges the Linux kernel faces in protecting itself against zero-days are “incredibly complex,” Cook said that the politics of submitting patches upstream can be even more challenging. Coming across as a consummate diplomat, both in his talk and in person, he gently chided the buck-passing over how kernel security issues are discovered, fixed, and deployed. "I hear a lot of blame-shifting of where this problem needs to be solved," he told the audience. "Even if upstream says 'oh, sure, we found that bug, we fixed it,' what kernel version was it fixed in? Did it end up in a stable release? Did a vendor backport it? Did the carrier for the phone take that update from the vendor and push it onto phones?" He went on: "The idea is to build in the protection technologies from the start, so that when a bug comes along, we don't really care." But these mitigations come with trade-offs in performance and maintainability—something that, he hinted, is a continuous struggle to get upstream kernel maintainers to accept. "Understanding that developing against upstream means you're not writing code for the kernel, you're writing code for the kernel developers," he said. Not just the developers, either.
It's for everyone in the Internet-connected world we now live in. “If we are to start being mindful of this new era of computing,” said Kai, “we have to change the way we approach this dramatically, the same way that vehicle manufacturers in the 1970s did.” J.M. Porup is a freelance cybersecurity reporter who lives in Toronto. When he dies his epitaph will simply read "assume breach." You can find him on Twitter at @toholdaquill. This post originated on Ars Technica UK