
Tag: contemporary

Possible human ancestor turns out to have shared Earth with us

A recent small-brained, early-looking hominid shakes up the family tree.

Nintendo Switch review: Meet the Game Boy Entertainment System

Hybrid system merges an amazing portable experience with a middling TV console.

Mobile apps and stealing a connected car

The concept of a connected car, or a car equipped with Internet access, has been gaining popularity for the last several years.

Proprietary mobile apps unlock some useful features, but if a car thief gained access to a victim's mobile device with the app installed, wouldn't car theft become a mere trifle?

Bay Area: Join us tonight, 2/15, to discuss law and technology...

Law professor Ahmed Ghappour on tech, borders, and national security.

DOJ: Let’s make eyewitness identification more scientifically rigorous

Deputy Attorney General Sally Yates (pictured here in 2015) announced the change in the photo array guidelines in January 2017.

The Department of Justice has instituted new guidelines for identifying suspects in photo arrays, making the procedure more scientifically rigorous. Notably, the changes include "blind" administration—where the person giving the exam doesn't know who the actual suspect is—and recording of the identification session.

The new guidelines, which were released last Friday, state:

There are times when such "blind" administration may be impracticable, for example, when all of the officers in an investigating office already know who the suspect is, or when a victim-witness refuses to participate in a photo array unless it is administered by the investigating officer. In such cases, the administrator should adopt "blinded" procedures, so that he or she cannot see the order or arrangement of the photographs viewed by the witness or which photograph(s) the witness is viewing at any particular moment.

These guidelines apply specifically to federal agencies, including the FBI and the Drug Enforcement Administration, and not to local law enforcement. However, some local agencies, including those in Dallas and Baltimore, have already begun to adopt such policies. In December 2013, the International Association of Chiefs of Police made similar recommendations in conjunction with the Innocence Project.

The DOJ's 2017 revision comes 18 years after federal law enforcement last addressed photo array procedures, and it notes that "research and practice have significantly evolved since then." The DOJ specifically cited a lengthy study on the topic that the National Academies of Science published in 2014, taking into account contemporary research. In that 2014 paper, the NAS also recommended that all photo array identification sessions be recorded where possible. As the DOJ summarized:

A witness's identification and assessment of certainty cannot be easily challenged if law enforcement agencies electronically record the identification procedure and the witness's response. Electronic recording preserves the identification process for later review in court and also protects officers against unfounded claims of misconduct. Video-recording is helpful because it allows fact finders to directly evaluate a witness's verbal and nonverbal reactions and any aspects of the array procedure that would help to contextualize or explain the witness' selection.

As of 2013, approximately one-fifth of state and local law enforcement agencies had instituted video-recording of photo arrays. "Eyewitness identifications play an important role in our criminal justice system, and it's important that we get them right," said Deputy Attorney General Sally Yates in a statement last week.

Here’s what a “digital Miranda warning” might look like

Anyone who has ever watched an American crime movie or television show can practically recite the Miranda warning by heart, even if they don't know its official name. You have the right to remain silent.

Anything that you say or do can be used against you. You have the right to an attorney.
If you cannot afford one, one will be provided to you.

Do you understand these rights as I have read them to you?

The basic idea behind the Miranda warning is to provide someone being arrested with information about their constitutional rights against compelled self-incrimination (Fifth Amendment) during a custodial situation and to reassure them of their right to an attorney (Sixth Amendment). This warning stems from a 1966 Supreme Court case, Miranda v. Arizona, in which a kidnapping and rape suspect, Ernesto Miranda, confessed to the crime without the benefit of a lawyer and without being fully informed of his right not to incriminate himself.

Today, all American police officers must recite some version of the Miranda warning while taking someone into custody due to the Supreme Court’s landmark 5-4 decision. In the half-century since the Miranda decision, a lot has changed.

For one, many of us carry smartphones containing a rich trove of personal data in our pockets that might interest law enforcement.
In fact, it wasn’t until 2014 that police officers nationwide were specifically ordered not to search people’s phones without a warrant during an arrest. In 1966, no one envisioned a world where we carried powerful computers in our pockets, so it's time for an update to the Miranda warning.

A modernized version would need to make clear not only that anyone can refuse to speak, but that speaking might involve inputting a passcode to open up a phone.

After speaking with several legal experts, here’s our "digital Miranda," based on our best understanding of current law. You have the right to remain silent.

This right includes declining to provide information that does not require speaking, such as entering a passcode to unlock a digital device, like a smartphone.

Anything that you say or do can be used against you.

Any data retrieved from your device can also be used against you. You have the right to an attorney.
If you cannot afford one, one will be provided to you.

Do you understand these rights as I have read them to you?

We recognize that this revised Miranda warning has no actual force of law. It's simply meant as a way to think about encryption, constitutional rights, and contemporary interactions with police.

Remember, you only get Mirandized during a "custodial situation"

Back in 2014, the court unanimously found in Riley v. California that law enforcement must get a warrant before searching mobile phones during an arrest. Prior to Riley, at least some law enforcement officials were searching some suspects' phones on the grounds that data on the phones could be used to aid their investigations. Writing for a rare unanimous court, Chief Justice John Roberts argued dismissively against the government, saying that searching a phone was not at all like searching a wallet. "That is like saying a ride on horseback is materially indistinguishable from a flight to the moon," he concluded. Riley showed that the Supreme Court has started to think in fundamentally new ways about privacy in relation to the digital devices that are almost always with us.
So, then, we wondered, would most people even think to challenge law enforcement when asked to unlock their device, whether during an arrest, or otherwise? In fact, just after the Riley decision in 2014, a California Highway Patrol officer asked a woman to unlock her phone and hand it over during a traffic stop on suspicion of a DUI.
She complied.
It's worth noting that this was just a traffic stop, which is not generally considered a "custodial situation," so she did not need to be given a Miranda warning, either. Recall that Riley only dealt with a very specific situation: requiring a warrant for a search incident to arrest. The officer, Sean Harrington, found semi-nude pictures on the woman's phone, which he then sent to himself and shared with his buddies. (Harrington has since left the CHP, was prosecuted, took a plea deal, and is currently on probation.) We guess that most people wouldn't know about Riley, nor many of their other constitutional rights and how they apply in the modern world. Most people probably would follow whatever instructions, legal or not, are given to them by an (ideally well-intentioned) officer of the law. (To be clear, we've yet to find an example where evidence was tossed in a case because an officer blatantly ignored Riley.)

When in doubt, ask for a lawyer and stay quiet

One of the key elements of understanding post-Miranda criminal procedure is that suspects don't always have to be read their rights. Miranda only kicks in during what's called a "custodial" situation, typically an arrest. (A 2009 article from PoliceOne.com describes "how to talk to suspects without Mirandizing.") When we asked around, Orin Kerr, a law professor at George Washington University, was quick to point out that there is a post-Miranda Supreme Court decision that involves what's known as a "consent search." In that 1973 decision, in a case known as Schneckloth v. Bustamonte, the court found that a search is still allowed where consent is granted, even if the defendant is not expressly informed of his or her constitutional right to refuse such a search. In that case, Sunnyvale, California, Police Officer James Rand pulled over a car containing six people at 2:40am for a broken tail light. When Officer Rand asked the men to produce identification, only one, Joe Alcala, complied. Rand asked him if he could search the car, and Alcala agreed.

The search yielded stolen checks in the car. One of the passengers, Robert Bustamonte, was eventually charged with possessing stolen checks.

The men challenged the search, and eventually, the Supreme Court found that the men were under no legal obligation to consent to a search. Moreover, the officer did not have to inform the men of their rights until one of them had been arrested. Similarly, the woman who had the unfortunate interaction with the CHP officer in 2014 was under no obligation to unlock her phone, much less hand it over. Harrington didn’t have to read “Jane Doe” a Miranda warning—she was not under arrest.

As many cops know, criminals often will still talk even after they are Mirandized. “The nice thing about Miranda is that it doesn’t require [police] to say too much,” Mark Jaffe, a criminal defense lawyer who specializes in computer crimes, told Ars. (Jaffe has represented defendants in cases that Ars has written about, including Matthew Keys and Deric Lostutter.) Jaffe explained that many law enforcement officers want a clear, bright line like Miranda, as to what is acceptable in certain situations. But what about a scenario where law enforcement simply comes knocking at your door, asking that you help out? What rights do you have in such a non-custodial setting? In February 2016, a woman in Glendale, California, was ordered to depress her fingerprint on a seized iPhone. Months later, in May 2016, federal law enforcement officials, also in Los Angeles County, were successful in getting judicial approval for two highly unusual searches of a seized smartphone at two different Southern California homes, one in Lancaster and one in West Covina, about 90 miles away.

The signed warrants allowed the authorities to force a resident reasonably believed to be a user to press their fingerprints on the phone to see if it would unlock. (Under iOS and Android, fingerprints as passcodes only work for 48 hours; after that, the regular passcode is required.

Court records show that the warrants were presumably executed within that 48-hour window.) While there is no evidence that any of the residents attempted to challenge this order in court, it seems that someone could have. Presumably a person could have refused, possibly risking contempt of court and even the use of physical force to get a fingerprint onto the phone. “You shouldn’t resist a police order, you should lodge your dissent, and you should ask and clarify that they’re asking you to do it,” Alex Abdo, an attorney with the American Civil Liberties Union, told Ars. “But you should comply—as a lawyer that’s the advice you’re going to have to give.” Kerr didn’t think that a Lancaster-style situation would be considered custodial, and so wouldn’t trigger Miranda.
In other words, given the court’s holding in Schneckloth, our revised Miranda warning wouldn’t matter anyway. This seems reasonable—there are plenty of situations where many people might want to be helpful to police. Plus, we generally want police to be able to solve crimes.

But not everyone may be so forthcoming or trusting of police. Jaffe even proposed a short verbal warning that law enforcement could use as a Miranda-style warning in non-custodial situations: "I would like to search your car/house/phone. Please understand I don't have a warrant to do so."

Supreme Court has yet to rule

Being enticed or even compelled to hand over passcodes or fingerprint-enabled passcodes gets to the heart of the "going dark" problem. Law enforcement says that modern "unbreakable" encryption frustrates lawful investigations aimed at tech-savvy criminals who refuse to unlock their data. As Ars has reported before, under the Fifth Amendment, defendants cannot generally be compelled to provide self-incriminating testimony ("what you know").
In 2012, the 11th US Circuit Court of Appeals ruled in favor of a defendant (“John Doe”) accused of possessing child pornography. “We conclude that the decryption and production would be tantamount to testimony by Doe of his knowledge of the existence and location of potentially incriminating files; of his possession, control, and access to the encrypted portions of the drives; and of his capability to decrypt the files,” the court wrote. The government did not pursue the issue further.

For now, the 11th Circuit, which covers Alabama, Florida, and Georgia, remains the highest court to have directly addressed the subject. But that doesn't mean that other judges see it this way, and some have ordered forced decryption. Shortly after the 11th Circuit ruling, a judge ordered a Colorado woman to decrypt her laptop computer so prosecutors could use the files against her in a criminal case.

The case, in which the judge also found that the woman's Fifth Amendment privilege against compelled self-incrimination was not violated, ultimately resolved itself without her having to give up the password and decrypt her computer for the authorities. More recently, a former Philadelphia police sergeant, referred to in court documents as yet another John Doe, still remains in custody for refusing an April 2016 court order to decrypt hard drives that authorities believe contain child porn.

That case is currently pending before the 3rd US Circuit Court of Appeals, and a decision could come at any time.
In court filings, Doe's lawyers largely relied on the 11th Circuit's decision. But giving a fingerprint ("what you are") for the purposes of identification, or for matching to an unknown fingerprint found at a crime scene, has been allowed.
It wasn’t until relatively recently, after all, that fingerprints could be used to unlock a smartphone.

The crux of the legal theory here is that a compelled fingerprint isn't testimonial; it's simply a compelled production, like being forced to hand over a key to a safe. In the Lancaster court filings, nearly all of the cases that the government cites predate the implementation of fingerprint readers, except for a 2014 state case from Virginia.

As Ars reported at the time, a Virginia Circuit Court judge ruled that a person does not need to provide a passcode to unlock their phone for the police.

The court also ruled that demanding a suspect provide a fingerprint to unlock a phone would be constitutional. However, the Virginia state case, while interesting, has little legal relevance to ongoing federal cases across the country. “I’m not sure that I would ever provide my passcode if it would incriminate myself,” Brian Owsley, a law professor at the University of North Texas and a former federal magistrate judge, told Ars. “What’s the max that you’re going to face for refusing to obey a court order? If you’re facing life sentence without parole you’re better off being obstinate—it’s not the job of the accused ever to make the job easier for the prosecution.” What is custodial, anyway? Enlarge Marc Falardeau Situations where law enforcement demands passwords in what they believe are noncustodial situations are surely set to become standard practice, if they haven’t already. On a cold February morning earlier this year, no less than 10 armed officers from various law enforcement agencies, all wearing body armor, showed up to execute a search warrant on Justin Ashmore’s two-bedroom apartment in Arkansas. According to Ashmore’s lawyer, Carrie Jernigan, her client answered the door in his underwear. He was held to the side of the room as the search began.

Ashmore was then led upstairs to his bedroom to stay out of earshot of his eight-year-old son, who was also in the apartment and questioned. One of the federal agents began peppering Ashmore with questions and statements like: “Tell me why we are here today?” and “Don't play stupid, you know why we are here.” Ashmore initially thought perhaps it was because he had a small amount of marijuana in his freezer.

The questioning agent told him he didn't care about the weed. As the interrogation went on, the agent eventually came out with it: "Tell me about the child porn movies you have been downloading." According to the government, this was roughly when Ashmore confessed. As Jernigan wrote:

At no time upstairs was Defendant Ashmore ever advised of his Miranda warnings or ever told he was free to leave.
In fact, he was denied his ability to leave.

Defendant Ashmore was also told he had to give the agents all passwords to all his electronic devices or it may be a very long time before he sees his own son again, to which he complied and gave the agents the information.

In their own filings, prosecutors dispute many parts of Jernigan's account. "When interviewing Defendant, Special Agent Cranor and TFO Heffner did not use strong arm tactics and deceptive stratagems during questioning," they wrote, adding: "The agents advised Defendant that they were there to serve a search warrant regarding child pornography downloads at his residence and did not ask him to guess as to why they were there." The government argued that this scenario was not custodial, and so Ashmore did not have to be read his rights. According to US District Judge P.K. Holmes' 13-page December 2016 opinion, it wasn't until after Ashmore confessed to having the marijuana that he was Mirandized, at which point he allegedly confessed a second time to having downloaded child porn. Judge Holmes further explained that because the defendant had not been adequately given a Miranda warning before he gave up the passwords to his cellphone and computer, his two alleged confessions and the two passwords to his devices should be suppressed. "The Government's position is effectively that because officers never Mirandized Ashmore for his alleged confession related to child pornography, they could not have circumvented Miranda on purpose," Judge Holmes continued. "Having listened to their testimony and observed their demeanor on this point, the Court does not believe the officers' testimony and finds that they deliberately avoided giving Ashmore a Miranda warning." But, in this case, Judge Holmes concluded, the data found on those devices could still be presented as evidence at trial because the warrants were valid. Plus, police would have been able to access the devices anyway: the computer hard drive was unencrypted, and the passcode on Ashmore's Samsung Android phone could have been circumvented easily.

In legal terms, this is known as the "independent source doctrine," which "is an exception to the exclusionary rule." For now, Ashmore is set to go to trial on January 17, 2017, in Fort Smith, Arkansas. "Agencies are given tools to use to investigate crimes and they should entirely be allowed to use those tools," Erik Rasmussen, a lawyer and former Secret Service special agent who focused on computer crimes, told Ars. "It changes all the time, because the adversaries change all the time."

An update on all the legal cases we thought would be...

As a tumultuous 2016 draws to a close, one case distilled contemporary law enforcement, terrorism, encryption, and surveillance issues more than any other: the case popularly known as "FBI vs. Apple."

The ordeal began on February 16 when a federal judge in Riverside, California, ordered Apple to help the government unlock and decrypt the seized iPhone 5C used by Syed Rizwan Farook.

Farook had shot up an office party in a terrorist attack in nearby San Bernardino in December 2015. Specifically, United States Magistrate Judge Sheri Pym mandated that Apple provide the FBI a custom firmware file, known as an IPSW file, that would likely enable investigators to brute force the passcode lockout currently on the phone, which was running iOS 9.

This order was unprecedented. Apple refused, and the two sides battled it out in court filings and the court of public opinion for weeks. But the day before they were set to argue before the judge in Riverside, prosecutors called it off.

They announced that federal investigators had found some mysterious way to access the contents of Farook’s phone, but provided hardly any details.
In April 2016, Ars reported that the FBI paid at least $1.3 million for a way to access the phone.

But getting into the phone seems to have resulted in little, if any, meaningful benefits. The underlying legal issue remains unresolved.
In May 2016, FBI Director James Comey noted that the government would likely bring further legal challenges in the near future.

The law is clearly struggling to keep up with the current realities of encryption.

These issues impact not only national security cases, but also more run-of-the-mill crimes. In short, many of the most profound questions of our time have yet to be resolved.

These include: what measures can the government take in order to mitigate encryption? What tools can the government employ in order to conduct legitimate investigations? Can a person or a company be compelled to hand over a password or fingerprint to unlock a phone or create new software to achieve that end? In years past, Ars has tried to predict what privacy-related cases would reach the Supreme Court.

Given that our track record has been abysmal, we’re going to take a slightly different approach this year.

Today, we’ll update the five surveillance-related cases that we thought would become huge in 2016.

Tomorrow, we'll expand our outlook to include other important legal cases still ongoing in 2017 that touch on important tech issues.

Not exactly an angel on top

Case: United States v. Mohamud
Status: 9th US Circuit Court of Appeals rejected appeal in December 2016

As with last year, we'll begin with the story of a terrorism suspect who was convicted of attempting to blow up a Christmas tree lighting ceremony in Portland, Oregon, in 2010.

That case involved a Somali-American, Mohamed Osman Mohamud, who became a radicalized wannabe terrorist. Mohamud believed that he was corresponding with an Al-Qaeda sympathizer, and he was eventually introduced to another man who he believed was a weapons expert.

Both of those men were with the FBI. Mohamud thought it would be a good idea to target the ceremony on November 27, 2010. He was arrested possessing what he believed was a detonator, but it was, in fact, a dud. Earlier this month, the 9th US Circuit Court of Appeals rejected an effort to overturn Mohamed Osman Mohamud’s conviction on the grounds that the surveillance to initially identify the suspect did not require a warrant. Mohamud went to trial, was eventually found guilty, and was then sentenced to 30 years in prison. After the conviction, the government disclosed that it used surveillance under Section 702 of the FISA Amendments Act to collect and search Mohamud's e-mail.
Seeing this, Mohamud’s legal team attempted to re-open the case, but the 9th Circuit disagreed. As the 9th Circuit ruled: "The panel held that no warrant was required to intercept the overseas foreign national’s communications or to intercept a U.S. person’s communications incidentally." From here, Mohamud and his legal team could ask that the 9th Circuit re-hear the appeal with a full panel of judges (en banc), or they could appeal up to the Supreme Court.
If either court declines, the case is over, and the ruling stands.

Slowly turning wheels of justice

Case: United States v. Hasbajrami
Status: Appeal pending in 2nd US Circuit Court of Appeals

Similar to Mohamud, another notable terrorism case revolves around Section 702 surveillance.

As we reported at this time last year, Hasbajrami involves a United States person (citizen or legal resident) accused of attempting to provide support for terrorism-related activities.

According to the government, Agron Hasbajrami, an Albanian citizen and Brooklyn resident, traded e-mails with a Pakistan-based terror suspect back in 2011.

The terror suspect claimed to be involved in attacks against the US military in Afghanistan.

After he was apprehended, Hasbajrami pleaded guilty in 2013 to attempting to provide material support to terrorists. After he pleaded guilty, the government informed Hasbajrami that, like with Mohamud, it had used Section 702 surveillance against him, and the case was re-opened. Many cases that have tried to fight surveillance have fallen down for lack of standing. Hasbajrami’s case is different, however, because he can definitively prove that he was spied upon by the government. As his case neared trial in mid-2015, Hasbajrami pleaded guilty a second time.

But shortly thereafter, he moved to withdraw the plea again, which the judge rejected.
So the case progressed to the 2nd US Circuit Court of Appeals. Earlier this year, when we expected to see Hasbajrami’s first appellate filing, his new lawyers filed an application with the judge.

They asked that the case be held “in abeyance,” which essentially puts a kind of stay on the appeals process.

The 2nd Circuit agreed. The reason? US District Judge John Gleeson, then the judge at the lower-court level, had issued a classified opinion "which directly relates to and impacts the issues to be raised on appeal." United States v. Hasbajrami was further delayed when Judge Gleeson stepped down from the bench in late February. While Judge Gleeson's opinion was released (in redacted form) to the defense attorneys, by September, defense attorneys argued again in filings to the new judge that they possess adequate security clearances and should be given access to this material, unredacted. As they wrote:

In that context, the government repeatedly fails—in its argument as well as the authority it cites—to distinguish public release of the redacted portions from providing security-cleared defense counsel access to that material. Here, all Mr. Hasbajrami seeks is the latter.

Thus, the dangers of dissemination beyond those already authorized to review classified information simply do not exist, and the government's contentions with respect to national security serve as a red herring. The most recent entry in either the appellate or district court docket is an October 31 filing.
In it, defense attorneys inform the 2nd Circuit that they are still waiting for Chief US District Judge Dora Irizarry to rule on receiving the unredacted version. One of Hasbajrami’s attorneys is Joshua Dratel.

Dratel is famous for having defended (and still defending) Ross Ulbricht, the convicted mastermind behind the Silk Road drug marketplace website.

The Free Encyclopedia

Case: Wikimedia v. NSA
Status: Appeal pending in 4th US Circuit Court of Appeals

Of course, Section 702 is just one of many ways the government conducts surveillance beyond its intended targets. Wikimedia v. NSA is one of several cases that have tried to target the "upstream" setup that allows the NSA to grab data directly off fiber optic cables. Wikimedia, which publishes Wikipedia, filed its case originally in March 2015.
In it, the organization argues that the government is engaged in illegal and unconstitutional searches and seizures of the plaintiffs' communications. But in October 2015, US District Judge T.S. Ellis III dismissed the case. He found that Wikimedia and the other plaintiffs had no standing and could not prove that they had been surveilled.

That action largely echoed a 2013 Supreme Court decision, Clapper v. Amnesty International. The plaintiffs filed their appeal to the 4th US Circuit Court of Appeals immediately.
In their February 2016 opening brief, which was written by top attorneys from the American Civil Liberties Union, they argue essentially that Wikipedia traffic had to have been captured in the National Security Agency's snare because it's one of the most-trafficked sites on the Internet. They wrote:

In other words, even if the NSA were conducting Upstream surveillance on only a single circuit, it would be copying and reviewing the Wikimedia communications that traverse that circuit.

But the government has acknowledged monitoring multiple internet circuits—making it only more certain that Wikimedia’s communications are being copied and reviewed. Moreover, the NSA’s own documents indicate that it is copying and reviewing Wikimedia’s communications.

Taken together, these detailed factual allegations leave no doubt as to the plausibility of Wikimedia's standing.

The government, for its part, countered by saying that the 4th Circuit should uphold the district court's ruling. Why? Because, as it argued in April 2016, Wikimedia's argument is largely speculative:

... the facts do not support plaintiffs' assumption that Wikimedia's communications must traverse every fiber of every sub-cable such that, if the NSA is monitoring only one fiber or even one sub-cable, it still must be intercepting, copying, and reviewing Wikimedia's communications.

Beyond that, the government continued, even if Wikimedia's communications were intercepted, the plaintiffs have not demonstrated how they have actually been injured, because a large portion of the NSA's interception is done by machine. The government continued:

Indeed, plaintiffs' complaint generally fails to state a cognizable injury because, whatever the nature of the particular communications at issue, plaintiffs have made no allegation that interception, copying, and filtering for selectors involve any human review of the content of those communications.

The two sides squared off at the 4th Circuit in Baltimore on December 8, 2016 for oral arguments.

A decision is expected within the next few months.

Fast food, fast crimes

Case: United States v. Graham
Status: Decided en banc at 4th US Circuit Court of Appeals, cert petition filed to Supreme Court

This case was a big hope for many civil libertarians and privacy activists.

An appeals court panel had initially rejected the thorny third-party doctrine, finding that the two suspects had a reasonable expectation of privacy in their long-term location records. But in May 2016, the 4th US Circuit Court of Appeals, in an en banc ruling, found in favor of the government: because the two suspects voluntarily disclosed their own location to their mobile carrier via their phones, they did not have a reasonable expectation of privacy.

The court concluded that police did not, in fact, need a warrant to obtain more than 200 days' worth of cell-site location information (CSLI) for two criminal suspects. As the court ruled: The Supreme Court may in the future limit, or even eliminate, the third-party doctrine.

Congress may act to require a warrant for CSLI.

But without a change in controlling law, we cannot conclude that the Government violated the Fourth Amendment in this case. This case dates back to February 5, 2011 when two men robbed a Burger King and a McDonald’s in Baltimore.

Ten minutes later, they were caught and cuffed by Baltimore City Police officers.

Eventually, Aaron Graham and Eric Jordan were charged with 17 federal counts of interstate robbery, including a pair of fast food robberies and another one at a 7-Eleven.

They also received charges for brandishing a firearm in furtherance of the crime. A Baltimore City Police detective first sought and obtained a search warrant for the two cell phones recovered during a search of the getaway car. Prosecutors later obtained a court order (a lesser standard than a warrant) granting disclosure of the defendants’ CSLI data for various periods totaling 14 days when the suspects were believed to have been involved in robberies.

The government next submitted (and had granted) a second application to another magistrate judge for a new set of CSLI data, covering the period of July 1, 2010 through February 6, 2011 (221 days).

They were sentenced to 147 years and 72 years in prison, respectively. Meghan Skelton, Graham's public defender, has filed an appeal with the Supreme Court, which has not yet decided whether it will hear the case.

Who is the Dread Pirate Roberts?

Cases: United States v. Ulbricht and United States v. Bridges
Status: Appeals pending in the 2nd US Circuit Court of Appeals and the 9th US Circuit Court of Appeals, respectively

While Section 702 surveillance and cell-site location information are important, there was one defendant who was defeated largely by investigators snatching his laptop out of his hands: Ross Ulbricht.

The young Texan was convicted of being Dread Pirate Roberts, the creator of the notorious online drug market Silk Road. Later in 2015, Ulbricht was given a double life sentence, despite emotional pleas for a far lighter sentence from Ulbricht himself, his family, and friends. 2016 kicked off with Ross Ulbricht's formal appeal to the 2nd Circuit.

Ars described it as a "170-page whopper that revisits several of the evidentiary arguments that Ulbricht's lawyer made at trial." These included theories that Ulbricht wasn't Dread Pirate Roberts and that the digital evidence found on Ulbricht's computer could be attributed to "vulnerabilities inherent to the Internet and digital data," like hacking and fabrication of files.

According to the appeal, these “vulnerabilities” made “much of the evidence against Ulbricht inauthentic, unattributable to him, and/or ultimately unreliable.” Plus, corrupt federal agents Shaun Bridges and Carl Mark Force tarnished the case against Ulbricht, claimed his lawyer.

That lawyer is Joshua Dratel, who makes his second appearance on this list. The government responded with its own 186-page whopper on June 17, 2016.

After a lengthy recap of the entire case, United States Attorney Preet Bharara opened his arguments with a notable flaw in Ulbricht’s logic: But nowhere, either below or here, has Ulbricht explained, other than in the most conclusory way, how the corruption of two agents—who neither testified at his trial nor generated the evidence against him—tended to disprove that he was running Silk Road from his laptop. In short, the government argues, Ulbricht was caught red-handed, and the appeals court should uphold both the conviction and the sentence. The following month, federal prosecutors in San Francisco unsealed new court documents that make a strong case that former agent Bridges stole another $600,000 in bitcoins after he pleaded guilty. By August 2016, Bridges’ lawyer Davina Pujari filed what she herself said was a “legally frivolous” appeal to the 9th Circuit on behalf of her client, and she asked to be removed from the case.

Bridges' case remains pending at the appellate level, and no oral arguments have been scheduled. (Pujari is still Bridges' lawyer for now.) Bridges remains a prisoner at the Terre Haute Federal Correctional Institution in Indiana, where he is scheduled for release in 2021. Later in August, Ars chronicled the saga of how a San Francisco-based federal prosecutor joined forces with a dogged Internal Revenue Service special agent to bring Bridges and Force to justice. Meanwhile, Ulbricht's lawyers, led by Joshua Dratel, faced off at the 2nd Circuit against federal prosecutors on October 6, 2016 to challenge Ulbricht's conviction and sentence.

The court is expected to rule within the next few months.

Malicious code and the Windows integrity mechanism

Introduction

Ask any expert who analyzes malicious code for Windows which system privileges malware works with and wants to acquire and, without a second thought, they'll tell you: "Administrator rights."

Are there any studies to back this up? Unfortunately, I was unable to find any coherent analysis on the subject; however, it is never too late to play Captain Obvious and present the facts for public evaluation. My goal wasn't to review the techniques of elevating system privileges; the Internet already has plenty of articles on the subject. New mechanisms are discovered every year, and each technique deserves its own review. Here, I wanted to look at the overall picture and talk about the whole range of Windows operating systems in all their diversity dating back to Windows Vista, but without discussing specific versions.

Step Back in Time

The Windows XP security model differs significantly from the security model of Windows Vista and newer operating systems.

There are two types of user accounts in Windows XP: a standard account and an administrator account.

The vast majority of users worked with administrator rights, despite the fact that they didn’t need the rights for everyday tasks.

Such users infected their systems with malicious software that acquired the rights of the current user, which, more often than not, were administrator rights.

As a result, malicious software did not encounter any serious problems acquiring elevated privileges on a system running Windows XP. This model was used until the release of the Windows Vista family, in which Microsoft introduced a new security model: the Windows integrity mechanism.

Integrity Level in Windows 10

Roughly speaking, the two aforementioned user account types are still present in the new mechanism; however, the operating system now utilizes Admin Approval Mode. Yes, that very same, our "beloved" UAC (User Account Control).
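Under Admin Approval Mode, even a member of the Administrators group normally receives a filtered, medium-integrity token at logon; the full administrator token is only used after elevation. As a minimal sketch (my illustration, not part of the original article, assuming Windows Vista or later), a program can ask which kind of token it received via the documented TokenElevationType query; error handling is abbreviated:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE hToken = NULL;
    TOKEN_ELEVATION_TYPE type;
    DWORD cb = sizeof(type);

    // Open the current process token and ask how it relates to UAC elevation.
    if (OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &hToken) &&
        GetTokenInformation(hToken, TokenElevationType, &type, cb, &cb))
    {
        switch (type)
        {
        case TokenElevationTypeDefault:
            puts("Default: no split token (standard user, or UAC disabled)");
            break;
        case TokenElevationTypeFull:
            puts("Full: elevated administrator token");
            break;
        case TokenElevationTypeLimited:
            puts("Limited: filtered token (Admin Approval Mode)");
            break;
        }
    }
    if (hToken) CloseHandle(hToken);
    return 0;
}

A process that reports a limited token is exactly the case the UAC prompt described next is designed for: the extra privileges exist, but they are withheld until the user approves the elevation.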

As soon as there is a need for elevated privileges, a UAC dialog pops up and prompts the user for permission to perform a certain action. The human factor is one of the primary security problems, and that is why placing responsibility on a user who doesn't know the first thing about computer security is, to say the least, a questionable decision. Microsoft itself has issued the following statement on the topic: "One important thing to know is that UAC is not a security boundary. UAC helps people be more secure, but it is not a cure all. UAC helps most by being the prompt before software is installed." For those interested in Microsoft's position on the matter, I recommend reading the following blog posts: User Account Control, User Account Control (UAC) – quick update, Update on UAC.

The Windows Integrity Mechanism

The new Windows integrity mechanism is the main protection component of the Windows security architecture.

The mechanism restricts the access permissions of applications that run under the same user account but are less trustworthy. Put more simply, it assigns an integrity level to processes as well as to other securable objects in Windows.

The integrity level restricts or grants the access permissions of one object to another. The levels themselves are defined in winnt.h as mandatory label RIDs:

// Mandatory Label Authority.
#define SECURITY_MANDATORY_LABEL_AUTHORITY          {0,0,0,0,0,16}
#define SECURITY_MANDATORY_UNTRUSTED_RID            (0x00000000L)
#define SECURITY_MANDATORY_LOW_RID                  (0x00001000L)
#define SECURITY_MANDATORY_MEDIUM_RID               (0x00002000L)
#define SECURITY_MANDATORY_MEDIUM_PLUS_RID          (SECURITY_MANDATORY_MEDIUM_RID + 0x100)
#define SECURITY_MANDATORY_HIGH_RID                 (0x00003000L)
#define SECURITY_MANDATORY_SYSTEM_RID               (0x00004000L)
#define SECURITY_MANDATORY_PROTECTED_PROCESS_RID    (0x00005000L)

// SECURITY_MANDATORY_MAXIMUM_USER_RID is the highest RID that
// can be set by a usermode caller.
#define SECURITY_MANDATORY_MAXIMUM_USER_RID         SECURITY_MANDATORY_SYSTEM_RID

I won't go into detail about the operation of the integrity mechanism. We only need one table to simplify interpretation of the gathered statistics; it shows the connection between integrity levels and the SID security identifiers that identify user, group, domain, or computer accounts in Windows.

SID in Access Token                  Assigned Integrity Level
LocalSystem                          System
LocalService                         System
NetworkService                       System
Administrators                       High
Backup Operators                     High
Network Configuration Operators      High
Cryptographic Operators              High
Authenticated Users                  Medium
Everyone (World)                     Low
Anonymous                            Untrusted

Most applications launched by a standard user are assigned a medium integrity level.
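To check this on a live system, you can read the mandatory label straight from a process token. The following is a minimal sketch of mine (not from the original article) that prints the integrity level of the current process using the documented OpenProcessToken/GetTokenInformation APIs; error handling is abbreviated:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE hToken = NULL;
    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &hToken))
        return 1;

    // Ask for the size of the mandatory label, then fetch it.
    DWORD cb = 0;
    GetTokenInformation(hToken, TokenIntegrityLevel, NULL, 0, &cb);
    PTOKEN_MANDATORY_LABEL pLabel = (PTOKEN_MANDATORY_LABEL)LocalAlloc(LPTR, cb);

    if (pLabel && GetTokenInformation(hToken, TokenIntegrityLevel, pLabel, cb, &cb))
    {
        // The integrity RID is the last sub-authority of the label SID.
        DWORD count = *GetSidSubAuthorityCount(pLabel->Label.Sid);
        DWORD rid   = *GetSidSubAuthority(pLabel->Label.Sid, count - 1);

        if      (rid >= SECURITY_MANDATORY_SYSTEM_RID) puts("System");
        else if (rid >= SECURITY_MANDATORY_HIGH_RID)   puts("High");
        else if (rid >= SECURITY_MANDATORY_MEDIUM_RID) puts("Medium");
        else if (rid >= SECURITY_MANDATORY_LOW_RID)    puts("Low");
        else                                           puts("Untrusted");
    }

    if (pLabel) LocalFree(pLabel);
    CloseHandle(hToken);
    return 0;
}

Run from a normal command prompt this typically prints "Medium"; run from an elevated prompt it prints "High".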

Administrators get a high integrity level; services and the kernel receive system integrity.

A low integrity level will be assigned to an App Container, for example.

This is a typical level for modern browsers that protect the operating system from possible malware intrusions from malicious websites. Basically, the high level and the levels above it are the ones that malicious software aims for.

Lies, Damned Lies, and Statistics

Contemporary anti-virus products implement a comprehensive approach to system security.

That’s why they use dozens of components that prevent malicious code from infecting the system at various stages.

Those components may include Web antivirus, script emulators, cloud signatures, exploit detectors, and much more.

Data entering the system goes through numerous scans initiated by the different components of an antivirus product.

As a result, a huge number of malicious programs do not get to the execution stage and are detected “on takeoff”.

As for me, I was interested in malware that did manage to get to the execution stage.

A contemporary antivirus product continues to track a potentially malicious object even after it is executed; at that point, behavioral stream signatures (BSS) of the Kaspersky System Watcher component can still be triggered. So I asked our Behavior Detection group to assist me in collecting statistics on the system privilege levels that active malware, detectable with the help of BSS, uses for execution. Within 15 days, I managed to gather data on approximately 1.5 million detections with the help of the Kaspersky Security Network.

The entire range of Windows operating systems, starting with Windows Vista up to Windows 10, was included in the statistics.

After filtering out some events (leaving only unique ones and those that did not contain our test signatures), I ended up with 976,000 detections. Let us take a look at the distribution of integrity levels for active malicious software during that period.

Distribution of Integrity Levels

By summing up Untrusted, Low, and Medium on the one hand, and High and System on the other, it is possible to calculate a percentage ratio, which I called "OK to Bad".
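To make the arithmetic concrete, this is roughly how such a ratio could be computed; the sketch below is purely illustrative, and the counts are placeholders rather than the real detection numbers behind the chart:

#include <stdio.h>

int main(void)
{
    // Placeholder per-level detection counts; substitute the real figures.
    double untrusted = 0, low = 0, medium = 0, high = 0, system_level = 0;

    double ok    = untrusted + low + medium;  // levels a standard user normally gets
    double bad   = high + system_level;       // elevated levels malware aims for
    double total = ok + bad;

    if (total > 0)
        printf("OK: %.1f%%  Bad: %.1f%%\n", 100.0 * ok / total, 100.0 * bad / total);
    return 0;
}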

Although, I assume, the creators of malware would not view this ratio as being so bad.

"OK to Bad" Ratio

Conclusions

What's the reason for these horrifying statistics? To be honest, I can't say for certain just yet; a deeper study is required.
Sure enough, virus writers employ different methods to elevate privileges: autoelevation and bypassing the UAC mechanism, vulnerabilities in Windows and third-party software, social engineering, etc.

There is a non-zero probability that many users have UAC completely disabled, as it irritates them. However, it is obvious that malware creators encounter no problems with acquiring elevated privileges in Windows; therefore, threat protection developers need to consider this problem.

Lawyer who argued for landmark SCOTUS privacy decision says Trump “is...

In the world of American privacy law, one Supreme Court decision casts a long shadow over all others: Katz v. United States. In that decision, which was handed down in December 1967, the court famously held that the Fourth Amendment "protects people rather than places." Katz countermanded a previous Supreme Court ruling from 1928, which required a physical trespass to prove a Fourth Amendment violation. In Katz, because the FBI placed secret microphones without a warrant on a Los Angeles phone booth to investigate Charles Katz' illegal gambling, the Supreme Court reversed a lower court's decision.

The court found that a telephone booth was, in fact, a place (like a bedroom) where a person has a “reasonable expectation of privacy.” One of the youngest members of Charles Katz’ legal team was Harvey Schneider, then a recent graduate of the University of Southern California law school.
It was Schneider who came up with the legal theory that he presented to the Supreme Court:

We feel that the right to privacy follows the individual and that whether or not he's in a space enclosed by four walls and a ceiling and a roof, or in an automobile, or in any other physical location, is not determinative of the issue of whether or not the communication can ultimately be declared confidential.

The Supreme Court accepted this argument.
Since then, Katz has been used as a stepping stone for other important legal ideas, including the third-party doctrine, which legitimized the National Security Agency metadata program, among other things. So it was in this light that Ars called Schneider, now a retired Los Angeles Superior Court judge, to discuss Katz and its ramifications today. In our brief conversation, Judge Schneider said that he was very worried about the presidential election of Donald Trump and his future Supreme Court nominations.

As of now, President-elect Trump will have at least Antonin Scalia’s seat to fill since his death earlier this year, and he could have more in the coming years. “I think the guy’s a moron,” Schneider said of Trump, “and has no idea of what governance is about. He has surrounded himself with people—[Steve Bannon, Trump's chief strategist] is a white nationalist, some of these other people are cuckoo clocks; it’s going to be a tough ride.
I have a feeling that privacy is just one of the areas where there could conceivably be an erosion."

Question everything

Schneider describes himself as a "technological midget" and eschews social media, but he clearly is someone who has spent time thinking about the contemporary implications of a landmark decision like Katz. When Ars asked him how modern judges should deal with surveillance technologies that we frequently report on, ranging from "network investigative techniques" to cell-site simulators, Schneider urged his fellow officers of the court to ask questions. "There are judges that sign any kind of warrant without much reflection, and there are judges who will dive into it and ask a lot of questions," he said. "It differs between judges." "It's not unusual for an issue to be brought to a court that the particular judge has no particular expertise in.
I had lots of cases, not necessarily privacy cases, where I did not have any understanding of how certain records were kept, in lots of subjects.

The whole process, it’s the lawyer’s job to educate the judge in those subjects.

The judge can’t possibly know everything about everything.
If it were me and I was asked to sign a warrant, I would be asking lots of questions, but not everybody would.” Despite Judge Schneider’s privacy interests, he does tend to lean toward believing the government in most situations. He said, for example, that he “wasn’t happy at all about Snowden.” “To me, national security trumps everything—pardon the expression,” he added. “My premise is that law enforcement is operating in good faith.
If not, all bets are off.

Assuming that they reasonably believe that there is a communication going to and fro, that somebody is planning on placing a bomb somewhere, I don’t think that they shouldn’t try to get somebody and let the bomb explode.
I don’t think that’s necessary.

But it has to be in good faith.

Aside from national security concerns, I think a reasonable expectation of privacy and warrants should apply.” Even though he generally seems to give government the benefit of the doubt, Schneider indicated that he’ll be watching the Trump administration with a skeptical eye. “It may not be pleasant, the next few years, but we will see,” Schneider said.

Bizarre leaked Pentagon video is a science fiction story about the...

Recently we got a peek at what the Army secretly thinks is coming next for humanity.

This short, untitled film was leaked to The Intercept after being screened as part of an "Advanced Special Operations Combating Terrorism" course convened by Joint Special Operations University (JSOU). Originally made by the Army, it's about how troops will deal with megacities in the year 2030. What's surprising is that it acknowledges social problems that the US government usually ignores or denies. Over at The Intercept, Nick Turse explains the film's provenance:

The video was used... for a lesson on "The Emerging Terrorism Threat." JSOU is operated by U.S. Special Operations Command, the umbrella organization for America's most elite troops... Lacking opening and closing credits, the provenance of "Megacities" was initially unclear, with SOCOM claiming the video was produced by JSOU, before indicating it was actually created by the Army. "It was made for an internal military audience to illuminate the challenges of operating in megacity environments," Army spokesperson William Layer told The Intercept in an e-mail. "The video was privately produced pro bono in spring of 2014 based on 'Megacities and the United States Army'...

The producer of the film wishes to remain anonymous.” Turse goes on to make fun of the film’s hyperbolic narrative and cheesy stock photos, which admittedly feel like a propaganda snippet from Starship Troopers.

Despite the terrible delivery, however, the movie does some good science fiction world-building.

The premise is that we’ve mastered urban warfare, but our tactics only work in late 20th-century cities. Megacities, which are usually defined as urban areas with more than 15 million people, will change the game.

The movie explores what social life will be like in such places, especially after climate change has made them more dangerous and the separation between rich and poor has been magnified beyond belief.
In this video, produced by the Pentagon for its Joint Special Operations University, we learn that the future looks like Blade Runner crossed with Hunger Games. That's right—the Pentagon acknowledges climate change as a serious threat.

This stance is not a new one, and it highlights a rift between the armed forces and Congress. Likewise, economic catastrophe is tackled head-on in this video, with the military describing our class-divided future much like the Occupy movement did. With more people living in poverty, cities will become snarls of DIY electrical grids and ad-hoc social systems that rely on “alternate forms of government” and “decentralized economies.” Strikingly, the megacities the Army imagines are like something from the pages of an early William Gibson novel. Rich people with unimaginable technologies live alongside shantytowns, and both groups are knit together by “unaligned individuals” who work “in the shadows” on digital weapons and social media counter-insurgencies.

There, in a world where “social structures will be dysfunctional,” the Army imagines a “nervous system” of non-state actors further eroding national security by “mingling with citizens” to create new threats. Like I said, the delivery is clunky and the imagery comes from stock photos, but whoever made this movie is familiar with many of the concerns in the best of contemporary science fiction. Plus, filmmakers look at future disasters without flinching and without pretending that climate change is a myth.

And that’s why the scariest part of the film is the fact that it offers no solutions, only more combat.

This is the dystopian military future, as imagined by the US military, where the wars go on forever.

Oracle Patches 253 Vulnerabilities in Oct. Security Update

Oracle's latest Critical Patch Update, which fixes 253 vulnerabilities, is the company's second-largest CPU ever. Oracle's patching updates have been growing in recent years.

Oracle released its October Critical Patch Update, fixing 253 different vulnerabilities across the company's product portfolio. The update, released Oct. 18, is the second-largest ever issued by Oracle, outpaced only by the company's July CPU, in which 276 vulnerabilities were patched.

Overall, Oracle's patching updates have been growing in recent years, with 2016 set to be larger than past years. "This year, we have seen the updates getting bigger as compared to an average of 161 vulnerabilities in 2015 and 128 vulnerabilities in 2014," Amol Sarwate, director of Vulnerability Labs at Qualys Inc., wrote in a blog post.

The largest single-product patch haul comes from the Oracle Communications Applications product group, which is receiving fixes for 36 different issues, of which 31 can potentially be exploited over a network without requiring user credentials. Oracle is also patching 31 different issues in the open-source MySQL database. Additionally, there are 12 vulnerabilities being addressed in Oracle's namesake database. Oracle's Fusion Middleware is being patched for 29 security issues, of which 19 are remotely exploitable without user authentication. The October Critical Patch Update also contains seven new security fixes for Oracle Java. All of the Java vulnerabilities may be remotely exploitable without authentication.

John Matthew Holt, CTO of Waratek, isn't surprised at the large number of security issues patched by Oracle. Waratek is an application security firm with a focus on Java applications. Holt commented that the October update is largely the same as many recent updates from Oracle, though that's not to say the issues aren't real and serious.

"Oracle is very serious about improving Java's security, and the evidence of those efforts is the constant stream of CVEs [Common Vulnerabilities and Exposures] that are identified and remediated every 90 days," Holt said.

It's naïve to think that Oracle's Java is disproportionately insecure compared to other contemporary programming languages and frameworks that report fewer CVEs per quarter, Holt said. In fact, Java is more secure because the company has made a disproportionately larger investment in security than others have made in contemporary programming languages, which are under-reporting and under-discovering their own vulnerabilities, he said.

"Given the proactive nature of Oracle's CVE find-and-fix program, they are often ahead of large-scale attacks against vulnerabilities. That's the good news," Holt said. "The bad news is that the IT community is poor at maintaining patch compliance and, as a result, is routinely breached by old CVEs in unpatched Java versions long after a CPU update is available to fix a given CVE."

The October CPU is Oracle's last patch update for 2016, with the next regularly scheduled update currently set for Jan. 17, 2017.

Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist.

Latest Release Makes Unisys ClearPath OS 2200 Software More Secure, Open...

Release 17.0 of the proven OS 2200 operating environment refines support for single sign-on, mobile app development and advanced data analytics

London, UK, October 5, 2016 – Unisys Corporation (NYSE: UIS) today announced a new release of its ClearPath OS 2200 operating-environment software for the company's ClearPath Forward™ Dorado systems. Available immediately, ClearPath OS 2200 Release 17.0 makes it easier and more cost-efficient for IT organisations to secure their critical computing infrastructure, modernise their application environments and transform their data centers into engines for digital business through support for data analytics and other enabling capabilities.

"In a marketplace where change is the only constant, our clients maintain a long-term commitment to OS 2200 as the foundation of the IT infrastructure that runs their business," said Brian Herkalo, director, ClearPath Forward Solutions, Unisys. "OS 2200 Release 17.0 honors Unisys' commitment to continue adding value by providing the greater security, openness, mobility, scalability and flexibility our clients need to remain successful as their business evolves."

OS 2200 Release 17.0 is an integrated, fully tested stack of 107 software components, with many feature updates resulting from direct suggestions by the OS 2200 user community.

Enhancements Build on OS 2200's Unmatched Security

Building on the unsurpassed security of the OS 2200 environment, Unisys has added more than 30 security features. Release 17.0 replaces the prior single sign-on security scheme with one enabled through the Transport Layer Security (TLS) protocol, facilitating access via smart card or workstation. In addition, Release 17.0 refines the Web-based Apex solution for management of the operating environment with the capability to manage file security and quota sets and configure network authentication, as well as enhanced security auditing.

Integration of Widely Available Tools Simplifies Cross-System App Development

OS 2200 Release 17.0 incorporates a range of widely available tools that enable developers with skills in Microsoft .NET and other environments to create interoperable web-based applications.

For example, CS2200, a new product, provides a simplified, socket-based program interface that makes it easier for .NET applications to access OS 2200 files and applications. Similarly, the Open TI capability, which provides access to applications or transactions running on ClearPath Forward systems from a Microsoft Windows environment, now supports 32- and 64-bit Windows environments.

The 64-bit capability includes support for .NET 4.6.1 and higher, enabling developers to take advantage of the latest Windows tool sets and technology. In addition, the new release includes support for the widely used Apache Cordova 5.1.1 mobile application development environment in the ClearPath Forward ePortal for OS 2200 solution.

That capability makes it easier to create contemporary user interfaces that enable Apple, Android, Windows, BlackBerry and other devices to access applications running on OS 2200.

Enhanced Data Analytics and Automation Help Accelerate Data Center Transformation

Release 17.0 includes enhancements to the analytics and automation capabilities that fortify OS 2200-based data centers for digital business.

For example, improved query performance in the Relational Data Management System (RDMS) database leads to faster data access, enhancing support for business operations requiring data warehousing and online analytical processing (OLAP). Significant performance and scaling improvements to the Message Control Bank (MCB) communications processing capability make it possible to increase OS 2200 application workloads without impacting performance or system limits.

Those enhancements support the higher transaction rates, increased number of concurrently running transactions and potential for a greater number of messages in the system queue that can improve performance in advanced data analytics applications. A new version of the Operations Sentinel data-center management solution increases capabilities for automated monitoring of an organisation's IT infrastructure, including Windows 10 and Microsoft Windows Server 2016, ClearPath Forward ePortal for OS 2200 implementations, and backup/restore facilities.

Services Amplify Benefits of Enhanced Software

To amplify their return on IT investment, users of OS 2200 Release 17.0 can benefit from Unisys' expansive, integrated suite of ClearPath Forward Services, including Application Enrichment, Product Implementation and Managed Services.

About Unisys

Unisys is a global information technology company that works with many of the world's largest companies and government organisations to solve their most pressing IT and business challenges. Unisys specialises in providing integrated, leading-edge solutions to clients in government, financial services and commercial markets. With more than 20,000 employees serving clients around the world, Unisys offerings include cloud and infrastructure services, application services, security solutions, and high-end server technology.

For more information, visit www.unisys.com. Follow Unisys on Twitter and LinkedIn.

Contacts:
EMEA: Nick Miles, Unisys EMEA, +44 (0)7808 391543, nick.miles@gb.unisys.com
UK: Jay Jay Merrall-Wyre, Unisys, +44 (0)20 3837 3729, unisys@weareoctopusgroup.net

###

Unisys and other Unisys products and services mentioned herein, as well as their respective logos, are trademarks or registered trademarks of Unisys Corporation.

Any other brand or product referenced herein is acknowledged to be a trademark or registered trademark of its respective holder.
UIS-C