Thursday, September 21, 2017

Tag: Tamper

Phishing, watering holes and malware are being used to steal credentials which could be used to tamper with energy supplies.
Research by MWR InfoSecurity found that it's possible to tamper with older Amazon Echo speakers and discreetly turn the home assistants into eavesdropping devices.
Big Blue's new patent aims to protect cryptographic keys and make them tamper-resistant.
Developer, DRM maker both deny accusations as protection gets removed.
The four major makers aren't properly securing critical cardiac devices, report says.
BioWare patches in new, uncracked Denuvo version alongside improvements.
The executive order ultimately led to sanctions against Russia for hacking and other attempts to tamper with the outcome of the US election.
DRM says main goal is "keeping the game safe... during the initial sales window"
A virulent family of malware that infected more than 10 million Android devices last year has made a comeback, this time hiding inside Google Play apps that have been downloaded by as many as 12...
Another shot at spook-proofing e-mail

It's taken longer than first expected, but the first fruits of Lavabit founder Ladar Levison's Dark Mail Technical Alliance have landed with the relaunch of the encrypted mail service he closed in 2013.

After shuttering Lavabit, Levison joined hands with Silent Circle to form the DMTA and promised Lavabit would flow again in 2014. In 2015, Levison posted a GitHub repository putting forward a protocol to support fully “dark” e-mail: the Dark Internet Mail Environment, or DIME, which has “multiple layers of key management and multiple layers of message encryption”.

The Libdime implementation offered both libraries and command-line utilities, which is, after all, doing it the hard way: Lavabit Mark II puts that in the hands of users with the also-open-source Magma Webmail server implementation. The Lavabit mail server, Magma, first appeared on GitHub in 2016.

Levison writes: “DIME provides multiple modes of security (Trustful, Cautious, & Paranoid) and is radically different from any other encrypted platform, solving security problems others neglect.

DIME is the only automated, federated, encryption standard designed to work with different service providers while minimising the leakage of metadata without a centralised authority.

DIME is end-to-end secure, yet flexible enough to allow users to continue using their email without a Ph.D. in cryptology.”

So what's in the protocol? Let's look at the specification, published here (PDF).

[Figure: DIME's message flow. Image: The Dark Mail Technical Alliance]

You don't get perfect security while you've still got wetware involved.

The DIME document notes that if a user has a weak password or bad endpoint security, all bets are off. Within that constraint, the DMTA says DIME is designed to provide “secure and reliable delivery of email, while providing for message confidentiality, tamper protection, and a dramatic reduction in the leakage of metadata to processing agents encountered along the delivery path”.

At the top level, the four components of the system architecture are: e-mail clients; privacy processing agents; key stores (with a resolver architecture to retrieve keys, which DIME calls “signets”); and the encrypted message objects. The Register will assume that, for most users, the only new concept here is the privacy processing agent (PPA).

There are two kinds, the organisational PPA, and the user PPA. The Organisation Privacy Agent (OPA) talks to both user e-mail clients and the Internet at large, handling user key management to create “a secure transit channel that hides all information about the message using transport layer security”.
It also “provides access to the envelope information needed for immediate handling.”

The User Privacy Agent (UPA) handles user-side crypto functions, and can reside in the user's e-mail client or, in Webmail implementations, on the server.

DIME has three modes of operation:

Trustful – the user trusts the server to handle privacy;

Cautious – the server stores and synchs encrypted data, including encrypted copies of private keys and messages. Encryption can be carried out inside a user's browser;

Paranoid – the server never sees a user's keys. There's no Webmail, and if you want to use multiple devices, it's up to you to synch them across different keyrings.

In technical terms, that means the system has to automate all aspects of key management; encrypt and sign messages without a user having to learn how to run it; and resist manipulation (including, ideally, even if a client is compromised). The layering of encryption, the standard says, is designed to protect messages even if (for example) a server along the way is compromised.

DIME relies on a concept of “signets” for keys: organisational signets, which are keys associated with a domain; and user signets, the key associated with an individual e-mail address.

“The basic validation model is to obtain a signet from a credible primary source and then confirm it with another pre-authenticated source.

The two pre-authenticated sources currently available are a management record signed using DNSSEC or a TLS certificate signed by a recognised Certificate Authority (CA).

Both can be cryptographically traced by a signet resolver back to a trusted key that is shipped with the resolver.”

As well as the Webmail version, Lavabit says it wants to develop clients for Windows, Mac OS and iOS, Linux and Android. ®
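As a rough illustration of the validation model quoted above (this is not DIME's actual wire format or API; the function names and digest choice here are invented for the sketch), dual-source validation reduces to obtaining a signet from the credible primary source and accepting it only if a pre-authenticated second source (a DNSSEC-signed management record, or a CA-signed TLS certificate) agrees:

```python
import hashlib

def fingerprint(signet_bytes: bytes) -> str:
    """Fingerprint a signet so copies from two sources can be compared."""
    return hashlib.sha512(signet_bytes).hexdigest()

def validate_signet(primary: bytes, pre_authenticated: bytes) -> bool:
    """Accept a signet only if the credible primary source and a
    pre-authenticated second source (e.g. a DNSSEC-signed record or a
    CA-signed TLS certificate) agree on its contents."""
    return fingerprint(primary) == fingerprint(pre_authenticated)

signet = b"ed25519-public-key-material"
assert validate_signet(signet, signet)            # sources agree: trusted
assert not validate_signet(signet, b"tampered")   # mismatch: rejected
```

The point of requiring two independently rooted sources is that an attacker who compromises the mail server alone cannot substitute a signet without also subverting DNSSEC or the CA-signed certificate chain.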
Blockchains, AI, internet of things (IoT), and other emergent technologies are all going to be major forces in the coming year, notes IT industry analyst firm CompTIA in its IT Industry Outlook 2017 report. But these new-school technologies are hemmed in by some of the industry's oldest and most pervasive problems: lack of qualified people to make the most of them, laggardly approaches to security, and whether or not they represent solutions still looking for a good problem.

What do we do with this?

In CompTIA's eyes, the emergent technologies of the coming year include software-defined components (the enablers of "hyperconverged infrastructure"), blockchain technology, and machine learning/artificial intelligence. As with the cloud environments most of these will prosper in, they're "primarily focused on the back end, and [we] will see initial adoption at the enterprise level before moving downstream into the SMB space." The hard part will be figuring out where they're genuinely useful.

Blockchain's original name-brand application, bitcoin, has turned out to be only one of many possible uses for the tech, with automated contracts or tamper-proof databases on the list as well. Right now it's the biggest of the boys, the likes of IBM and Microsoft, that are making blockchain useful; when this trickles down to small businesses, it'll likely be through tech those companies are already familiar with -- such as databases -- and not necessarily entirely new applications.

There's little question that machine learning and AI have achieved prominence and are already at work enriching many different kinds of applications. But CompTIA asserts that ML/AI "will provide a new layer for technology interaction" -- that is, the manner in which AI can enhance the utility of any given tech is still largely unexplored territory.
IoT, long believed to be a game-changer, also figured into CompTIA's analysis, but with plenty of caveats: "The complexity of IoT and the regulations and protocols required for integration will drive a long adoption cycle." Add to that a patchwork of competing standards, a lack of good expertise, and threats galore -- all good reasons why IoT still remains better in theory than in practice.

Experts (still) wanted

The lack of good expertise isn't confined to IoT; CompTIA sees the difficulties of finding the right people to work with emerging technologies across IT generally. "Finding workers with expertise in emerging tech fields" made the top slot on the report's list of "factors contributing to a more challenging hiring landscape in 2017," ahead of issues like "insufficient pool of talent in locale" and "competing with other tech firms."

CompTIA's list of "emerging job titles to watch for" features many AI and data-centric specializations: chief data officer, data architect, AI/machine learning architect, and data visualizers. The title of "cloud services engineer" could also in theory be stretched to include ML/AI, as cloud platforms offer more. Machine learning may be getting easier to grasp thanks to such services (and the open source toolkits from which many of them spring), but they still require knowledgeable developers to get the most from them.

Other specializations on the list reflect CompTIA's belief that many technologies that have become staple presences in the last couple of years still lack for manpower. Data visualization, for instance, is not new, but the manner in which data is accumulated and analyzed in the enterprise -- and the breadth of new tools available -- has given analytics a new prominence and a need for data visualization experts.
(CompTIA also sees room for data admins moving out of the dev side and interacting more directly with business units to "focus on aggregation, analysis, and visualization.")

Likewise, container technology existed before Docker, but Docker consumerized it and made it broadly accessible, spurring a need for quality talent ("container developer," on CompTIA's list) to make sense of it and put it to good use.

Risky business

Another major skill set on CompTIA's list is security. "Computer security incident responder," "risk management specialist," and "information assurance analyst/security auditor" all show up there, a reflection of CompTIA's sentiment that security for enterprises and small businesses will "get worse before it gets better." As bad as the "headline-making breaches" of the past few years have been, big companies have shown they can shrug off the costs, and the report projects any event that "creates a tipping point will need to have greater consequences before there is a broad shift in transforming security technology, processes, and education."

One possible reason why many outfits "play catchup on recent technology moves" with security, as CompTIA puts it, is the disproportionate distribution and effects of security breaches. Health care companies, for instance, have been hammered heavily by attacks because of the payoffs involved, and they stand to be the most targeted sector for such attacks in 2017.
♪ I've got the key, I've got the secreeeee-eeet ♪

Google has released an open-source technology dubbed Key Transparency, which is designed to offer an interoperable directory of public encryption keys. Key Transparency offers a generic, secure way to discover public keys.

The technology is built to scale up to internet size while providing a way to establish secure communications through untrusted servers.

The whole approach is designed to make encrypted apps easier and safer to use.

Google put together elements of Certificate Transparency and CONIKS to develop Key Transparency, which it made available as an open-source prototype on Thursday. The approach is a more efficient means of building a web of trust than older alternatives such as PGP, as Google security engineers Ryan Hurst and Gary Belvin explain in a blog post.

Existing methods of protecting users against server compromise require users to manually verify recipients' accounts in person.

This simply hasn't worked. The PGP web of trust for encrypted email is just one example: over 20 years after its invention, most people still can't or won't use it, including its original author. Messaging apps, file sharing, and software updates also suffer from the same challenge.

Key Transparency aims to make the relationship between online personas and public keys "automatically verifiable and publicly auditable" while supporting important user needs such as account recovery. "Users should be able to see all the keys that have been attached to an account, while making any attempt to tamper with the record publicly visible," Google's security boffins explain.

The directory will make it easier for developers to create systems of all kinds with independently auditable account data, Google techies add.
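The "publicly visible tampering" property rests on an append-only, publicly auditable data structure. Key Transparency itself uses Merkle-tree proofs inherited from Certificate Transparency and CONIKS; the hash-chain sketch below (hypothetical, not Google's implementation) conveys the core idea: rewriting any past entry changes every subsequent head hash, so an auditor who kept an earlier head can detect the tampering.

```python
import hashlib

class KeyLog:
    """Minimal tamper-evident, append-only log of (account, key) records.
    A real transparency log uses a Merkle tree for efficient proofs; a
    hash chain demonstrates the same auditability property."""

    def __init__(self):
        self.entries = []             # (account, public_key) pairs
        self.heads = [b"\x00" * 32]   # head hash after each append

    def append(self, account: str, public_key: str) -> bytes:
        record = f"{account}:{public_key}".encode()
        # Each head commits to the previous head, hence to all history.
        head = hashlib.sha256(self.heads[-1] + record).digest()
        self.entries.append((account, public_key))
        self.heads.append(head)
        return head

log = KeyLog()
old_head = log.append("alice@example.com", "key-v1")
log.append("alice@example.com", "key-v2")  # key rotation is visible, not silent

# An auditor who saved old_head can check it still lies on the chain;
# a server that rewrote history could not reproduce it.
assert old_head in log.heads
```

Because every key ever attached to an account stays in the log, a user (or an auditor acting for them) can enumerate the full key history and spot a key the server inserted behind their back.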

Google is quick to emphasise that the technology is very much a work in progress. "It's still very early days for Key Transparency. With this first open-source release, we're continuing a conversation with the crypto community and other industry leaders, soliciting feedback, and working toward creating a standard that can help advance security for everyone," they explain.

The project so far has involved collaboration with the CONIKS team and Open Whisper Systems, as well as the security engineering teams at Yahoo! and within Google itself.

Early reaction to the project from some independent experts, such as Matthew Green, a professor of cryptography at Johns Hopkins University, has been positive. Kevin Bocek, chief cyber-security strategist at certificate management vendor Venafi, was much more sceptical. "Google's introduction of Key Transparency is a 'build it and hope the developers will come' initiative," he said. "There is not the clear compelling event as there was with Certificate Transparency, when the fraudulent issuance of digital [certificates] was starting to run rampant. Moreover, building a database of public keys not linked to digital certificates has been attempted before with PGP and never gain[ed] widespread adoption." ®