
Tag: Digital Certificate

The real reason we can't secure the internet

Last week I speculated that the current horrible state of internet security may well be as good as we're ever going to get. I focused on the technical and historical reasons why I believe that to be true. Today, I'll tell you why I'm convinced that, even if we were able to solve the technical issues, we'll still end up running in place.

Global agreement is tough

Have you ever gotten total agreement on a single issue with your immediate family? If so, then your family is nothing like mine. Heck, I have a hard time getting my wife to agree with 50 percent of what I say. At best I get eye rolls from my kids. Let's just say I'm not cut out to be a career politician.

Now think about trying to get the entire world to agree on how to fix internet security, particularly when most of the internet was created and deployed before it went global. Over the last two decades, just about every major update to the internet we've proposed to the world has been shot down. We get small fixes, but nothing big. We've seen moderate, incremental improvement in a few places, such as better authentication or digital certificate revocation, but even that requires leadership by a giant like Google or Microsoft. Those updates only apply to those who choose to participate -- and they still take years to implement.

Most of the internet's underlying protocols and participants are completely voluntary. That's its beauty and its curse. These protocols have become so widely popular, they're de facto standards. Think about using the internet without DNS. Can you imagine having to remember specific IP addresses to go online shopping?

A handful of international bodies review and approve the major protocols and rules that allow the internet to function as it does today (here's a great summary article on who "runs" the internet).
To that list you should add vendors who make the software and devices that run on and connect to the internet; vendor consortiums, such as the FIDO Alliance; and many other groups that exert influence and control. That diversity makes any global agreement to improve internet security almost impossible. Instead, changes tend to happen through majority rule that drags the rest of the world along. So in one sense, we can get things done even when everyone doesn't agree. Unfortunately, that doesn't solve an even bigger problem.

Governments don't want the internet to be more secure

If there is one thing all governments agree on, it's that they want the ability to bypass people's privacy whenever and wherever the need arises. Even with laws in place to limit privacy breaches, governments routinely and without fear of punishment violate protective statutes. To really improve internet security, we'd have to make every communication stream encrypted and signed by default. But those streams would be invisible to governments, too. That's just not going to happen. Governments want to continue to have unfettered access to your private communications.

Democratic governments are supposedly run by the people for the people. But even in countries where that's the rule of law, it isn't true. All governments invade privacy in the name of protection. That genie will never be put back in the bottle. The people lost. We need to get over it.

The only way it might happen

I've said it before and I'll say it again: The only way I can imagine internet security improving dramatically is if a global tipping-point disaster occurs -- one that allows us to reach shared, broad agreement. Citizen outrage and agreement would have to be so strong that it would override the objections of government. Nothing else is likely to work. I've been waiting for this to happen for nearly three decades, the most recent of which has been marked by unimaginably huge data breaches. I'm not getting my hopes up any time soon.

Moment of truth: Web browsers and the SHA-1 switch

The long-awaited SHA-1 deprecation deadline of Jan. 1, 2017, is almost here.

At that point, we’ll all be expected to use SHA-2 instead.
So the question is: What is your browser going to do when it encounters a SHA-1 signed digital certificate? We’ll delve into the answers in a minute.

But first, let’s review what the move from SHA-1 to SHA-2 is all about.

Getting from SHA-1 to SHA-2

SHA-1 is a cryptographic hash that was officially recommended by NIST.
It’s used to verify digital content, as well as digital certificates and certificate revocation lists (CRLs). Whenever a PKI certification authority (CA) issues a certificate or CRL, it signs it with a hash to assist “consuming” applications and devices with trust verification. In January 2011, SHA-2 became the new, recommended, stronger hashing standard.
SHA-2 is often called “the SHA-2 family of hashes” because it contains hashes of many different lengths, including 224-bit, 256-bit, 384-bit, and 512-bit digests.
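The length difference is easy to see with Python's hashlib; here's a quick illustration (the message is arbitrary):

```python
import hashlib

msg = b"digital certificate"

# SHA-1 yields a 160-bit (20-byte) digest; SHA-256, the most widely used
# member of the SHA-2 family, yields a 256-bit (32-byte) digest.
sha1 = hashlib.sha1(msg).hexdigest()
sha256 = hashlib.sha256(msg).hexdigest()

print(len(sha1) * 4, "bits:", sha1)      # 160 bits
print(len(sha256) * 4, "bits:", sha256)  # 256 bits
```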

The most popular one, by a large margin, is the 256-bit version.

Who declared Jan. 1, 2017 the drop-dead date for SHA-1? Three of the top browser vendors and dozens of other software vendors.

They belong to a vendor consortium called the CA Browser Forum, which publishes requirements for public CAs in its frequently updated Baseline Requirements document. The CA Browser Forum's SHA-1 deprecation requirements apply to all but two types of certificates (covered below), although some browser vendors care only about web server certificates.

Per the CA Browser Forum, no public CA is allowed to issue SHA-1-signed certificates after Jan. 1, 2016, for certificates that expire after Dec. 31, 2016, although in some browsers, any SHA-1 certificate expiring after Dec. 31, 2017, is flagged, regardless of when it was issued. The CA Browser Forum specifically excludes root CA server certificates and cross-CA certificates from the SHA-1 deprecation requirements.
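Those dating rules can be written down as a small predicate. This is only a sketch of the policy as stated above; the function name and the `strict` flag (for browsers that flag any SHA-1 certificate expiring after Dec. 31, 2017) are my own:

```python
from datetime import date

def sha1_cert_out_of_policy(issued: date, expires: date,
                            strict: bool = False) -> bool:
    """True if a SHA-1-signed certificate violates the rules described
    above: issued on or after Jan. 1, 2016 with an expiry after
    Dec. 31, 2016. With strict=True, any SHA-1 certificate expiring
    after Dec. 31, 2017 is flagged regardless of issue date."""
    if strict and expires > date(2017, 12, 31):
        return True
    return issued >= date(2016, 1, 1) and expires > date(2016, 12, 31)

print(sha1_cert_out_of_policy(date(2016, 3, 1), date(2017, 6, 1)))   # True
print(sha1_cert_out_of_policy(date(2015, 6, 1), date(2016, 11, 1)))  # False
```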

This means you do not have to worry about your root CA's certificate, although you probably need to worry about how it signs subordinate CA certificates and CRLs.

Your browser's reaction

Some major browser vendors have been issuing warnings and error messages for two years.

Today, some browsers put an X through the HTTPS indicator (Google Chrome), don’t display the lock icon (Microsoft Edge and Internet Explorer), or simply remove the HTTPS portion of the URL (Apple Safari). Some browsers, such as Firefox, don’t show any indication when consuming an SHA-1 certificate; others may or may not depending on whether you're using a PC or mobile version of the browser.
In some cases, the protection given by the SHA-1 TLS certificate is still active even though the browser appears to indicate that it is not (for example, Chrome, Edge, or Internet Explorer).

SHA-1 deprecation in the major browsers

Certificate types and deprecation evaluation

What certificate types will be evaluated for SHA-1 deprecation? It depends on the browser. The CA Browser Forum says all certificates will be evaluated except for root CA server and cross-CA certificates.

But I have seen browsers that popped up an error message on SHA-1 root CA certificates when they were acting as an “intermediate” root CA in a three- or four-tier PKI hierarchy and on cross-CA certificates. Microsoft will only evaluate certificates that originate from a PKI chain registered in the Microsoft Trusted Root program.

And even those certificates will be evaluated only if they contain the Server Authentication OID.

This is an important point because some TLS certificates may contain the Client Authentication or Workstation Authentication OIDs only. (See Microsoft’s SHA-1 deprecation policy.) Other browser vendors say they will inspect “all” certificates for SHA-1 deprecation, but in practice this always excludes the root CA server certificates and may technically mean only web server or Server Authentication OID certificates.
I’ve had a hard time nailing down browser vendors on exactly which certificates they will include in deprecation checking. Mozilla did confirm it also checks for the deprecated Netscape Step-Up OID. Mozilla Firefox, Google Chrome, and Opera will check both public and private certificates by default, although you can manually register private PKI chains (sometimes called enterprise chains) to be excluded from SHA-1 deprecation checking. You can find Mozilla’s latest SHA-1 deprecation statement here; Google’s can be found here.

As of Jan. 1, 2017, “full” SHA-1 deprecation enforcement is supposed to happen, although Microsoft will actually begin full enforcement on Feb. 14, 2017 (the second Patch Tuesday of the year). Mozilla says it will begin full enforcement in January 2017, with no specific date, whereas Google (and Opera) will begin full enforcement by the end of January 2017. All browsers will eventually evaluate all certificates, public or private, with no exceptions allowed, although this will probably be many years out.

Expect any new advance in SHA-1 cracking to accelerate these timelines and trigger policy updates.

How Clinton could have avoided the Wikileaks fiasco

People who are upset that Hillary Clinton’s personal email server may have been hacked are missing the big picture. Nearly everything that is worth hacking and connected to the internet is already hacked -- and that which is not can be hacked at will. I don’t want to get into the morass of whether Clinton’s use of personal email while she was Secretary of State was legal or ethical.

That’s been debated to death. Instead, I’m talking about whether it was hacked. Could it have been? I'll say it again: Everything is hackable.
Stuxnet took down Iranian centrifuges that were running on an air-gapped private network.

The State Department’s email was hacked -- very likely before, during, and after Clinton's tenure there.

Was Clinton's email server hacked?

As for Clinton's personal email server, the fact is we’ll never know whether it was hacked. Her server ran Microsoft Exchange 2010.

Arrested Romanian hacker Marcel Lazăr (aka Guccifer) claimed he had hacked it.

But beyond his public claim, no evidence has come to light to back it up. The FBI forensic investigation into the server did not corroborate his statement.

As far as I can tell, Guccifer socially engineered her aide, Sidney Blumenthal, out of his AOL account password and nothing more.

The same hacking technique was used against her senior adviser John Podesta for the thousands of emails now shared via Wikileaks.
I’ve yet to hear any evidence that the server itself was exploited. Could someone have hacked the server without leaving evidence? Yes, although it seems unlikely. Most hackers leave behind lots of evidence because it doesn't matter if they do.

Almost no one gets caught, much less prosecuted.

Thus, hackers have become lazy and don’t attempt to clear log files or cover up evidence of their crimes. For the sake of argument, let's say a Russian superhacker broke into Clinton's server without leaving behind signs of compromise.
In that case, wouldn't we see emails other than those coming from two aides? It’s highly unlikely that a hacker would gain complete access, download every email, and fail to leak emails from Hillary and Bill Clinton. Don't get me wrong -- I think plenty of hackers are capable of hacking her server and not leaving behind evidence.

But I seriously doubt those hackers realized the importance of the email server serving up the @clintonemail.com domain.

The FBI’s own investigation revealed the server was scanned and a few hacks were attempted, but none seemed to get through.

How would you hack Clinton’s email server?

This is penetration testing 101.

First, you canvass your target.
It’s Microsoft Exchange 2010 running on Microsoft Windows -- you can get that much by sending a few SMTP query commands to the email service port or running a port scanner like Nmap against the IP address. Using a port scanner and a few fingerprinting apps, you’d likely come away with the Windows version and perhaps even its patch status, along with whatever other services it was running. We know from reports that it was running Microsoft Outlook Web Access (OWA) and Remote Desktop Protocol (RDP) for remote access.
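Banner grabbing of this sort takes only a few lines. Here's a minimal sketch (the hostname below is a placeholder, and probing hosts you don't own requires authorization):

```python
import socket

def grab_banner(host: str, port: int, timeout: float = 5.0) -> str:
    """Read the greeting a service sends on connect. SMTP servers in
    particular tend to announce their software in this first line."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        return s.recv(1024).decode(errors="replace").strip()

# Hypothetical usage against a mail server's SMTP port:
# print(grab_banner("mail.example.test", 25))
# A Microsoft server typically answers with something like:
# "220 mail.example.test Microsoft ESMTP MAIL Service ready at ..."
```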

That helps a lot. OWA means it’s also running Microsoft’s Internet Information Services (IIS).

Any hacker worth his or her salt already has all the possible exploits that might work against Microsoft Windows, IIS, Exchange, and RDP. Lots of hackers like to use the Metasploit Framework, but I’m partial to custom code for each vulnerability. RDP and OWA also give you remote logons to try.

Even if they have account lockout enabled, you can guess slowly.

Better yet, you can guess against the Administrator account.

As long as it hasn’t been renamed, you can guess forever as many times as you like and you won’t get locked out.
If you have Bill's or Hillary’s email address, the logon account name is likely to be the same as their email address.

One of my favorite penetration tests, when I have the time, is to identify all running software and wait until a new vulnerability appears. Microsoft releases new patches at least once a month, and almost every Windows server needs to be patched each time.

All you need to do is wait for the patch announcement and exploit the identified vulnerability before the system administrator can patch it. You usually have a day or so before the admin patches a server, if not longer.

If the exploit gets you on the email server, you can then configure Exchange to forward copies of all new emails. Or you can use a program like ExMerge to suck up every existing email, including deleted ones. Once you're on the server, you can create new accounts, add backdoors, or do pretty much anything else.

A few critics have noted that Clinton’s email server didn’t have SSL protection.

An SSL-enabled page was available, but the system admin never installed an SSL certificate on it.

This means the connections to the server were in plaintext. While not having an SSL cert to protect the server isn’t great, it isn’t necessarily game over.
It isn’t easy to pop onto someone else’s network streams simply because you know they are there. You have to get close to one end of the connection and perform a man-in-the-middle attack on it.
It’s easy to do if you’re already on the local network, but not so easy if you’re not. One of the more interesting feats you can perform with a public email server is to try and take over its domain. Perhaps Clinton’s server is bulletproof -- fully patched and unhackable.

Email hackers are famous for gaining control over DNS domains (in this case, clintonemail.com and wjcoffice.com) and, if successful, redirecting all email and connections headed to those domains to a fraudulent email server. You wouldn’t be able to see preexisting emails, but you'd be able to capture new inbound emails (and all the long threads of previous emails they probably contain).

What would have stopped the leak?

In the social engineering instances, using a system that required two-factor authentication (2FA) would have helped.

Gmail had 2FA available back then, although I’m not sure about AOL.

Clinton should have been using the State Department systems for all business email, and her personal email server should have required 2FA (although the system admin would have to know how to set it up and show the Clintons how to use it). That’s water under the bridge now. What I’m sure Clinton really wishes she had used, besides the State Department email system, is a mechanism that prevents private email from being easily read by unauthorized parties.

There are myriad solutions, including Microsoft’s Rights Management System (RMS). Information protection software such as RMS is pretty nifty.
It encrypts all protected email and requires the user to retrieve an authorized personal digital certificate to view, print, or copy the email.

At any time the personal certificate can be revoked. Hence, if a hacker stole the email, as soon as someone noticed, the certificate could be revoked and the email would become unreadable.

Try posting that to Wikileaks. After all the huge corporate hacking incidents, in which embarrassing private emails were leaked, I’m surprised the email information protection market isn’t growing faster. Remember, we are either hacked or the attackers haven't gotten around to it yet. Your confidential emails should be protected in a manner that prevents them from being shared so easily. What happened to Clinton could absolutely happen to any person in any company who fails to use strong information protection for email.
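The revoke-to-unread flow described above can be modeled in a few lines. This is a toy sketch of the concept only -- real products such as RMS use certificate-based cryptography, whereas this merely models the key-server gate (all names are invented):

```python
import secrets

class KeyServer:
    """Toy model of rights-managed email: messages are sealed under a key
    held server-side, so reading requires a live fetch -- and revoking
    the key renders already-stolen mail unreadable."""
    def __init__(self):
        self._keys = {}

    def issue(self, cert_id: str) -> bytes:
        self._keys[cert_id] = secrets.token_bytes(32)
        return self._keys[cert_id]

    def fetch(self, cert_id: str) -> bytes:
        if cert_id not in self._keys:
            raise PermissionError("certificate revoked")
        return self._keys[cert_id]

    def revoke(self, cert_id: str) -> None:
        self._keys.pop(cert_id, None)

server = KeyServer()
key = server.issue("aide-laptop-cert")
assert server.fetch("aide-laptop-cert") == key  # authorized read works
server.revoke("aide-laptop-cert")               # ...until someone notices
# Any later fetch -- including by a thief holding the ciphertext -- fails.
```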

That’s the real lesson we all should take away.

The only realistic plan to avoid DDoS disaster

Last Friday’s massive DDoS attack against Dyn.com and its DNS services slowed down or knocked out internet connectivity for millions of users for much of the day. Unfortunately, these sorts of attacks cannot be easily mitigated. We have to live with them for now. Huge DDoS attacks that take down entire sites can be accomplished for a pittance.
In the age of the insecure internet of things, hackers have plenty of free firepower.
Say the wrong thing about the wrong person and you can be removed from the web, as Brian Krebs recently discovered. Krebs' warning is not hyperbole.

For my entire career I’ve had to be careful about saying the wrong thing about the wrong person for fear that I or my employers would be taken down or doxxed. Krebs became a victim even with the assistance of some of the world’s best anti-DDoS services.

Imagine if our police communications were routinely taken down simply because they sent out APBs on criminal suspects or arrested them. Online hackers have certainly tried. Plenty of them have successfully hacked the online assets of police departments and doxxed their employees.

Flailing at DDoS attacks

Readers, reporters, and friends have asked me what we can do to stop DDoS attacks, which break previous malicious traffic records every year. We're now seeing DDoS attacks that reach traffic rates exceeding 1Tb per second.

That’s insane! I remember being awed when attacks hit 100Mb per second. You can’t stop DDoS attacks because they can be accomplished anywhere along the OSI model -- and at each level dozens of different attacks can be performed.
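To put that 1Tb-per-second figure in context, a little arithmetic shows why insecure IoT devices give attackers so much free firepower (the per-device upstream rate is an assumption; real rates vary widely):

```python
# How many compromised devices does a 1Tbps flood take, if each IoT
# device can push roughly 1Mbps upstream?
attack_bps = 1_000_000_000_000  # 1 Tbps
per_device_bps = 1_000_000      # 1 Mbps (assumed)

devices_needed = attack_bps // per_device_bps
print(devices_needed)  # -> 1000000, i.e. a million-device botnet
```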

Even if you could secure an intended victim's site perfectly, the hacker could attack upstream until the pain reached a point where the victim would be dropped to save everyone else. Because DDoS attackers use other people's computers or devices, it’s tough to shut down the attacks without taking out command-and-control centers. Krebs and others have helped nab a few of the worst DDoS attackers, but as with any criminal endeavor, new villains emerge to replace those arrested. The threats to the internet go beyond DDoS attacks, of course.

The internet is rife with spam, malware, and malicious criminals who steal tens of millions of dollars every day from unsuspecting victims.

All of this activity is focused on a global network that is more and more mission-critical every day.

Even activities never intended to be online -- banking, health care, control of the electrical grid -- now rely on the stability of the internet. That stability does not exist.

The internet can be taken down by disgruntled teenagers.

What would it take?

Fixing that sad state of affairs would take a complete rebuild of the internet -- version 2.0.
Version 1.0 of the internet is like a hobbyist's network that never went pro.

The majority of it runs on lowest-cost identity verification and next to no trust assurance. For example, anyone can send an email (legitimate or otherwise) to almost any other email server in the world, and that email server will process the message to some extent.
If you repeat that process 10 million times, the same result will occur. The email server doesn’t care if the email claims to be from Donald Trump but originates from Chinese or Russian IP address space.
It doesn’t know if Trump’s identity was verified by using a simple password, two-factor authentication, or a biometric marker.

There’s no way for the server to know whether that email came from the same place as all previous Trump emails or whether it was sent during Trump’s normal work hours.
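That blind acceptance is baked into the protocol itself. Here's a sketch of the SMTP command sequence a client sends -- note that nothing in it is verified (the hostnames and addresses are invented; real deployments bolt on checks like SPF, DKIM, and DMARC afterward):

```python
def spoofed_smtp_session(claimed_from: str, rcpt: str, body: str) -> list:
    """The literal commands that deliver mail claiming to be from anyone.
    Plain SMTP performs no identity verification on any of these lines."""
    return [
        "HELO mailer.example",          # client names itself -- unchecked
        f"MAIL FROM:<{claimed_from}>",  # sender claim -- unchecked
        f"RCPT TO:<{rcpt}>",
        "DATA",
        f"From: {claimed_from}",        # the visible header can lie too
        "",
        body,
        ".",                            # end-of-message marker
        "QUIT",
    ]

session = spoofed_smtp_session("anyone@example.gov", "you@example.org",
                               "Wire the funds today.")
print(session[1])  # -> MAIL FROM:<anyone@example.gov>
```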

The email server simply eats and eats emails, with no way to know whether a particular connection is more or less trustworthy than normal.

Internet 2.0

I believe the world would be willing to pay for a new internet, one in which the minimum identity verification is two-factor or biometric.
I also think that, in exchange for much greater security, people would be willing to accept a slightly higher price for connected devices -- all of which would have embedded crypto chips to assure that a device or person’s digital certificate hadn’t been stolen or compromised. This professional-grade internet would have several centralized services, much like DNS today, that would be dedicated to detecting and communicating about badness to all participants.
If someone’s computer or account was taken over by hackers or malware, that event could quickly be communicated to everyone who uses the same connection. Moreover, when that person’s computer was cleaned up, centralized services would communicate that status to others.

Each network connection would be measured for trustworthiness, and each partner would decide how to treat each incoming connection based on the connection’s rating. This would effectively mean the end of anonymity on the internet.
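A per-connection rating like the one described could be as simple as combining a few boolean trust signals. The signals and threshold below are invented for illustration; a real scheme would draw on the centralized reputation services imagined above:

```python
def connection_trust(two_factor: bool, known_device: bool,
                     usual_hours: bool, clean_reputation: bool) -> float:
    """Average a handful of trust signals into a 0.0-1.0 score."""
    signals = [two_factor, known_device, usual_hours, clean_reputation]
    return sum(signals) / len(signals)

def accept_connection(score: float, minimum: float = 0.75) -> bool:
    """Each partner chooses its own minimum score for incoming traffic."""
    return score >= minimum

score = connection_trust(True, True, True, False)  # 2FA, known device,
print(score, accept_connection(score))             # usual hours -> 0.75 True
```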

For those who prefer today's (relative) anonymity, the current internet would be maintained. But people like me and the companies I've worked for that want more safety would be able to get it.

After all, many services already offer safe and less safe versions of their products.

For example, I’ve been using Internet Relay Chat (IRC) for decades. Most IRC channels are unauthenticated and subject to frequent hacker attacks, but you can opt for a more reliable and secure IRC.
I want the same for every protocol and service on the internet. I’ve been writing about the need for a more trustworthy internet for a decade-plus.

The only detail that has changed is that the internet has become increasingly mission-critical -- and the hacks have grown much worse.

At some point, we won’t be able to tolerate teenagers taking us offline whenever they like. Is that day here yet?

New research shows Cloud and IoT adoption requires organisations to future-proof...

Thales and Ponemon Institute research confirms organisations’ biggest PKI challenge is inability of existing infrastructure to support new applications

Plantation, FL – 11 October 2016 – Thales, leader in critical information systems, cyber security and data protection, announces the results of its 2016 PKI Global Trends Study.

The report, based on independent research by the Ponemon Institute and sponsored by Thales, reveals an increased reliance on public key infrastructures (PKIs) in today’s enterprise environment, driven by the growing use of cloud-based services and applications and the Internet of Things (IoT). More than 5,000 business and IT managers were surveyed in 11 countries: US, UK, Germany, France, Australia, Japan, Brazil, the Russian Federation, Mexico, India, and, for the first time this year, the Middle East (Saudi Arabia and United Arab Emirates), with the aim of better understanding the use of PKI within organisations.

News facts:

- 62% of businesses regard cloud-based services as the most important trend driving the deployment of applications using PKI (50% in 2015), and over a quarter (28%) say IoT will drive this deployment
- PKIs are increasingly used to support more and more applications. On average they support eight different applications within a business – up one from 2015, but in the United States this number went up by three applications
- The most significant challenge organisations face around PKI is the inability of their existing PKIs to support new applications (58% of respondents said this)
- Worryingly, a large percentage of respondents continue to report that they have no certificate revocation techniques
- The use of high assurance mechanisms such as hardware security modules (HSMs) to secure PKI has increased
- The top places where HSMs are deployed to secure PKIs are for the most critical root and issuing certificate authority (CA) private keys, together with offline and online root certificate authorities

Dr. Larry Ponemon, chairman and founder of The Ponemon Institute, says: “As organisations digitally transform their business, they are increasingly relying on cloud-based services and applications, as well as experiencing an explosion in IoT connected devices.

This rapidly escalating burden of data sharing and device authentication is set to apply an unprecedented level of pressure onto existing PKIs, which now are considered part of the core IT backbone, resulting in a huge challenge for security professionals to create trusted environments.
In short, as organisations continue to move to the cloud, it is hugely important that PKIs are future-proofed – sooner rather than later.”

John Grimm, senior director security strategy, Thales e-Security, says: “An increasing number of today’s enterprise applications are in need of digital certificate issuance services — and many PKIs are not equipped to support them.

A PKI needs a strong root of trust to be fit for purpose if it is to support the growing dependency and business criticality of its services.

By securing the process of issuing certificates and managing signing keys in an HSM, organisations can greatly reduce the risk of their loss or theft, thereby creating a high assurance foundation for digital security.

Thales has decades of experience providing HSM-based PKI solutions and services that help organisations deploy world-class PKIs and trusted infrastructures.”

Download your copy of the new 2016 PKI Global Trends Study: http://go.thales-esecurity.com/2016GlobalPKITrends

For industry insight and views on the latest key management trends, check out our blog: www.thales-esecurity.com/blogs

Follow Thales e-Security on Twitter @Thalesesecurity, LinkedIn, Facebook and YouTube

About Thales e-Security

Thales e-Security + Vormetric have combined to form the leading global data protection and digital trust management company.

Together, we enable companies to compete confidently and quickly by securing data at-rest, in-motion, and in-use to effectively deliver secure and compliant solutions with the highest levels of management, speed and trust across physical, virtual, and cloud environments.

By deploying our leading solutions and services, targeted attacks are thwarted and sensitive data risk exposure is reduced with the least business disruption and at the lowest life cycle cost.

Thales e-Security and Vormetric are part of Thales Group. www.thales-esecurity.com

About Thales

Thales is a global technology leader for the Aerospace, Transport, Defence and Security markets. With 62,000 employees in 56 countries, Thales reported sales of €14 billion in 2015. With over 22,000 engineers and researchers, Thales has a unique capability to design and deploy equipment, systems and services to meet the most complex security requirements.
Its exceptional international footprint allows it to work closely with its customers all over the world. Positioned as a value-added systems integrator, equipment supplier and service provider, Thales is one of Europe’s leading players in the security market.

Thales solutions secure the four key domains considered vital to modern societies: government, cities, critical infrastructure and cyberspace. Drawing on renowned cryptographic capabilities, Thales is one of the world leaders in cybersecurity products and solutions for defense, governmental bodies, critical infrastructure operators, and communication, industrial and financial companies. With a presence throughout the information security chain, Thales offers a comprehensive range of services and solutions, ranging from security consulting and audits, data protection, digital trust management, cybersecured system design, development, integration, certification and through-life management to cyber-threat intelligence, intrusion detection and security supervision, with Security Operation Centres in France, the United Kingdom, the Netherlands and soon in Hong Kong.

Press contacts

Thales, Media Relations, Security
Dorothée Bonneil
+33 (0)6 84 79 65 86
dorothee.bonneil@thalesgroup.com

Thales, Media Relations, e-Security
Liz Harris
+44 (0)7973 903648
liz.harris@thales-esecurity.com

Afraid of online hacks? Worry more about your phone

I talk a lot about the security problems and weaknesses of the internet, as well as the devices connected to it.
It’s all true, and we badly need improvements. Yet the irony is that security in our online world is actually better than in our physical world. Think of how many people are scammed by someone phoning to say their computer is infected and needs repair.

As InfoWorld’s Fahmida Rashid recently chronicled, they typically say they’re with Microsoft or a Microsoft partner and that your computer is infected and needs fixing immediately.

They sometimes even pay for the privilege, compromising their credit card numbers in the process. The problem is there's no easy way in the real world to quickly prove whether these phone solicitors are fake or legit.
In the digital world, all the major browser and email makers devote a significant part of their code to detecting pretenders. My browser URL bar turns green in approval when I visit a legitimate website protected by an Extended Validation digital certificate.

That means I can trust it. There’s nothing like that in the physical world.
In the case of the fake Microsoft repair company, the best case I can hope for is to independently call the right Microsoft phone number and ask for verification. Any of Microsoft’s trained responders will readily and quickly tell you that you’re being scammed -- mainly because Microsoft doesn’t proactively call people to tell them their computer is infected.

But unless you know the phone number (800-426-9400) or the Microsoft website, or you enter the right words in an internet search engine, it’s going to take time and possibly a bunch of calls to get an answer. That’s not Microsoft’s fault.
It’s a huge, global company with tons of locations and products.
It has blogged about Microsoft phone scams dozens of times over the years, and it does advertise the right numbers and places to call for such inquiries. However, not everyone has heard of the scams or knows where to go when they have a question, so it takes effort.

Contrast that with looking at a green URL bar in one second. A few times I’ve been called, out of the blue, by a company I’m already affiliated with offers I'd normally be interested in -- say, faster internet for less per month.
It sounds great, and the company is ready to sign me up, but then asks for my “account password.” I ask the representative to tell me the account password on file, and I’ll verify it, but he or she says it doesn’t work that way.

Thus, I hang up.
If I try to call back on the company's general, advertised phone number to find the same deal, it either takes me an hour or I can't locate the right call center at all. My bank recently did the same.
It was proactively calling to report that my debit card had been compromised. My bank had never called me before. How would I know that this complete stranger on the phone is who they say they are? Brian Krebs recently related a story in which digital scammers claiming to be from Google called someone who used a two-factor-enabled Gmail account and asked the user to tell them the code sent to the victim’s phone (via SMS) to verify the account. Luckily, the victim was suspicious and brought in her security-minded dad, and they didn’t give up the code. But it got me thinking.
In this particular instance, two-factor digital authentication was the strongest part of the authentication chain.

The phone call was the weak link and not easily verifiable. The National Institute of Standards and Technology (NIST) now advises that SMS-delivered two-factor authentication codes shouldn't be trusted -- or at least not as much as we once thought.

But to be honest, most of the problems with two-factor authentication using SMS verification apply to the phone, not the computer. We need a system that allows phone calls to be quickly and accurately verified.
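Stronger second factors avoid the SMS channel entirely. App-based one-time codes (RFC 6238 TOTP), for example, are derived on the device itself from a shared secret, so there is no code in transit for a scammer to intercept or phish mid-call. A minimal sketch using only the Python standard library:

```python
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a counter, dynamically truncated."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, at_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP keyed to the current 30-second time window."""
    return hotp(key, at_time // step, digits)

# RFC 6238 test vector: secret "12345678901234567890" at t=59 seconds
print(totp(b"12345678901234567890", 59, digits=8))  # → 94287082
```

The only secret exchanged is the key at enrollment time; every subsequent code is computed locally, which is exactly the property SMS delivery lacks.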
I want EV certificates for the physical world! I want multiple defensive software programs that investigate my incoming calls and alert me if something seems risky.

Today most of those calls come in over cellphones.
I have to think a centralized phone number repository and a local phone app could solve much of the problem. Heck, we’d be able to kill unsolicited junk calls at the same time.

The online world is nowhere near perfectly secure.

But I’m quickly starting to realize that, though insecure, the digital world is often in better shape than the physical world. How about that irony?

Google to Start Labeling as Non-secure Sites That Use HTTP

Starting January 2017, Chrome will explicitly mark web pages as insecure if they use HTTP for transmitting sensitive data. In its self-assumed quest to make the internet a safer place for everyone, Google will soon start publicly shaming websites that ...

SHA-2 shortcut: Easy certificate management for Linux

I spend a lot of time working on enterprise Public Key Infrastructure (PKI), especially in light of the coming SHA-1 deprecation deadlines.
It’s nearly all I do these days.

One question my customers ask all the time is how to provision certificates on non-Windows devices and computers. Microsoft does an excellent job of automating the process to install certificates on Windows computers (that is, automatic enrollment and renewal) using built-in mechanisms.
It makes for low-touch distribution and updating of certificates on Windows computers.

But if you want to enroll for, distribute, or renew digital certificates on non-Windows platforms, it can be hit or miss. Non-Windows devices typically come with built-in digital certificate handling, but usually lack automatic requesting, distribution, installation, and renewal. Microsoft recommends two products: Intune and Microsoft System Center Configuration Manager (SCCM).

Both work well, but many customers who simply want digital certificate handling prefer a more lightweight and focused option.

The same goes for non-Microsoft MDM products, such as AirWatch.

Today, Venafi is the leading solution for total digital certificate control in the enterprise.
It’s an awesome, comprehensive certificate management solution, but you’ll pay top dollar for it and implementation can easily take many months.

There are other, less costly certificate management solutions, but most fail to handle non-Windows devices well.

Introducing CertAccord

That’s why I got excited when longtime friend and consultant Mark Cooper of PKI Solutions told me about a new product in open beta called CertAccord Enterprise, created by him and his brother. CertAccord works with Linux computers; Mac and Unix support are coming soon. You install a lightweight client, which can handle certificate requests automatically or allow admins to request and renew manually.

The clients connect to a server containing the certificate authority bridge (CAB). The CAB acts as the intermediate registration authority and interfaces with the PKI’s issuing certification authority (CA), which right now must be Microsoft Active Directory Certificate Services.

The CAB links to a MySQL database, and both run on a Windows server.

The CAB and MySQL database can be installed on the same server or located on separate servers.

Admins connect to a web-based management console to define one or more certificate policies.

The certificate policies define which devices and certificate actions are allowed. The CertAccord management console allows you to define which CAs the product works with and to register or confirm participating devices.

The biggest selling points of this product, besides bringing Linux into PKI integration, are its quick installation and lightweight client.

Clients connect using the REST API to the CAB server.

Certificates are delivered as standard Linux certificate PEM files or as Java Key Store files. The client agent is a daemon or service process that starts automatically at system boot.
It's responsible for checking in with a CAB server for updated certificate policies and configuration information.
It's also responsible for checking and performing automatic renewals of certificates.

A manual request can be generated using a one-line command, such as:

cmbagent cert create purpose=webserver

Whether the request is automated or requested manually, the agent automates the generation of a local private key using policy data obtained from the CAB.
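The automatic-renewal check itself reduces to date arithmetic against policy. Here's a hedged sketch of how such a check typically works -- the function name and the 80 percent renewal window are illustrative assumptions, not CertAccord's actual policy settings:

```python
from datetime import datetime

def renewal_due(not_before: datetime, not_after: datetime,
                renew_fraction: float = 0.8,
                now: datetime = None) -> bool:
    """True once the configured fraction of the certificate's
    lifetime has elapsed (0.8 = renew during the final 20 percent)."""
    now = now or datetime.utcnow()
    lifetime = not_after - not_before          # a timedelta
    return now >= not_before + lifetime * renew_fraction

# A one-year certificate issued Jan 1, 2016 crosses the 80 percent
# mark in mid-October 2016.
start, end = datetime(2016, 1, 1), datetime(2017, 1, 1)
print(renewal_due(start, end, now=datetime(2016, 6, 1)))    # → False
print(renewal_due(start, end, now=datetime(2016, 11, 15)))  # → True
```

Renewing well before expiration, rather than at it, leaves room for approval delays at the CA without any certificate outage.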

Behind the scenes, it generates a text-based certificate request, signs it, and sends it to the CAB, which then sends the request to the issuing CA.

After the certificate is approved and/or created, it's delivered back to the CAB.

The client picks up the resulting certificate on its next check-in and installs it to the client’s local file system. Depending on the involved PKI-consuming application, the certificate may still need to be configured within the application.
In my experience, many applications will use any valid certificate matching the appropriate usage requirements, but nearly as many require manual configuration.
In many cases, even if manual application configuration is needed, it can be scripted. CertAccord essentially gives non-Windows computers the automated enrollment and renewal services that Windows computers have long enjoyed.
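The final "install to the local file system" step is worth doing carefully on any agent. A minimal illustrative sketch (standard library only; `install_pem` is my own name, not a CertAccord API): sanity-check the PEM framing, then write atomically so a consuming application never reads a half-written certificate:

```python
import os
import ssl
import tempfile

def install_pem(pem_text: str, dest: str) -> None:
    """Validate PEM framing, then install atomically via rename."""
    # Raises ValueError if the BEGIN/END CERTIFICATE framing is malformed.
    ssl.PEM_cert_to_DER_cert(pem_text)
    directory = os.path.dirname(dest) or "."
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".pem.tmp")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(pem_text)
        # Atomic on POSIX: readers see the old file or the new one, never a partial.
        os.replace(tmp, dest)
    except BaseException:
        os.unlink(tmp)
        raise
```

The temp file is created in the destination directory on purpose: a rename is only atomic within a single file system.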

CertAccord is fairly new, but if you need its specific functionality, it’s easy to get up and running to test or deploy. Remember: The deprecation deadline for SHA-1 (Jan. 1, 2017) is coming soon! CertAccord is a great way to get your non-Windows computers updated to SHA-2 with minimal hassle.

Researcher hides stealthy malware inside legitimate digitally signed files

A new technique allows attackers to hide malicious code inside digitally signed files without breaking their signatures and then to load that code directly into the memory of another process. The attack method, developed by Tom Nipravsky, a researcher with cybersecurity firm Deep Instinct, might prove to be a valuable tool for criminals and espionage groups in the future, allowing them to get malware past antivirus scanners and other security products.

The first part of Nipravsky's research, which was presented at the Black Hat security conference in Las Vegas this week, has to do with file steganography -- the practice of hiding data inside a legitimate file. While malware authors have hidden malicious code or malware configuration data inside pictures in the past, Nipravsky's technique stands out because it allows them to do the same thing with digitally signed files.

That's significant because the whole point of digitally signing a file is to guarantee that it comes from a particular developer and hasn't been altered en route. If an executable file is signed, information about its signature is stored in its header, inside a field called the attribute certificate table (ACT) that's excluded when calculating the file's hash -- a unique string that serves as a cryptographic representation of its contents. This makes sense because the digital certificate information is not part of the original file at the time when it is signed.
It's only added later to certify that the file is as its creator intended and has a certain hash.

However, this means that attackers can add data -- including another complete file -- inside the ACT field without changing the file hash or breaking the signature. Such an addition does change the overall file size on disk (which counts the header fields), and that size is checked by Microsoft's Authenticode technology when validating a file signature. However, the file size is specified in three different places inside the file header, and two of those values can be modified by an attacker without breaking the signature.

The problem is that Authenticode checks those two modifiable file size entries and doesn't check the third one. According to Nipravsky, this is a design logic flaw in Authenticode. Had the technology checked the third, unmodifiable file size value, attackers wouldn't be able to pull off this trick and still keep the file signature valid, he said. The malicious data added to the ACT is not loaded into memory when the modified file itself is executed because it's part of the header, not the file body. However, the ACT can serve as a hiding place to pass a malicious file undetected past antivirus defenses. For example, attackers could add their malicious code to one of the many Microsoft-signed Windows system files or to a Microsoft Office file.
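The core of the trick can be shown with a toy model. This is not real Authenticode or PE parsing -- just the principle that a digest which skips the certificate-table region cannot notice bytes added inside that region:

```python
import hashlib

def digest_excluding(data: bytes, skip_start: int, skip_len: int) -> str:
    """Toy Authenticode-style hash: everything except the
    signature region (standing in for the ACT) enters the digest."""
    h = hashlib.sha256()
    h.update(data[:skip_start])
    h.update(data[skip_start + skip_len:])
    return h.hexdigest()

body = b"...executable headers and code..."
signature = b"<signature blob>"
payload = b"<hidden malicious file>"

signed = body + signature
tampered = body + signature + payload  # payload smuggled inside the ACT

# The digest -- and so the signature check in this toy model -- is
# identical, because the grown ACT region is excluded either way.
print(digest_excluding(signed, len(body), len(signature)) ==
      digest_excluding(tampered, len(body), len(signature) + len(payload)))  # → True
```

A whole-file hash would of course differ between the two, which is why checking the one unmodifiable size field would have closed the hole.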

Their signatures would still be valid and the files functional. Moreover, most security applications whitelist these files because they're signed by trusted publisher Microsoft to avoid false positive detections that could delete critical files and crash the system. The second part of Nipravsky's research was to develop a stealthy way to load the malicious executable files hidden inside signed files without being detected. He reverse engineered the whole behind-the-curtain process that Windows performs when loading PE files to memory.

This procedure is not publicly documented because developers don't typically need to do this themselves; they rely on the OS for file execution. It took four months of eight-hours-per-day work, but Nipravsky's reverse engineering efforts allowed him to create a so-called reflective PE loader: an application that can load portable executables directly into the system memory without leaving any traces on disk.

Because the loader uses the exact process that Windows does, it's difficult for security solutions to detect its behavior as suspicious. Nipravsky's loader can be used as part of a stealthy attack chain, where a drive-by download exploit executes a malware dropper in memory.

The process then downloads a digitally signed file with malicious code in its ACT from a server and then loads that code directly into memory. The researcher has no intention of releasing his loader publicly because of its potential for abuse. However, skilled hackers could create their own loader if they're willing to put in the same effort. The researcher tested his reflective PE loader against antivirus products and managed to execute malware those products would have otherwise detected. In a demo, he took a ransomware program that one antivirus product normally detected and blocked, added it to the ACT of a digitally signed file, and executed it with the reflective PE loader. The antivirus product only detected the ransom text file created by the ransomware program after it had already encrypted all of the user's files.
In other words, too late. Even if attackers don't have Nipravsky's reflective PE loader, they can still use the steganography technique to hide malware configuration data inside legitimate files or even to exfiltrate data stolen from organizations.

Data hidden inside a digitally signed file would likely pass network-level traffic inspection systems without problems.

Man-in-the-middle biz Blue Coat bought by Symantec: Infosec bods are worried

HTTPS-buster and root cert bods joining up? Hmm

Analysis Symantec’s deal to buy Blue Coat, the controversial web filtering firm, for $4.65bn will bolster its enterprise security business. But some security experts are concerned about the potential conflict of interest created by housing Symantec’s digital certificate business and Blue Coat’s man-in-the-middle SSL inspection technologies under the same roof.

Business dealings between the two firms have already given cause for concern. Blue Coat sells a range of web and network security appliances and technologies, such as ProxySG, which offers content filtering, authentication, and caching functionality. One of its products is an SSL Visibility Appliance, which sits in the middle of encrypted traffic flows in order to identify threats (such as botnet communications, data exfiltration by hackers, and so on). Blue Coat's technology masquerades as legit websites, while Symantec owns VeriSign, the biggest provider of SSL certificates.

Last month Blue Coat was accused of misusing an intermediate certificate authority, backed by root certificate authority Symantec.

This facility created a means for Blue Coat to issue security certs for almost any website it wanted – certificates that would be implicitly trusted by browsers and apps on PCs, phones and gadgets. Blue Coat said the facility was used for internal testing and that “rumours of misuse are unfounded”.
It also added that “Symantec maintained full control of the private key” -- an assurance weakened by the imminent acquisition of Blue Coat by Symantec.

“The conflict between being simultaneously a certificate authority and certificate exploiter is huge,” said Rob Graham of Errata Security, the developer of BlackICE intrusion prevention software. “The real authorities (Microsoft, Google, Firefox, Apple) have been lax, letting CAs slide, but this time they might do something. On the other hand, Blue Coat is a natural fit for AV [anti-virus], letting customers AV scan things otherwise encrypted with SSL.”

We like the management so much, we bought the company

Blue Coat’s web gateway appliances will be added to Symantec’s existing corporate-focused email and endpoint security as well as its consumer-focused Norton anti-virus software. Traditionally, Symantec’s security sales were split more or less evenly between corporate and consumer sales through its Norton line. Consumer sales have become a legacy business for Symantec because Microsoft has improved its security defences, freemium anti-virus firms such as AVG and Avast are gaining big market share, and competitors and new entrants have outflanked the company in the mobile security software market.

Acquiring Blue Coat will mean that 62 per cent of Symantec's revenues will come from enterprise security, positioning it better to compete with other enterprise security heavyweights such as FireEye, Check Point Software and Palo Alto Networks. Although the shift towards the enterprise strategy is clear, Symantec has no immediate plans to sell its consumer unit, which remains profitable, Reuters reports. Symantec sold its Veritas enterprise software storage business for $7.4bn to a group led by Carlyle Group back in January as part of the same strategy of focusing on the enterprise security software market. ®