
Tag: Digital Certificate

Introducing WhiteBear

As a part of our Kaspersky APT Intelligence Reporting subscription, customers received an update in mid-February 2017 on some interesting APT activity that we called WhiteBear.
It is a parallel project or second stage of the Skipper Turla cluster of activity documented in another private report. Like previous Turla activity, WhiteBear leverages compromised websites and hijacked satellite connections for command and control (C2) infrastructure.
Trickbot malware redirects victims to a counterfeit site that displays the correct URL and the digital certificate of the genuine site.
$950 million deal comes in the wake of Google sanctions on Symantec certs earlier this year.
While the rest of the world had its eyes firmly on the WannaCrypt outbreak, digital certificate firm Comodo suffered an unrelated but protracted database problem that affected its billing systems.…
The Tuesday updates for Internet Explorer and Microsoft Edge force those browsers to flag SSL/TLS certificates signed with the aging SHA-1 hashing function as insecure.

The move follows similar actions by Google Chrome and Mozilla Firefox earlier this year. Browser vendors and certificate authorities have been engaged in a coordinated effort to phase out the use of SHA-1 certificates on the web for the past few years, because the hashing function no longer provides sufficient security against spoofing. SHA-1 (Secure Hash Algorithm 1) dates back to 1995 and has been known to be vulnerable to theoretical attacks since 2005.

The U.S. National Institute of Standards and Technology has banned the use of SHA-1 by U.S. federal agencies since 2010, and digital certificate authorities have not been allowed to issue SHA-1-signed certificates since Jan. 1, 2016, although some exemptions have been made -- for example, for outdated payment terminals.
Last week I speculated that the current horrible state of internet security may well be as good as we're ever going to get. I focused on the technical and historical reasons why I believe that to be true. Today, I'll tell you why I'm convinced that, even if we were able to solve the technical issues, we'll still end up running in place.

Global agreement is tough

Have you ever gotten total agreement on a single issue with your immediate family? If so, then your family is nothing like mine. Heck, I have a hard time getting my wife to agree with 50 percent of what I say. At best I get eye rolls from my kids. Let's just say I'm not cut out to be a career politician.

Now think about trying to get the entire world to agree on how to fix internet security, particularly when most of the internet was created and deployed before it went global. Over the last two decades, just about every major update to the internet we've proposed to the world has been shot down. We get small fixes, but nothing big. We've seen moderate, incremental improvement in a few places, such as better authentication or digital certificate revocation, but even that requires leadership by a giant like Google or Microsoft. Those updates only apply to those who choose to participate -- and they still take years to implement.

Most of the internet's underlying protocols and participants are completely voluntary. That's its beauty and its curse. These protocols have become so widely popular, they're de facto standards. Think about using the internet without DNS. Can you imagine having to remember specific IP addresses to go online shopping? A handful of international bodies review and approve the major protocols and rules that allow the internet to function as it does today (here's a great summary article on who "runs" the internet).
To that list you should add vendors who make the software and devices that run on and connect to the internet; vendor consortiums, such as the FIDO Alliance; and many other groups that exert influence and control. That diversity makes any global agreement to improve internet security almost impossible. Instead, changes tend to happen through majority rule that drags the rest of the world along. So in one sense, we can get things done even when everyone doesn't agree. Unfortunately, that doesn't solve an even bigger problem.

Governments don't want the internet to be more secure

If there is one thing all governments agree on, it's that they want the ability to bypass people's privacy whenever and wherever the need arises. Even with laws in place to limit privacy breaches, governments routinely and without fear of punishment violate protective statutes. To really improve internet security, we'd have to make every communication stream encrypted and signed by default. But those streams would be invisible to governments, too. That's just not going to happen. Governments want to continue to have unfettered access to your private communications. Democratic governments are supposedly run by the people for the people. But even in countries where that's the rule of law, it isn't true. All governments invade privacy in the name of protection. That genie will never be put back in the bottle. The people lost. We need to get over it.

The only way it might happen

I've said it before and I'll say it again: The only way I can imagine internet security improving dramatically is if a global tipping-point disaster occurs -- and allows us to obtain shared, broad agreement. Citizen outrage and agreement would have to be so strong, it would override the objections of government. Nothing else is likely to work. I've been waiting for this to happen for nearly three decades, the most recent of which has been marked by unimaginably huge data breaches. I'm not getting my hopes up any time soon.
The long-awaited SHA-1 deprecation deadline of Jan. 1, 2017, is almost here.

At that point, we’ll all be expected to use SHA-2 instead.
So the question is: What is your browser going to do when it encounters a SHA-1 signed digital certificate? We’ll delve into the answers in a minute.

But first, let’s review what the move from SHA-1 to SHA-2 is all about.

Getting from SHA-1 to SHA-2

SHA-1 is a cryptographic hash officially recommended by NIST.
It’s used to verify digital content, as well as digital certificates and certificate revocation lists (CRLs). Whenever a PKI certification authority (CA) issues a certificate or CRL, it signs it with a hash to assist “consuming” applications and devices with trust verification.  In January 2011, SHA-2 became the new, recommended, stronger hashing standard.
SHA-2 is often called “the SHA-2 family of hashes” because it contains hashes of many different lengths, including 224-bit, 256-bit, 384-bit, and 512-bit digests.
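The different digest sizes are easy to see with Python's standard hashlib module; a quick stdlib-only illustration:

```python
import hashlib

msg = b"example certificate data"

# SHA-1 produces a 160-bit (20-byte) digest; the SHA-2 family offers
# several longer digest sizes, named after their bit lengths.
for name in ("sha1", "sha224", "sha256", "sha384", "sha512"):
    digest = hashlib.new(name, msg).digest()
    print(f"{name}: {len(digest) * 8}-bit digest")
```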

The most popular one is 256 bits by a large margin.

Who declared Jan. 1, 2017, the drop-dead date for SHA-1? Three of the top browser vendors and dozens of other software vendors.

They belong to a vendor consortium called the CA/Browser Forum, which publishes requirements for public CAs in its frequently updated Baseline Requirements document. The CA/Browser Forum’s SHA-1 deprecation requirements apply to all but two types of certificates (covered below), although some browser vendors care only about web server certificates.

Per the CA/Browser Forum, no public CA is allowed to issue SHA-1-signed certificates after Jan. 1, 2016, for certificates that expire after Dec. 31, 2016, although in some browsers, any SHA-1 certificate expiring after Dec. 31, 2017, is flagged, regardless of when it was issued. The CA/Browser Forum specifically excludes root CA server certificates and cross-CA certificates from the SHA-1 deprecation requirements.

This means you do not have to worry about your root CA’s certificate, although you probably need to worry about how it signs subordinate CA certificates and CRLs.

Your browser’s reaction

Some major browser vendors have been issuing warnings and error messages for two years.

Today, some browsers put an X through the HTTPS indicator (Google Chrome), don’t display the lock icon (Microsoft Edge and Internet Explorer), or simply remove the HTTPS portion of the URL (Apple Safari). Some browsers, such as Firefox, don’t show any indication when consuming an SHA-1 certificate; others may or may not, depending on whether you're using a PC or mobile version of the browser.
In some cases, the protection given by the SHA-1 TLS certificate is still active even though the browser appears to indicate that it is not (for example, in Chrome, Edge, or Internet Explorer).

SHA-1 deprecation in the major browsers

Certificate types and deprecation evaluation

What certificate types will be evaluated for SHA-1 deprecation? It depends on the browser. The CA/Browser Forum says all certificates will be evaluated except for root CA server and cross-CA certificates.

But I have seen browsers that popped up an error message on SHA-1 root CA certificates when they were acting as an “intermediate” root CA in a three- or four-tier PKI hierarchy and on cross-CA certificates. Microsoft will only evaluate certificates that originate from a PKI chain registered in the Microsoft Trusted Root program.

Even then, certificates will be evaluated only if they contain the Server Authentication OID.

This is an important point because some TLS certificates may contain the Client Authentication or Workstation Authentication OIDs only. (See Microsoft’s SHA-1 deprecation policy.) Other browser vendors say they will inspect “all” certificates for SHA-1 deprecation, but in practice this always excludes the root CA server certificates and may technically mean only web server or Server Authentication OID certificates.
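In practice that filtering hinges on the extended key usage (EKU) OIDs a certificate carries. The OID values below are the real dotted identifiers; the filtering function itself is a hypothetical sketch of the policy described above, not any browser's actual code:

```python
# Well-known extended key usage (EKU) OIDs
SERVER_AUTH = "1.3.6.1.5.5.7.3.1"           # id-kp-serverAuth
CLIENT_AUTH = "1.3.6.1.5.5.7.3.2"           # id-kp-clientAuth
NETSCAPE_STEP_UP = "2.16.840.1.113730.4.1"  # legacy Netscape Step-Up (Server Gated Crypto)

def should_check_sha1(eku_oids, mozilla_style=False):
    """Hypothetical filter: flag a certificate for SHA-1 deprecation
    checking only if its EKU list marks it as a TLS server certificate.
    With mozilla_style=True, the legacy Step-Up OID also counts."""
    targets = {SERVER_AUTH}
    if mozilla_style:
        targets.add(NETSCAPE_STEP_UP)
    return bool(targets & set(eku_oids))

print(should_check_sha1([SERVER_AUTH, CLIENT_AUTH]))  # True
print(should_check_sha1([CLIENT_AUTH]))               # False
print(should_check_sha1([NETSCAPE_STEP_UP], mozilla_style=True))  # True
```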
I’ve had a hard time nailing down browser vendors on exactly which certificates they will include in deprecation checking. Mozilla did confirm it also checks for the deprecated Netscape Step-Up OID. Mozilla Firefox, Google Chrome, and Opera will check both public and private certificates by default, although you can manually register private PKI chains (sometimes called enterprise chains) to be excluded from SHA-1 deprecation checking. You can find Mozilla’s latest SHA-1 deprecation statement here; Google’s can be found here.

As of Jan. 1, 2017, “full” SHA-1 deprecation enforcement is supposed to happen, although Microsoft will actually begin full enforcement on Feb. 14, 2017 (the second Patch Tuesday of the year). Mozilla says it will begin full enforcement in January 2017, with no specific date, whereas Google (and Opera) will begin full enforcement by the end of January 2017. All browsers will eventually evaluate all certificates, public or private, with no exceptions allowed, although this will probably be many years out.

Expect any new improvement in SHA-1 cracking to accelerate these timelines and prompt policy updates.
People who are upset that Hillary Clinton’s personal email server may have been hacked are missing the big picture. Nearly everything that is worth hacking and connected to the internet is already hacked -- and that which is not can be hacked at will. I don’t want to get into the morass of whether Clinton’s use of personal email while she was Secretary of State was legal or ethical.

That’s been debated to death. Instead, I’m talking about whether it was hacked. Could it have been? I'll say it again: Everything is hackable.
Stuxnet took down Iranian centrifuges that were running on an air-gapped private network.

The State Department’s email was hacked -- very likely before, during, and after Clinton's tenure there.

Was Clinton's email server hacked?

As for Clinton's personal email server, the fact is we’ll never know whether it was hacked. Her server ran Microsoft Exchange 2010.

Arrested Romanian hacker Marcel Lazăr (aka Guccifer) claimed he had hacked it.

But beyond his public claim no evidence has come to light to back up his statement. The FBI forensic investigation into the server did not corroborate his statement.

As far as I can tell, Guccifer socially engineered her aide, Sidney Blumenthal, out of his AOL account password and nothing more.

The same hacking technique was used against her senior adviser John Podesta for the thousands of emails now shared via Wikileaks.
I’ve yet to hear any evidence that the server itself was exploited. Could someone have hacked the server without leaving evidence? Yes, although it seems unlikely. Most hackers leave behind lots of evidence because it doesn't matter if they do.

Almost no one gets caught, much less prosecuted.

Thus, hackers have become lazy and don’t attempt to clear log files or cover up evidence of their crimes. For the sake of argument, let's say a Russian superhacker broke into Clinton's server without leaving behind signs of compromise.
In that case, wouldn't we see emails other than those coming from two aides? It’s highly unlikely that a hacker would gain complete access, download every email, and fail to leak emails from Hillary and Bill Clinton. Don't get me wrong -- I think plenty of hackers are capable of hacking her server and not leaving behind evidence.

But I seriously doubt those hackers realized the importance of the email server serving up the @clintonemail.com domain.

The FBI’s own investigation revealed the server was scanned and a few hacks were attempted, but none seemed to get through.

How would you hack Clinton’s email server?

This is penetration testing 101.

First, you canvass your target.
It’s Microsoft Exchange 2010 running on Microsoft Windows -- you can get that much by sending a few SMTP query commands to the email service port or running a port scanner like Nmap against the IP address. Using a port scanner and a few fingerprinting apps, you’d likely come away with the Windows version and perhaps even its patch status, along with whatever other services it was running. We know from reports that it was running Microsoft Outlook Web Access (OWA) and Remote Desktop Protocol (RDP) for remote access.

That helps a lot. OWA means it’s also running Microsoft’s Internet Information Services (IIS).
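That first fingerprint usually comes from banner grabbing: the mail service greets you with a 220 line that often names the software. A toy parser for such a greeting (the banner string is an invented example, not taken from the real server):

```python
def parse_smtp_banner(banner):
    """Extract the advertised hostname and software hint from an SMTP
    220 greeting, e.g. '220 mail.example.com Microsoft ESMTP ...'."""
    parts = banner.strip().split(None, 2)
    if not parts or parts[0] != "220":
        return None  # not a standard SMTP greeting
    hostname = parts[1] if len(parts) > 1 else ""
    software = parts[2] if len(parts) > 2 else ""
    return {"hostname": hostname, "software": software}

# Hypothetical greeting of the kind an Exchange box emits on port 25
banner = "220 mail.example.com Microsoft ESMTP MAIL Service ready"
info = parse_smtp_banner(banner)
print(info["software"])  # the software hint that reveals the mail stack
```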

Any hacker worth his or her salt already has all the possible exploits that might work against Microsoft Windows, IIS, Exchange, and RDP. Lots of hackers like to use the Metasploit Framework, but I’m partial to custom code for each vulnerability. RDP and OWA also give you remote logons to try.

Even if they have account lockout enabled, you can guess slowly.

Better yet, you can guess against the Administrator account.

As long as it hasn’t been renamed, you can guess forever as many times as you like and you won’t get locked out.
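The arithmetic behind guessing slowly is straightforward: lockout policies count failures within an observation window, so staying below the threshold per window never trips the lockout. A back-of-the-envelope sketch with illustrative policy numbers:

```python
def max_stealth_guesses(days, window_minutes=30, threshold=5):
    """How many password guesses fit in `days` while always staying
    below a lockout policy of `threshold` failures per observation
    window of `window_minutes`? Making threshold-1 attempts per window
    never triggers the lockout. Policy numbers here are illustrative."""
    windows = (days * 24 * 60) // window_minutes
    return windows * (threshold - 1)

# A patient attacker gets thousands of tries per month without a single lockout
print(max_stealth_guesses(30))
```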
If you have Bill's or Hillary’s email address, the logon account name is likely to be the same. One of my favorite penetration tests, when I have the time, is to identify all running software and wait until a new vulnerability appears. Microsoft releases new patches at least once a month, and almost every Windows server needs to be patched each time.

All you need to do is wait for the patch announcement and exploit the identified vulnerability before the system administrator can patch it. You usually have a day or so before the admin patches a server, if not longer. If the exploit gets you on the email server, you can then configure Exchange to forward copies of all new emails. Or you can use a program like ExMerge to suck up every existing email, including deleted ones. Once you're on the server, you can create new accounts, add backdoors, or do pretty much anything else. A few critics have noted that Clinton’s email server didn’t have SSL protection.

The SSL page was available, but the system admin didn’t populate it with an SSL certificate.

This means the connections to the server were in plaintext. While not having an SSL cert to protect the server isn’t great, it isn’t necessarily game over.
It isn’t easy to pop onto someone else’s network streams simply because you know they are there. You have to get close to one end of the connection and perform a man-in-the-middle attack.
It’s easy to do if you’re already on the local network, but not so easy if you’re not. One of the more interesting feats you can perform with a public email server is to try and take over its domain. Perhaps Clinton’s server is bulletproof -- fully patched and unhackable.

Email hackers are famous for gaining control over DNS domains (in this case, clintonemail.com and wjcoffice.com) and, if successful, redirecting all email and connections headed to those domains to a fraudulent email server. You wouldn’t be able to see preexisting emails, but you'd be able to capture new inbound emails (and all the long threads of previous emails they probably contain).

What would have stopped the leak?

In the social engineering instances, using a system that required two-factor authentication (2FA) would have helped.

Gmail had 2FA available back then, although I’m not sure about AOL.
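Most 2FA codes of that era were (and still are) RFC 6238 TOTP values: an HMAC over a time-step counter, truncated to a few digits. A stdlib-only sketch, checked against the RFC's published test vector:

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    if for_time is None:
        for_time = time.time()
    counter = int(for_time) // step                  # time-step counter
    msg = struct.pack(">Q", counter)
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", T=59 -> "94287082"
print(totp(b"12345678901234567890", for_time=59, digits=8))
```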

Clinton should have been using the State Department systems for all business email, and her personal email server should have required 2FA (although the system admin would have to know how to set it up and show the Clintons how to use it). That’s water under the bridge now. What I’m sure Clinton really wishes she had used, besides the State Department email system, is a mechanism that prevents private email from being easily read by unauthorized parties.

There are myriad solutions, including Microsoft’s Rights Management System (RMS). Information protection software such as RMS is pretty nifty.
It encrypts all protected email and requires the user to retrieve an authorized personal digital certificate to view, print, or copy the email.

At any time the personal certificate can be revoked. Hence, if a hacker stole the email, as soon as someone noticed, the certificate could be revoked and the email would become unreadable.
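The revocation idea can be modeled in a few lines: the content key never travels with the message, it lives with a key service, and deleting it makes every copy unreadable. This is a toy model for illustration only -- the keystream construction here is not real cryptography, and RMS itself uses proper ciphers and per-user licenses:

```python
import secrets
import hashlib

KEY_SERVICE = {}  # message id -> content key (held server-side, never in the mail)

def _keystream(key, n):
    # Toy stream derived from the key; a real system would use AES, not this
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def protect(msg_id, plaintext):
    key = secrets.token_bytes(32)
    KEY_SERVICE[msg_id] = key
    ks = _keystream(key, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))

def read(msg_id, ciphertext):
    key = KEY_SERVICE.get(msg_id)
    if key is None:
        raise PermissionError("certificate revoked: message unreadable")
    ks = _keystream(key, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))

def revoke(msg_id):
    KEY_SERVICE.pop(msg_id, None)  # leaked copies instantly become useless

ct = protect("mail-1", b"quarterly numbers, do not forward")
print(read("mail-1", ct))
revoke("mail-1")
# read("mail-1", ct) now raises PermissionError
```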

Try posting that to Wikileaks. After all the huge corporate hacking incidents, in which embarrassing private emails were leaked, I’m surprised the email information protection market isn’t growing faster. Remember, we are either hacked or the attackers haven't gotten around to it yet. Your confidential emails should be protected in a manner that prevents them from being so easy to share. What happened to Clinton could absolutely happen to any person in any company who fails to use strong information protection for email.

That’s the real lesson we all should take away.
Last Friday’s massive DDoS attack against Dyn.com and its DNS services slowed down or knocked out internet connectivity for millions of users for much of the day. Unfortunately, these sorts of attacks cannot be easily mitigated. We have to live with them for now. Huge DDoS attacks that take down entire sites can be accomplished for a pittance.
In the age of the insecure internet of things, hackers have plenty of free firepower.
Say the wrong thing about the wrong person and you can be removed from the web, as Brian Krebs recently discovered. Krebs' warning is not hyperbole.

For my entire career I’ve had to be careful about saying the wrong thing about the wrong person for fear that I or my employers would be taken down or doxxed. Krebs became a victim even with the assistance of some of the world’s best anti-DDoS services. Imagine if our police communications were routinely taken down simply because they sent out APBs on criminal suspects or arrested them. Online hackers have certainly tried. Plenty of them have successfully hacked the online assets of police departments and doxxed their employees.

Flailing at DDoS attacks

Readers, reporters, and friends have asked me what we can do to stop DDoS attacks, which break previous malicious traffic records every year. We're now seeing DDoS attacks that reach traffic rates exceeding 1Tb per second.

That’s insane! I remember being awed when attacks hit 100Mb per second. You can’t stop DDoS attacks because they can be accomplished anywhere along the OSI model -- and at each level dozens of different attacks can be performed.

Even if you could secure an intended victim's site perfectly, the hacker could attack upstream until the pain reached a point where the victim would be dropped to save everyone else. Because DDoS attackers use other people's computers or devices, it’s tough to shut down the attacks without taking out command-and-control centers. Krebs and others have helped nab a few of the worst DDoS attackers, but as with any criminal endeavor, new villains emerge to replace those arrested. The threats to the internet go beyond DDoS attacks, of course.

The internet is rife with spam, malware, and malicious criminals who steal tens of millions of dollars every day from unsuspecting victims.

All of this activity is focused on a global network that is more and more mission-critical every day.

Even activities never intended to be online -- banking, health care, control of the electrical grid -- now rely on the stability of the internet. That stability does not exist.

The internet can be taken down by disgruntled teenagers. What would it take? Fixing that sad state of affairs would take a complete rebuild of the internet -- version 2.0.
Version 1.0 of the internet is like a hobbyist's network that never went pro.

The majority of it runs on lowest-cost identity checks with zero assurance of trust. For example, anyone can send an email (legitimate or otherwise) to almost any other email server in the world, and that email server will process the message to some extent.
If you repeat that process 10 million times, the same result will occur. The email server doesn’t care if the email claims to be from Donald Trump and originates from China or Russia’s IP address space.
It doesn’t know if Trump’s identity was verified by using a simple password, two-factor authentication, or a biometric marker.

There’s no way for the server to know whether that email came from the same place as all previous Trump emails or whether it was sent during Trump’s normal work hours.

The email server simply eats and eats emails, with no way to know whether a particular connection is more or less trustworthy than normal.

Internet 2.0

I believe the world would be willing to pay for a new internet, one in which the minimum identity verification is two-factor or biometric.
I also think that, in exchange for much greater security, people would be willing to accept a slightly higher price for connected devices -- all of which would have embedded crypto chips to assure that a device or person’s digital certificate hadn’t been stolen or compromised. This professional-grade internet would have several centralized services, much like DNS today, that would be dedicated to detecting and communicating about badness to all participants.
If someone’s computer or account was taken over by hackers or malware, that event could quickly be communicated to everyone who uses the same connection. Moreover, when that person’s computer was cleaned up, centralized services would communicate that status to others.
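That clean-up-and-broadcast loop can be pictured as a shared reputation ledger that participants consult before accepting traffic. A minimal sketch of the idea; the scoring scheme is invented purely for illustration:

```python
class TrustLedger:
    """Toy reputation tracker: sources start neutral, reported badness
    lowers the score, and a verified cleanup restores it."""
    START, FLOOR, THRESHOLD = 50, 0, 25

    def __init__(self):
        self.scores = {}

    def report_bad(self, source, penalty=20):
        # A compromise report lowers the source's score, never below the floor
        s = self.scores.get(source, self.START) - penalty
        self.scores[source] = max(self.FLOOR, s)

    def report_cleaned(self, source):
        # A cleanup broadcast restores the source to neutral standing
        self.scores[source] = self.START

    def accept(self, source):
        # Each partner decides how to treat a connection based on its rating
        return self.scores.get(source, self.START) >= self.THRESHOLD

ledger = TrustLedger()
print(ledger.accept("203.0.113.9"))   # unknown source: neutral, accepted
ledger.report_bad("203.0.113.9")
ledger.report_bad("203.0.113.9")      # 50 -> 30 -> 10: below threshold
print(ledger.accept("203.0.113.9"))
ledger.report_cleaned("203.0.113.9")
print(ledger.accept("203.0.113.9"))
```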

Each network connection would be measured for trustworthiness, and each partner would decide how to treat each incoming connection based on the connection’s rating. This would effectively mean the end of anonymity on the internet.

For those who prefer today's (relative) anonymity, the current internet would be maintained. But people like me and the companies I've worked for that want more safety would be able to get it.

After all, many services already offer safe and less safe versions of their products.

For example, I’ve been using Internet Relay Chat (IRC) for decades. Most IRC channels are unauthenticated and subject to frequent hacker attacks, but you can opt for a more reliable and secure IRC.
I want the same for every protocol and service on the internet. I’ve been writing about the need for a more trustworthy internet for a decade-plus.

The only detail that has changed is that the internet has become increasingly mission-critical -- and the hacks have grown much worse.

At some point, we won’t be able to tolerate teenagers taking us offline whenever they like. Is that day here yet?
Thales and Ponemon Institute research confirms organisations’ biggest PKI challenge is the inability of existing infrastructure to support new applications

Plantation, FL – 11 October 2016 – Thales, leader in critical information systems, cyber security and data protection, announces the results of its 2016 PKI Global Trends Study.

The report, based on independent research by the Ponemon Institute and sponsored by Thales, reveals an increased reliance on public key infrastructures (PKIs) in today’s enterprise environment, driven by the growing use of cloud-based services and applications and the Internet of Things (IoT). More than 5,000 business and IT managers were surveyed in 11 countries: US, UK, Germany, France, Australia, Japan, Brazil, the Russian Federation, Mexico, India, and, for the first time this year, the Middle East (Saudi Arabia and United Arab Emirates), with the aim of better understanding the use of PKI within organisations.

News facts:

62% of businesses regard cloud-based services as the most important trend driving the deployment of applications using PKI (50% in 2015), and over a quarter (28%) say IoT will drive this deployment
PKIs are increasingly used to support more and more applications. On average they support eight different applications within a business – up one from 2015 – but in the United States this number went up by three applications
The most significant challenge organisations face around PKI is the inability of their existing PKIs to support new applications (58% of respondents said this)
Worryingly, a large percentage of respondents continue to report that they have no certificate revocation techniques
The use of high assurance mechanisms such as hardware security modules (HSMs) to secure PKI has increased
The top places where HSMs are deployed to secure PKIs are for the most critical root and issuing certificate authority (CA) private keys, together with offline and online root certificate authorities

Dr. Larry Ponemon, chairman and founder of The Ponemon Institute, says: “As organisations digitally transform their business, they are increasingly relying on cloud-based services and applications, as well as experiencing an explosion in IoT connected devices.

This rapidly escalating burden of data sharing and device authentication is set to apply an unprecedented level of pressure onto existing PKIs, which now are considered part of the core IT backbone, resulting in a huge challenge for security professionals to create trusted environments.
In short, as organisations continue to move to the cloud, it is hugely important that PKIs are future-proofed – sooner rather than later.”

John Grimm, senior director of security strategy, Thales e-Security, says: “An increasing number of today’s enterprise applications are in need of digital certificate issuance services — and many PKIs are not equipped to support them.

A PKI needs a strong root of trust to be fit for purpose if it is to support the growing dependency and business criticality of its services.

By securing the process of issuing certificates and managing signing keys in an HSM, organisations can greatly reduce the risk of their loss or theft, thereby creating a high assurance foundation for digital security.

Thales has decades of experience providing HSM-based PKI solutions and services that help organisations deploy world-class PKIs and trusted infrastructures.”

Download your copy of the new 2016 PKI Global Trends Study: http://go.thales-esecurity.com/2016GlobalPKITrends

For industry insight and views on the latest key management trends, check out our blog: www.thales-esecurity.com/blogs

Follow Thales e-Security on Twitter @Thalesesecurity, LinkedIn, Facebook and YouTube

About Thales e-Security

Thales e-Security + Vormetric have combined to form the leading global data protection and digital trust management company.

Together, we enable companies to compete confidently and quickly by securing data at-rest, in-motion, and in-use to effectively deliver secure and compliant solutions with the highest levels of management, speed and trust across physical, virtual, and cloud environments.

By deploying our leading solutions and services, targeted attacks are thwarted and sensitive data risk exposure is reduced with the least business disruption and at the lowest life cycle cost.

Thales e-Security and Vormetric are part of Thales Group. www.thales-esecurity.com

About Thales

Thales is a global technology leader for the Aerospace, Transport, Defence and Security markets. With 62,000 employees in 56 countries, Thales reported sales of €14 billion in 2015. With over 22,000 engineers and researchers, Thales has a unique capability to design and deploy equipment, systems and services to meet the most complex security requirements.
Its exceptional international footprint allows it to work closely with its customers all over the world. Positioned as a value-added systems integrator, equipment supplier and service provider, Thales is one of Europe’s leading players in the security market.

Thales solutions secure the four key domains considered vital to modern societies: government, cities, critical infrastructure and cyberspace. Drawing on renowned cryptographic capabilities, Thales is one of the world leaders in cybersecurity products and solutions for defence, governmental bodies, critical infrastructure operators, communication, industrial and financial companies. With a presence throughout the information security chain, Thales offers a comprehensive range of services and solutions, ranging from security consulting and audits, data protection, digital trust management, cyber-secured system design, development, integration, certification and through-life management to cyber-threat intelligence, intrusion detection and security supervision with Security Operation Centres in France, the United Kingdom, the Netherlands and soon in Hong Kong.

Press contacts

Thales, Media Relations Security
Dorothée Bonneil
+33 (0)6 84 79 65 86
dorothee.bonneil@thalesgroup.com

Thales, Media Relations e-Security
Liz Harris
+44 (0)7973 903648
liz.harris@thales-esecurity.com
I talk a lot about the security problems and weaknesses of the internet, as well as the devices connected to it.
It’s all true, and we badly need improvements. Yet the irony is that security in our online world is actually better than in our physical world. Think of how many people are scammed by someone phoning to say their computer is infected and needs repair.

As InfoWorld’s Fahmida Rashid recently chronicled, they typically say they’re with Microsoft or a Microsoft partner, and your computer is infected and needs fixing immediately. Unfortunately, millions of people fall for this scam and end up installing malicious software on their system.

They sometimes even pay for the privilege, compromising their credit card numbers in the process. The problem is there's no easy way in the real world to quickly prove whether these phone solicitors are fake or legit.
In the digital world, all the major browser and email manufacturers spend a significant part of their coding to detect pretenders. My browser URL bar turns green in approval when I visit a legitimate website protected by an Extended Validation digital certificate.

That means I can trust it. There’s nothing like that in the physical world.
In the case of the fake Microsoft repair company, the best case I can hope for is to independently call the right Microsoft phone number and ask for verification. Any of Microsoft’s trained responders will readily and quickly tell you that you’re being scammed -- mainly because Microsoft doesn’t proactively call people to tell them their computer is infected.

But unless you know the phone number (800-426-9400) or the Microsoft website, or you enter the right words in an internet search engine, it’s going to take time and possibly a bunch of calls to get an answer. That’s not Microsoft’s fault.
It’s a huge, global company with tons of locations and products.
It has blogged about Microsoft phone scams dozens of times over the years, and it does advertise the right numbers and places to call for such inquiries. However, not everyone has heard of the scams or knows where to go when they have a question, so it takes effort.

Contrast that with looking at a green URL bar in one second.

A few times I’ve been called, out of the blue, by a company I’m already affiliated with, offering deals I'd normally be interested in -- say, faster internet for less per month.
It sounds great, and the company is ready to sign me up, but then it asks for my “account password.” I ask the representative to tell me the account password on file so that I can verify it, but he or she says it doesn’t work that way.

Thus, I hang up.
If I try to call back on the general, advertised phone number to get the same deal, it takes me an hour -- or I can't find that call center at all. My bank recently did the same.
It was proactively calling to report that my debit card had been compromised. My bank had never called me before. How would I know that this complete stranger on the phone is who they say they are? Brian Krebs recently related a story in which digital scammers claiming to be from Google called someone who used a two-factor-enabled Gmail account and asked the user to tell them the code sent to the victim’s phone (via SMS) to verify the account. Luckily, the victim was suspicious and brought in her security-minded dad, and they didn’t give up the code. But it got me thinking.
In this particular instance, two-factor digital authentication was the strongest part of the authentication chain.

The phone call was the weak link and not easily verifiable. The National Institute of Standards and Technology (NIST) now advises that two-factor authentication codes sent via SMS aren't to be trusted, or at least not as much as we once thought.

But to be honest, most of the problems with two-factor authentication using SMS verification apply to the phone, not the computer. We need a system that allows phone calls to be quickly and accurately verified.
I want EV certificates for the physical world! I want multiple defensive software programs that investigate my incoming calls and alert me if something seems risky.

Today most of those calls come in over cellphones.
I have to think a centralized phone number repository and a local phone app could solve much of the problem. Heck, we’d easily be able to kill unsolicited junk calls at the same time. The online world is nowhere near perfectly secure.

But I’m quickly starting to realize that, though insecure, the digital world is often in better shape than the physical world. How about that irony?
Starting January 2017, Chrome will explicitly mark web pages as insecure if they use HTTP for transmitting sensitive data. In its self-assumed quest to make the internet a safer place for everyone, Google will soon start publicly shaming websites that ...