
Tag: Open Systems Interconnection (OSI)

MYCOM OSI Assurance suite will be deployed for Next-Generation Cloud-Native Core Network
London, UK, 24th October 2017 – MYCOM OSI, the leading independent provider of Assurance, Automation and Analytics solutions to the world’s largest Comm...
Distributor appointed in Middle East and North Africa (MENA)
Reading, UK – 16 March 2017: Osirium Technologies plc (AIM: OSI.L), a UK-based cyber-security software provider, has today announced that it has appointed its first distribution partner in the MENA region.
Spectrami, a distributor focused on introducing niche and specialist technologies to the market, will now support a further recruitment campaign to onboard key reseller partners. As a further commitment to cement Osirium’s growth strategy in the region, Duncan... Source: RealWire
Deploys MYCOM OSI ProAssure™ digital services quality management
Mobile World Congress, Barcelona, 28th February 2017 – MYCOM OSI, the leading independent provider of Assurance, Automation and Analytics solutions to the world’s largest Communications Service Providers (CSPs), today announced that Deutsche Telekom has deployed MYCOM OSI’s digital Service Quality Management (SQM) solution to assure its next generation corporate broadband service. Deutsche Telekom is one of the largest CSPs in Germany with 40 million mobile and 20 million fixed... Source: RealWire
MYCOM OSI PrOptima™ helps optimize performance, QoS and capacity of consolidated network
Mobile World Congress, Barcelona, 27th February 2017 – MYCOM OSI, the leading independent provider of Assurance, Automation and Analytics solutions to the world’s largest Communications Service Providers (CSPs), today announced that Telefonica Germany is realizing Europe’s largest network merger project using its Network Performance Management solution to optimize network performance, quality of service and capacity for the consolidated network. In 2014 Telefonica acquired KPN’s... Source: RealWire
Reading, UK – 6th February 2017: Osirium Technologies plc (AIM: OSI.L), a UK-based cyber-security software provider, has today announced that its market-leading Privileged Access Management (PAM) product is now available in the Asia Pacific region. Extending into APAC is part of the strategic growth plan for Osirium, which has now been accelerated following the resounding success of a recent intensive roadshow.

The roadshow enabled the company to promote both the benefits of its software,... Source: RealWire
One of the toughest parts of being a computer security pro is trying to figure out what to hang your career on every two to five years. Which new buzzwords will stick to become new paradigms, and which will disappear into the ether? Keeping up with the latest and greatest enterprise tech is part of my job, and no source does it better than InfoWorld, but some “new” trends still end up surprising me. In 2016, we learned that the emerging ecosystem of containers, microservices, and cloud scalability is not a fad.

But it does present new security problems.

Securing containers

In 2015, I talked about securing containers, which were popularized by Docker and are now used throughout the industry and supported by most industry players. Often inaccurately described as “micro-VMs,” containers hold packaged pieces of software bundled with all the components (the software itself, system libraries, the file system) needed to run that software.

Containerized applications share a single instance of the OS, rather than running copies of an OS like VMs do. Since that 2015 article, a handful of companies have offered solutions to help you secure containers, including more default security and support from Docker itself. How hard is it to secure containers? The short answer: It depends on the scenario. Because applications can be abstracted from the operating system, it's easier to patch one without necessarily impacting the other.

At the same time, containers introduce an additional layer of complexity, so container deployments are harder to secure. For one thing, a great benefit of containers is that developers can create and share images much more easily than ever before -- raising the risks of propagating images containing flaws or malware.
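One simple guard against the image-sharing risk described above is to reject image references that use mutable tags (such as `:latest`) and insist on immutable content digests, so the bits you run cannot silently change. A minimal sketch in Python; the function names and the digest-format check are illustrative, not part of any Docker API:

```python
import re

# An image reference pinned by digest (name@sha256:<64 hex chars>) is
# immutable: the content cannot be swapped out underneath you the way
# a mutable tag such as ":latest" can.
DIGEST_RE = re.compile(r"@sha256:[0-9a-f]{64}$")

def is_pinned(image_ref: str) -> bool:
    """Return True if the image reference is pinned to a content digest."""
    return bool(DIGEST_RE.search(image_ref))

def unpinned_images(image_refs):
    """Return the subset of references that still rely on mutable tags."""
    return [ref for ref in image_refs if not is_pinned(ref)]
```

A check like this can run in CI against a deployment manifest, flagging `nginx:latest` while letting `redis@sha256:...` through.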

Also, root access to the host OS provides access to all containerized apps. Read this article by Amir Jerbi of Aqua Security for an excellent rundown of these issues.

Securing microservices

You need to add microservices to your security planning, too. Microservices are the modern method of building web and mobile applications: You break functionality down into separate mini-applications that are loosely coupled by RESTful APIs. Martin Fowler, one of the earliest proponents, describes microservices as “suites of independently deployable services.” Microsoft Azure CTO Mark Russinovich has a great article on microservices as well.

You can think of microservices as an outgrowth of object-oriented coding, where each programming component is coded in such a way that, given the required inputs, it can function with any other component. Yet microservices are stand-alone services that, working in concert, power one or more applications. One of the best aspects of microservices is the ability to run multiple, redundant services, each of which can stand in for the others.
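The loose coupling described above can be sketched in a few lines: each service is an independent handler behind a routing table, and any one of them can be replaced without touching the others. The service names and response shapes here are hypothetical illustrations, not any real framework:

```python
# Each "service" is an independent handler; the application is just
# their composition behind a router. Swapping or restarting one handler
# never touches the others -- the essence of loose coupling.
def users_service(request: dict) -> dict:
    return {"status": 200, "body": {"user": request.get("id")}}

def orders_service(request: dict) -> dict:
    return {"status": 200, "body": {"orders": []}}

ROUTES = {
    "/users": users_service,
    "/orders": orders_service,
}

def dispatch(path: str, request: dict) -> dict:
    """Route a request to the matching service; unknown paths get a 404."""
    handler = ROUTES.get(path)
    if handler is None:
        return {"status": 404, "body": {"error": "no such service"}}
    return handler(request)
```

In a real deployment the router is a gateway or load balancer and each handler is its own process speaking HTTP, but the security-relevant property is the same: each service is a separately deployable, separately patchable unit.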

Administrators can remove, insert, stop, or start related microservices without impacting the whole application. You can patch or update one or more microservice components, and the larger supported application should hum along without a hiccup.

Securing it all

Let's review: We have physical computers and virtual machines. We have public and private clouds. We have containers and microservices.
It’s all running across physical and software-defined networks. Now imagine them all working in concert together to deliver a service or set of services.
In a fully redundant model, you have containers running microservices in VMs in public clouds and/or in your datacenter. How are computer security pros supposed to secure it all? You start by breaking it down into its individual components. You secure all the involved physical computers and networks as you always have. You look at the threats along the OSI model and address your needs. Virtual machines have their own security issues (guest-to-guest, guest-to-host, and host-to-guest risks). Microservices are best handled using Security Development Lifecycle methods and tools.

At their base, microservices are simply software and should be treated like any software that needs to be securely programmed. Like VMs, containers have their own issues, but each container scenario demands a different security approach.

Be sure to check out the Docker security blog and the aforementioned InfoWorld article. The most important recommendation I can give you is that identity is the new security boundary.
I’m not talking user or device logon identities alone, though they play a major role.
I’m also talking about the identities and security contexts that run each of the individual components. Do they share the same namespace? If so, do multiple components run under the same shared identity? If they live in different namespaces, do the involved identities still share common authentication credentials? That would be like someone using the same password across two different, completely unrelated websites. You have to know what libraries and components are shared by different microservices or containers.
If one of the subcomponents has a vulnerability, that means every dependent, upper-layer component has the same vulnerability.
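That propagation is just reachability in the dependency graph: given a map of which components build directly on which, everything transitively downstream of a vulnerable library inherits the flaw. A minimal sketch (the component names in the example are hypothetical):

```python
def affected_components(dependents: dict, vulnerable: str) -> set:
    """Return every component that transitively depends on `vulnerable`.

    `dependents` maps a component to the components built directly on
    top of it, e.g. {"openssl": ["libcurl"], "libcurl": ["app"]}.
    """
    affected, stack = set(), [vulnerable]
    while stack:
        comp = stack.pop()
        for dep in dependents.get(comp, []):
            if dep not in affected:
                affected.add(dep)
                stack.append(dep)
    return affected
```

With the example mapping above, a flaw in "openssl" marks both "libcurl" and "app" as affected, which is exactly why a software inventory of shared libraries matters.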

Can you even patch the dependent subcomponent? Just as security pros endured trials and tribulations patching Java clients, containers and microservices can open the door to the same patching hell. If you don’t know much about containers and microservices, start learning about them today. Done right, containers and microservices can simplify security. Manage them poorly, and you're inviting another security nightmare.
Last Friday’s massive DDoS attack against Dyn.com and its DNS services slowed down or knocked out internet connectivity for millions of users for much of the day. Unfortunately, these sorts of attacks cannot be easily mitigated. We have to live with them for now. Huge DDoS attacks that take down entire sites can be accomplished for a pittance.
In the age of the insecure internet of things, hackers have plenty of free firepower.
Say the wrong thing against the wrong person and you can be removed from the web, as Brian Krebs recently discovered. Krebs' warning is not hyperbole.

For my entire career I’ve had to be careful about saying the wrong thing about the wrong person for fear that I or my employers would be taken down or doxxed. Krebs became a victim even with the assistance of some of the world’s best anti-DDoS services. Imagine if our police communications were routinely taken down simply because they sent out APBs on criminal suspects or arrested them. Online hackers have certainly tried. Plenty of them have successfully hacked the online assets of police departments and doxxed their employees.

Flailing at DDoS attacks

Readers, reporters, and friends have asked me what we can do to stop DDoS attacks, which break previous malicious traffic records every year. We're now seeing DDoS attacks that reach traffic rates exceeding 1Tb per second.

That’s insane! I remember being awed when attacks hit 100Mb per second. You can’t stop DDoS attacks because they can be accomplished anywhere along the OSI model -- and at each level dozens of different attacks can be performed.
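To make the per-layer point concrete, here is one of the standard application-layer (OSI layer 7) mitigations: a token-bucket rate limiter that drops requests from a source once it exhausts its budget. This is a minimal sketch of the general technique, not any particular product's implementation, and it addresses only one of the many attack levels the column describes:

```python
class TokenBucket:
    """Minimal token-bucket limiter, a basic layer-7 DDoS mitigation.

    Tokens refill at `rate` per second up to `capacity`; each request
    spends one token, and a request with no token available is dropped.
    """

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start full
        self.last = 0.0         # timestamp of the previous request

    def allow(self, now: float) -> bool:
        """Decide whether a request arriving at time `now` may pass."""
        elapsed = now - self.last
        self.last = now
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

One bucket per source IP throttles a single abusive client; it does nothing against a distributed flood from thousands of sources, which is precisely why no single layer's defence is sufficient.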

Even if you could secure an intended victim's site perfectly, the hacker could attack upstream until the pain reached a point where the victim would be dropped to save everyone else. Because DDoS attackers use other people's computers or devices, it’s tough to shut down the attacks without taking out command-and-control centers. Krebs and others have helped nab a few of the worst DDoS attackers, but as with any criminal endeavor, new villains emerge to replace those arrested. The threats to the internet go beyond DDoS attacks, of course.

The internet is rife with spam, malware, and malicious criminals who steal tens of millions of dollars every day from unsuspecting victims.

All of this activity is focused on a global network that is more and more mission-critical every day.

Even activities never intended to be online -- banking, health care, control of the electrical grid -- now rely on the stability of the internet. That stability does not exist.

The internet can be taken down by disgruntled teenagers. What would it take? Fixing that sad state of affairs would take a complete rebuild of the internet -- version 2.0.
Version 1.0 of the internet is like a hobbyist's network that never went pro.

The majority of it runs on lowest-cost identity checks with zero assurance of trust. For example, anyone can send an email (legitimate or otherwise) to almost any other email server in the world, and that email server will process the message to some extent.
If you repeat that process 10 million times, the same result will occur. The email server doesn’t care if the email claims to be from Donald Trump yet originates from China’s or Russia’s IP address space.
It doesn’t know if Trump’s identity was verified by using a simple password, two-factor authentication, or a biometric marker.

There’s no way for the server to know whether that email came from the same place as all previous Trump emails or whether it was sent during Trump’s normal work hours.

The email server simply eats and eats emails, with no way to know whether a particular connection is more or less trustworthy than normal.

Internet 2.0

I believe the world would be willing to pay for a new internet, one in which the minimum identity verification is two-factor or biometric.
I also think that, in exchange for much greater security, people would be willing to accept a slightly higher price for connected devices -- all of which would have embedded crypto chips to assure that a device or person’s digital certificate hadn’t been stolen or compromised. This professional-grade internet would have several centralized services, much like DNS today, that would be dedicated to detecting and communicating about badness to all participants.
If someone’s computer or account was taken over by hackers or malware, that event could quickly be communicated to everyone who uses the same connection. Moreover, when that person’s computer was cleaned up, centralized services would communicate that status to others.

Each network connection would be measured for trustworthiness, and each partner would decide how to treat each incoming connection based on the connection’s rating. This would effectively mean the end of anonymity on the internet.
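The per-connection decision described above could be as simple as mapping a trust rating to an action. A toy sketch of that policy; the thresholds and action names are arbitrary illustrations, not part of any proposed standard:

```python
# Each peer carries a trust rating in [0, 1], fed by the centralized
# reputation services the column imagines. The receiver maps the
# rating to an action; thresholds here are illustrative only.
def connection_policy(rating: float) -> str:
    if rating >= 0.8:
        return "accept"
    if rating >= 0.5:
        return "challenge"  # e.g. demand re-authentication first
    return "reject"
```

The interesting design question is not the thresholds but who computes the rating, which is exactly where the proposal ends anonymity.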

For those who prefer today's (relative) anonymity, the current internet would be maintained. But people like me and the companies I've worked for that want more safety would be able to get it.

After all, many services already offer safe and less safe versions of their products.

For example, I’ve been using Internet Relay Chat (IRC) for decades. Most IRC channels are unauthenticated and subject to frequent hacker attacks, but you can opt for a more reliable and secure IRC.
I want the same for every protocol and service on the internet. I’ve been writing about the need for a more trustworthy internet for a decade-plus.

The only detail that has changed is that the internet has become increasingly mission-critical -- and the hacks have grown much worse.

At some point, we won’t be able to tolerate teenagers taking us offline whenever they like. Is that day here yet?
Cloud-based DDoS defences introduce delays

Distributed Denial of Service (DDoS) attacks can be painful and debilitating. How can you defend against them? Originally, out-of-band or scrubbing-centre DDoS protection was the only show in town, but another approach, inline mitigation, provides a viable and automatic alternative.

DDoS attacks can be massive, in some cases reaching hundreds of Gbits/sec, but those mammoths are relatively rare.

For the most part, attackers will flood companies with around 1 Gbit/sec of traffic or less.

They’re also relatively short affairs, with most attacks lasting 30 minutes or less.

This enables attackers to slow down computing resources or take them offline altogether while flying under the radar, making it especially difficult for companies to detect and stop them. This shows up in industry statistics.
In May 2015 the Ponemon Institute published a report on cyberthreats in the financial industry that found it took an average of 27 days for financial institutions to detect a denial of service attack.

Then, it took 13 days to mitigate it. These attacks are often highly costly.

Another Ponemon report showed an average cost of $1.5m per DDoS attack, almost a third of which was down to the cessation of customer-facing services. Yet a DDoS attack costs only about $38 per hour to mount, on average.

Time to get some protection, then.

Inline vs out-of-band

The industry initially evolved with out-of-band DDoS protection.
In this model, the appliance sits on the network independently of the router that passes traffic through from the Internet.

The router will send samples of metadata describing that traffic to the appliance, which then raises the alert if it detects suspicious packets that point to an emerging DDoS attack. Conversely, in-band DDoS protection puts itself in front of the firehose, sitting directly in the stream of traffic, analysing it, processing it, and determining whether to drop the attack traffic or pass the good user traffic along. Inline systems see the traffic on its way from one point on the network to another, enabling them to filter and process traffic in real time.

Conversely, out-of-band appliances see only sparse samples of traffic that has already been passed to its destination. “Out-of-band analysis allows for more complex analysis of traffic without impacting traffic flow; however, there is a delay between the detection of an attack and the application of rules to defend against it,” explained Nick LeMesurier, security consultant at consulting firm MWR Infosecurity.
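Sample-based detection like this can be sketched in a few lines: compare each sampling window's packet count for a source against its recent baseline, and alert when it spikes. This toy detector is an illustration of the general idea, not any vendor's algorithm, and it makes the delay visible: no alert is possible until several windows of history exist:

```python
# Out-of-band detection works on sampled metadata rather than the full
# stream. Flag a source when its packets-per-window count exceeds a
# multiple of the trailing average over the previous `history` windows.
def detect_flood(samples, multiplier=5.0, history=3):
    """`samples` is a list of packets-per-window counts for one source.
    Return the index of the first window that looks like a flood, or -1."""
    for i in range(history, len(samples)):
        baseline = sum(samples[i - history:i]) / history
        if baseline > 0 and samples[i] > multiplier * baseline:
            return i
    return -1
```

An inline appliance would run an equivalent check on every packet as it passes, which is why it can react within the same window instead of windows later.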

For this reason, out-of-band solutions tend to react more slowly to DDoS patterns.

They also aren’t in a position to do anything about it themselves, but must alert another system to take action. Dave Larson, COO and CTO at Corero Network Security, explains: “Deploying an in-line, automatic DDoS mitigation solution allows security teams to stay one step ahead of attackers.

By the time traffic is swung over to an out-of-band DDoS mitigation service, usually after at least 30 minutes of downtime, the damage has already been done.

To keep up with the growing problem of increasingly sophisticated and damaging DDoS attacks, effective solutions need to automatically remove the threats as they occur and provide real-time visibility into the network.”

Commercial issues

Redirection is a key feature in out-of-band systems, said Nathan Dornbrook, chief technology officer at IT and security consulting firm ECS. Traffic must be redirected from the router to the DDoS appliance so that it can conduct a deep-dive packet analysis, he explains.
If you’re a big company and you have two ISPs instead of one for load-balancing purposes, that redirection entails one service provider letting the other inject routes into its core, he warned, calling it a “big no-no”. “It can cause instability to let one of your competitors screw with your routing tables,” he warned. “In addition you’re talking about carrying a lot of bad traffic across your core.” All that creates headaches for the service providers and the customer, who just wants to secure their traffic.

Handling sophisticated attacks

Inline mitigation has developed as a worthy alternative, but this too can be implemented in different ways, points out Dornbrook. “There are other guys that do DDoS protection where they have a content distribution network and some kind of filtering capability and they filter the traffic and pass it on to you and they do it inline,” he said. “Those services definitely have a role to play but they’re better for smaller customers.”

In its paper on withstanding DDoS attacks, the SANS Institute points out that cloud-based services may not protect companies as readily from "low and slow" DDoS attacks, in which incoming packets consume server resources to starve out legitimate traffic without heavily flooding the network. These attacks, typified by attack tools like RUDY and Slowloris, focus on bringing a target down quietly by creating a relatively low number of connections over time.

They will often operate at the application level of the network stack, which is layer seven in the OSI model. “The layer seven attacks are the ones that are the trickiest to pick up on because they tend to exploit weaknesses that are architected into systems when the site is developed,” said Andy Shoemaker, CEO at DDoS research and simulation consulting firm Nimbus DDoS. “It may not use up the network resources but it uses the compute resources.
It hits the database, authentication services and so on.” These attacks can be particularly troublesome, as attackers can bring down a web server unobtrusively, sometimes with a single machine, making them easy to mount and difficult to detect.

In addition to these concerns, cloud-based DDoS services, which are by definition out-of-band, can also introduce delays in protection, which can result in service outages, the SANS paper warns.

If you’re planning an inline solution, you’ll want to be sure that you can scale it to suit your traffic needs. Performance is critical: any inline solution with performance limitations could itself be exploited and become a traffic bottleneck. Right-sizing your traffic flow is a must-have skill here.
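The low-and-slow connections described above have a distinctive signature: they are old but have transferred almost nothing. A simple per-connection heuristic can surface them; the thresholds and the connection-table shape here are illustrative assumptions, not taken from any product:

```python
# Low-and-slow tools (RUDY, Slowloris) hold many connections open while
# sending almost nothing. Flag connections that are both long-lived and
# nearly idle, so they can be closed before the connection pool starves.
def slow_connections(conns, min_age=30.0, max_bytes=512):
    """`conns` maps connection id -> (age_seconds, bytes_received).
    Return the ids worth closing, oldest first."""
    flagged = [cid for cid, (age, nbytes) in conns.items()
               if age >= min_age and nbytes <= max_bytes]
    return sorted(flagged, key=lambda cid: -conns[cid][0])
```

Note the trade-off: thresholds loose enough to catch a patient attacker will also catch genuinely slow clients, which is one reason these attacks are hard to mitigate automatically.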

Choose a line-rate solution that you can cluster to increase performance. And hopefully, some jerk won’t be able to take your company down for profit – or just for the hell of it.
Human curiosity will always trump anti-phishing schemes

Black Hat Research by German academics has shown there's very little that can be done to prevent people spreading malware by clicking on dodgy links in messages, particularly where Facebook is involved.

In a presentation at Black Hat 2016 in Las Vegas today, Zinaida Benenson, leader of the Human Factors in Security and Privacy Group at the University of Erlangen-Nuremberg, detailed how students were recruited for a phishing test. It showed that Layer 8 of the Open Systems Interconnection (OSI) model, the human being, is impossible to fix.

The testers were sent an email or Facebook message from an unknown person claiming to show pictures from a New Year's Eve party and asking the recipient not to share the images. 25 per cent of testees clicked on the email link and 43.5 per cent did the same for the Facebook message.

To further complicate matters, the researchers found that people lied about their actions. While a quarter of the people clicked in the phishing email, only 15.5 per cent admitted doing so, and of the Facebook testees, only 18 per cent reported clicking.

When questioned, the overwhelming reason for clicking on the link was curiosity, and Benenson said there was very little that could be done about it. She described the hyper-sensitive security mindset that questions every email as "James Bond mode," and said it was both unrealistic and, ultimately, unhealthy to live like that.

Curiously, 16 per cent of clickers reported that they knew the sender, so figured it was safe. Another 5 per cent said they were confident their browser would protect them from malware attacks.

While instructing staff to go into James Bond mode is a possibility, Benenson said it would never work all the time. She cited numerous examples of where she had clicked on message links without properly ascertaining the source.
But if you are trying to train staff not to click on suspect links, she explained, doing so could cause more harm than good. Not only does such training mean that some legitimate emails go unanswered and IT staff have to deal with huge numbers of false positives, but it also destroys the staff's trust in the company.

"Digital signing of messages will help, but non-experts often misinterpret digital signatures," Benenson said. "The most important thing companies can do is to stop sending legitimate emails that look phishy. Also, expect mistakes – people will make them and there is nothing we can do about it." ®