Saturday, October 21, 2017

Tag: threat

Computer system threats come in many forms. Some of the most common today are software attacks, theft of intellectual property, identity theft, theft of equipment or information, sabotage, and information extortion. Most people have experienced a software attack of some sort: viruses, worms, phishing attacks, and Trojan horses are common examples. Theft of intellectual property has also been a persistent problem for many businesses in the IT field; intellectual property is intangible property, such as software or designs, that is usually covered by some form of legal protection, and software theft is probably the most common variety in IT businesses today. Identity theft is the attempt to act as someone else, usually to obtain that person's personal information or to exploit their access to sensitive systems. Theft of equipment or information is becoming more prevalent because most devices today are mobile; cell phones are prone to theft and have become more attractive targets as their data capacity increases. Sabotage usually consists of defacing or destroying an organization's website in an attempt to undermine its customers' confidence. Information extortion is the theft of a company's property or information in an attempt to extract a payment in exchange for returning it. There are many ways to protect yourself from these attacks, but one of the most effective precautions is simple user vigilance.

A Malaysian couple have been charged with sedition after posting a Ramadan greeting on Facebook that showed them eating pork. Alvin Tan and Vivian Lee became notorious in Malaysia for running a sexually explicit blog, but now the government has charged them with sedition over a Facebook photo of the pair eating pork. Apparently eating pork in a Muslim country during Ramadan is a threat to the government's stability. They pleaded not guilty in a Kuala Lumpur district court, and the court denied them bail, so the pork snap means they will remain in jail at least until the next court date, which is set for 23 August. If found guilty of sedition, they face a jail term of up to three years.

According to ZDNet, the government has all sorts of pain lined up for the pair. The bloggers are likely to face charges of causing disharmony on religious grounds and of possessing or producing obscene content, which can lead to a jail term of up to five years. The duo have apologised for the picture, but Malaysia's Umno Youth chief Khairy Jamaluddin rejected the apology and demanded that they be punished. The country's prime minister, Najib Razak, also issued a rebuke: "The insolent and impudent act by the young couple who insulted Islam showed that freedom of expression and irresponsible opinion can jeopardise the community."

Najib is currently trying to replace the current sedition act, but opposition groups have attacked the proposed law as designed to stifle political dissent. If the old law can jail a young couple for eating pork, the new one could be even sillier. Tan and Lee hit the headlines when they posted photographs and videos of their sexual exploits, including close-ups of their genitals. Tan's scholarship at the National University of Singapore was terminated following the public outrage over their blog.
Despite Java being a favorite target of cyber-criminals and online attackers, only 1 percent of all enterprise systems have the latest version installed. Java is the application development language of the Internet. It is everywhere on the Web.

Although it is regularly updated, it has always contained serious flaws that make it an inviting target for hackers and cyber-criminals. In 2012, Java became the software component most targeted by cyber-attackers, but companies have still not worked to cull older versions of the popular software from their systems, according to a research report released on July 18 by security firm Bit9. The study, based on data from more than 1 million endpoints, showed that computers and devices in an enterprise—whether desktops, laptops, servers or point-of-sale systems—had, on average, 1.6 versions of Java installed.

Almost two-thirds of endpoints had two or more versions of the software installed, Bit9 stated in the report. "The solution is that organizations need to take a serious look at their use of Java," Harry Sverdlove, chief technology officer of reputation-based security firm Bit9, told eWEEK. "This is not just one of a million things that organizations can do to improve their security posture—this is the most attacked vector. They need to seriously consider what their policy is and where Java is deployed in their environment."

Java has rapidly become the most exploited software component on computer systems, accounting for the method of compromise in 50 percent of attacks in 2012, compared with 25 percent in 2011, according to security firm Kaspersky.

At the same time, the Adobe PDF format accounted for 28 percent of attacks in 2012, down from 35 percent in 2011. The popularity of Java vulnerabilities among attackers is driven by a number of factors, including the software's widespread use in business environments and its presence on different operating-system platforms. In addition, attacks against the software are quickly created from public vulnerabilities and incorporated into widely available "exploit kits", which allow even non-technical criminals to compromise systems.

Despite these threats, companies still have a significant problem controlling the proliferation of Java versions in their organizations, Sverdlove said. Only 1 percent of organizations had the latest version of Java installed, while more than 90 percent of companies had at least one endpoint with a version of Java more than five years old. "The fact that a majority of observed environments apparently use significantly out-of-date versions of Java points to potential issues in how well the average organization manages its software as well as the large attack surface area presented by Java in the majority of organizations," Bit9's report stated.

The most widely deployed version of Java—Java 6 Update 20—has 96 critical vulnerabilities given the most serious severity rating, a 10.0, under the Common Vulnerability Scoring System (CVSS), according to the report. Security researchers and malware authors alike have looked to Java as a fertile codebase in which to find vulnerabilities. In 2012, 47 highly critical vulnerabilities were discovered in the software, according to NSS Labs, a security analysis firm. With so many vulnerabilities discovered every year, Oracle has focused on locking down Java and making it more difficult for unsigned binaries to affect the operating system.
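The version sprawl Bit9 describes is easy to check for on a single host: ask every `java` binary you can find to report its version. The following is a minimal sketch, not Bit9's methodology; the extra search path is an illustrative assumption for common Linux vendor install locations.

```python
import glob
import shutil
import subprocess

def java_versions(extra_paths=("/usr/lib/jvm/*/bin/java",)):
    """Map each Java binary found on this host to its version banner.

    A host reporting more than one entry is carrying multiple runtimes,
    exactly the proliferation problem described in the report.
    """
    candidates = set()
    on_path = shutil.which("java")          # whatever PATH resolves to
    if on_path:
        candidates.add(on_path)
    for pattern in extra_paths:             # common vendor install dirs
        candidates.update(glob.glob(pattern))

    versions = {}
    for binary in sorted(candidates):
        try:
            # By convention, 'java -version' prints its banner to stderr.
            proc = subprocess.run([binary, "-version"],
                                  capture_output=True, text=True, timeout=10)
            banner = proc.stderr.splitlines()
            versions[binary] = banner[0] if banner else "unknown"
        except (OSError, subprocess.SubprocessError):
            versions[binary] = "unreadable"
    return versions

if __name__ == "__main__":
    found = java_versions()
    for path, ver in found.items():
        print(f"{path}: {ver}")
    if len(found) > 1:
        print("Multiple Java runtimes installed -- consider consolidating.")
```

Run fleet-wide, even a crude inventory like this would surface the "1.6 versions per endpoint" pattern the study found.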
In a blog post in May, Nandini Ramani, the software development lead for Java, told the technology community and Java developers that Oracle would work hard to maintain the "security-worthiness" of the software. "It is our belief that as a result of this ongoing security effort, we will decrease the exploitability and severity of potential Java vulnerabilities in the desktop environment and provide additional security protections for Java operating in the server environment," she said.
There’s no doubt that cybercrime is on the increase. That is the message from multiple sources across both the public and private sectors. Indeed, the Federal Bureau of Investigation (FBI) reported a decline in physical crimes such as bank robberies in 2012 as opposed to cybercrime, which has increased at an alarming rate. Clearly, the risk associated with physical...
READY FOR WHAT'S COMING? Join us at RSA® Conference Europe 2013 and we’ll make sure you are. Our Programme Committee has been hard at work selecting this year’s line-up of speakers and topics. The full agenda is live on 24th July - but here’s a preview glimpse of what’s to come. Session Previews We are still working to finalize our agenda, but the...
Shadowlock creators may have a sense of humor. Or maybe they're from another world.    
As a person who loves the technical specifics of how criminals invade networks, exploit vulnerabilities, and do all the bad stuff that vendors love to scare you with, I'm often dismayed by what we hear from the government. I'm sure it's the case with most government initiatives — don't drown people in the specifics — but after looking through the update to Australia's National Digital Economy Strategy for how we're going to secure ourselves, I'm left wondering: is that it? And didn't we already have something like this previously?

The strategy is meant to be a blueprint for how Australians are going to use the National Broadband Network (NBN) and take advantage of the digital economy to prosper. It's probably a good idea to look into the online space, considering we're building this big network for it and the rocks are going to run out, but I get the feeling we're still this young, naive country that might have a good deal of intelligence, but no street smarts.

The strategy lists safety and security as one of the eight enablers of Australia's digital economy, but that section of the report might as well have been about education. It sounds promising on the surface — a national plan to actually combat cybercrime — but, reading into it, the government is actually talking about big red help buttons, "cyber" bullying, and raising general awareness about installing antivirus software and patching. Granted, these are significant social issues, but where's our national strategy for actually doing something about these attacks? The big red button is more of a big red line in the Department of Broadband, Communications and Digital Economy's books, at a cost of AU$136,000 in 2010. It didn't even get accepted by Apple's iTunes App Store. That wouldn't be so bad if the department could simply tweak the app's source code, but it doesn't even own the code.
A key part of the national plan I mentioned before is the Australian Cybercrime Online Reporting Network, which was actually proposed by the National Cybercrime Working Group (NCWG). What does the NCWG do? It was created in 2009-2010 under the Attorney-General's Department to "promote a nationally consistent approach to combating cybercrime", according to the department's 2009-10 annual report. So if we're only getting part of the national plan after three to four years, how far behind are we? At the same time, we have a federal Cyber Security Strategy (PDF) that we dreamed up in 2009. It also looks at making sure that Australians are aware of the dangers of the internet. What seemed to be a pretty good idea at the time was the Cyber White Paper, titled "Connecting with Confidence: Optimising Australia's Digital Future". That sounded like another paper to help tackle the issue of securing Australians on the internet.

After all, the attorney-general responsible for it at the time, Robert McClelland, said that online criminal activity is such an important national security issue that "there is a need for a cybersecurity whitepaper", and that "our cyber capacity is relevant to our strength as a nation". If you can bear with the overuse of "cyber", the paper was originally meant to cover topics including "cybersafety, cybercrime, cybersecurity, and cyberdefence". Today, try going to the original URL for the Cyber White Paper, and you're met with a 404 Not Found error message. That's because despite the need for a paper on online security, it's been renamed to the Digital White Paper, and has had its scope changed entirely. And what of the confusion over the local computer emergency response teams? Not-for-profit group AusCERT has been pretty much replaced by CERT Australia. But that's OK, because they'll probably be working together in the Cyber Security Operations Centre. Or is that the Australian Cyber Security Centre? The one that, initially, will be staffed mostly by Department of Defence officers? The point is, I've probably only covered a few of the initiatives, plans, proposals, and centres that Australia has in place to counter the scary threat of cyberterrorism, and yet years after we've made it a big deal, we're still talking about educating people, getting strategies in place, and getting our heads around the problem.

And even then, it doesn't look like we're doing a very good job of it. For example, a colleague of mine put me on to an AAP article today that seemed to be especially alarmed that the Department of Defence was seeing 1,800 "cyber incidents" in the last year. I can tell you now, if we think we're only exposed to a little under five attacks a day, we're screwed. I don't mean to belittle the efforts of the Australian government with its well-meaning reports, but coming out with a report in 2009 saying we need to establish a cybersafety group, then doing pretty much the exact same thing in 2013 makes me wonder if we know what we're doing. The fact that we're only now beginning to do things such as polling the private sector for attacks screams that we are blind, oblivious to what's happening, and we're spending millions, if not billions, on a problem we don't actually understand. Then again, I'm sure the folk at the Defence Signals Directorate know what they're doing. Oh, wait, that one's supposed to be called the Australian Signals Directorate now.
More often than not, the information security industry tends to focus on the negative. It's hard not to.

If we're not pointing out our numerous breaches and outdated systems, it's about how we're getting owned by month-old vulnerabilities rather than that flashy zero-day or advanced persistent threat. And while it might sound like the noble and "right" thing to apologise, say that we can do better, and accept that we made stupid, avoidable mistakes, I think a lot of information security professionals are quite often simply too hard on themselves and on their colleagues. Why? Because when security is done right, you rarely ever hear about it. Do we ever hear the CISO being congratulated that yet another week has gone by and the company didn't make the headlines for some oversight? Do we ever hear the guys on the ground congratulated for picking up a bug in their software and closing it before any one of the thousands of attackers probing their system discovered it? Do we ever read, in the company's annual report, how the actions of an incident response team saved however many millions of dollars in lost revenue, reputation damage, non-compliance fines, or potential investigations by privacy regulators? Of course we don't — it's easier to dismiss these things as the normal functions of the job. I wouldn't have so much of a problem with that — every industry has its unsung heroes — but it's when, on top of the existing work they're not acknowledged for, we want, or even demand, that information security professionals go even further.

PayPal, for example, declined to pay out via its bug bounty scheme to 17-year-old Robert Kugler. Kugler discovered a cross-site scripting vulnerability on PayPal's site, but because he hadn't yet turned 18, he was disqualified from participating. My initial reaction was one of disbelief. Was he meant to wait until he turned the right age and leave the site vulnerable until then? And where did it say in PayPal's conditions that he needed to be of a certain age? What does that say about younger hackers? That they're not good enough?
But as much as I thought PayPal's actions were dumb, I couldn't deny the fact that they are doing a hell of a lot more than some other companies.

And when PayPal wrote back to the young hacker and told him, in what felt like a veiled insult, that he had not practised "responsible disclosure", I was angry, but had to slowly force myself to agree. I understand that it's a frustrating experience to know of a vulnerability that should be seemingly trivial to fix and yet see nothing happen, but I've also been on the side of writing code and seeing how small changes can have significant ripple effects across a project.

For example, we face-palm when we don't see prepared statements being used to avoid SQL injection attacks, but actually going through legacy code and rewriting everything can be a nightmare. It takes just a couple of minutes to run a tool like Acunetix against a site, but it can take days to trawl through code — oftentimes not even your own. That's why I don't think it's fair when we expect companies to fix problems within hours of them being reported and demand that engineers and developers do better. We often do so without giving any consideration to what happens behind the scenes, or to how tough a decision it is to completely shut down a service. And caught up in the moment of discovering something they've missed, it's easy to think we know better. Not everyone does it, but it's all too easy to ride the I-told-you-so train of righteousness. That's how we end up with hacktivist groups whose intent is to demonstrate how weak everyone's systems are.

The intent is positive, but oftentimes the vulnerabilities disclosed aren't being actively exploited, and we end up with a bigger mess than we started with. Where does this leave us, then? Do we let companies hide behind the concept of responsible disclosure? Not at all. Disclosure can be an important tool in motivating a company to do something, but we need to be more realistic about the time frames given for fixing certain types of vulnerabilities, on both sides of the fence. The argument isn't for or against full or so-called responsible disclosure; it's about being smart about what is disclosed. Irresponsible bug hunters tend not to think far enough ahead and consider just how badly the information might be abused.

After Google said that we should reduce our definition of what is considered a reasonable disclosure period to seven days, I worry that most bug reporters will see this as Google's blessing for barely-delayed full disclosure on anything they see. However, Google's wording was very deliberate when they qualified their call for the seven day period, stating that it is appropriate for critical vulnerabilities "under active exploitation". I certainly agree that 60 to 90 day disclosure blackouts are antiquated these days, but we need to be careful not to prematurely disclose vulnerabilities that, as sad as it is, were protected by some level of security through obscurity.

The seven-day period really applies to vulnerabilities that are already being so widely exploited that the benefits of letting everyone else know outweigh the negatives of adding them to every hacker's toolbox. If we're not careful about limiting reporting to those vulnerabilities, we're simply creating more work for our already overtaxed information security professionals. Instead of discovering or fixing key, critical vulnerabilities, they're chasing bugs that are being exploited only because they're the hottest thing that someone leaked. It's like the psychiatrist who, while attending to a patient with his own concerns, has to stop what he's doing to deal with the frustrated client shooting everyone in the waiting room to prove he has issues.

As for PayPal, it had to address this issue while treading very carefully. I was one of the many who misplaced their anger, and there are probably countless people out there who didn't pick up on PayPal's message: that part of better disclosure is about not creating undue work for its security team, while also ensuring that people like Kugler get the recognition they deserve. It wasn't about telling Kugler he did the wrong thing and that kids who don't play by the rules aren't welcome — it was about showing him what the best way of helping security teams like PayPal's would have been, and politely suggesting that whoever he might help out in the future would also appreciate the responsible disclosure route.

But as it turns out, PayPal is well within its rights to refuse a payment, because even though its terms and conditions are sketchy, it's covered. It's true that PayPal doesn't clearly state in its bug bounty program that those younger than 18 aren't eligible, but it does clearly state that a verified PayPal account must be used to accept payment. This is important because PayPal's general terms and conditions state that account holders must be 18 years of age or older.
Given this, Kugler would have been unable to accept a payment. It also turns out that PayPal apparently already knew about the bug from a previous reporter. It states in its bounty program that it only awards the first person that reports the vulnerability, making the age debate a non-issue. Could PayPal have done more? Sure.

And it acknowledged that it could have. In its letter back to Kugler it promised him an official letter of recognition from its CISO — a first for its program — thanked him, and even offered to give him a call to have a chat.

Anyone else who came second in a bug bounty could rightfully have been told they were too late. A part of me did feel dejected that this was the best PayPal could muster, but when I consider what PayPal could have done — told the kid to go take a hike — I realised that it's yet another case of so many of us being far too harsh on a company that is trying. The reality is, there are few companies that have their security at a level that would please the most demanding of us, and fewer still that have bug bounty programs. Instead of pointing out the flaws in those that are trying, we should be encouraging others to listen to more bug reporters. We're not going to see others rise to the challenge of rewarding responsible disclosure if the only thing we do is point out the flaws in existing programs.

And sad as it may be, a flawed program is better than none at all.
Advanced persistent threat (APT) is commonly used to refer to cyber threats, in particular Internet-enabled espionage that uses a variety of intelligence-gathering techniques to access sensitive information, but the term applies equally to other threats such as traditional espionage or attack. Other recognized attack vectors include infected media, supply chain compromise, and social engineering. Individuals, such as an individual...
Love it or hate it, Windows 8 is the bellwether for PCs. Where Microsoft goes, PCs follow. And now Microsoft is making a grab for the mobile market, too. The latest version of Windows is designed with touchscreens in mind, and one bright side of that evolution is the addition of features that make Windows more intuitive and easier to use...