
Tag: software programs

By Bharath Vasudevan, Product Manager, HPE Software-defined and Cloud Group

In 1932, a carpenter by the name of Ole Kirk Kristiansen founded the company that makes the famous LEGO brick.

According to the company’s history, the name “LEGO” is an abbreviation of two Danish words: “leg godt,” which means “play well.” Most people would agree that those little blocks have lived up to their name.

Today they are as popular as ever because they provide a simple yet structured way to build almost anything anyone could dream up. In the same way, developers and IT operations managers love APIs because they provide a defined way to connect diverse software programs so that operations can be automated.

APIs actually are the building blocks that let software play together.

But how can we be assured that all APIs play well together?
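To make the building-block idea concrete, here is a minimal sketch of consuming a REST API from Python; the endpoint and JSON fields are hypothetical, invented purely for illustration:

```python
# Minimal sketch of consuming a REST API -- the "defined way" in which
# two programs connect. The endpoint and JSON fields are hypothetical.
import json
import urllib.request

with urllib.request.urlopen("https://api.example.com/v1/bricks/42") as resp:
    brick = json.load(resp)

# The API contract tells the caller exactly which fields to expect.
print(brick["color"], brick["studs"])
```

The point is the contract: any client that honors it can snap onto the service, much like one brick onto another.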
TEKsystems, a provider of IT staffing services, released the results of a survey highlighting the best IT jobs and key hiring trends for recent graduates.

The survey asked more than 250 IT hiring managers across the U.S. what they're looking for in ...
Is software getting in the way of our own business success? The notion may seem counterintuitive, but new data suggests that this is, indeed, a reality.

And as you would expect, today’s executives are not happy about it. A new survey released by my company, TrackVia, of 500 business and IT executives found that company leaders are frustrated with the limitations of their current software, which prevent them from operating efficiently. In fact, 82 percent report having to change their operations or processes to match the way their software works, and 76 percent have replaced software programs because they needed updates or customizations that their vendor couldn’t execute or the software itself couldn’t accommodate.
When attackers look for vulnerabilities, they target popular software. Why bother chasing flaws in applications that few people use? That’s why one of my best friends runs a third-party application instead of Adobe Acrobat Reader to open and read PDF documents.

Another friend runs the Maxthon browser to stay out of the way of exploits that target more popular browsers. Does running less popular software really reduce risk? Yes, it probably does a little bit.

But most of us don’t pick what software we run purely because of security. We pick software and hardware based on features, familiarity, support, and so on.

The world’s most secure software programs usually rank among the least popular, even when they are free and very functional.
On 28 October, the cryptocurrency world saw the emergence of a new player, the Zcash (ZEC) cryptocurrency.
Its developers have described it rather figuratively: “If Bitcoin is like HTTP for money, Zcash is HTTPS.” They continue by noting that “unlike Bitcoin, Zcash transactions can be shielded to hide the sender, the recipient and value of all transactions.” The cryptocurrency market has been looking for this level of anonymity for a while now, so ZEC has attracted considerable interest from investors, miners and cybercriminals alike.
Several major cryptocurrency exchanges were quick to offer support for the new currency. Zcash got off to a flying start; within the first few hours, 1 ZEC reached $30,000.
It should be pointed out, however, that there were only a few dozen coins in existence at that time, so the actual turnover was very low. In the following days, ZEC’s value steadily declined against Bitcoin.

At the time of writing, it had leveled out temporarily at 0.07–0.01 BTC per ZEC (around $70).

Despite this dramatic drop from the initial values (which was anticipated), Zcash mining remains among the most profitable compared to other cryptocurrencies.

[Figure: ranking of cryptocurrency mining profitability, as reported by the CoinWarz website]

This has led to the revival of a particular type of cybercriminal activity – the creation of botnets for mining.

A few years ago, botnets were created for bitcoin mining, but the business all but died out after it became only marginally profitable. In November, we recorded several incidents where Zcash mining software was installed on users’ computers without permission.

Because these software programs are not malicious in themselves, most anti-malware programs either do not react to them or detect them as potentially unwanted programs (PUP). Kaspersky Lab products detect them as not-a-virus:RiskTool.Win64.BitCoinMiner.

Cybercriminals use rather conventional ways to distribute mining software – it is installed under the guise of other, legitimate programs, such as pirated software distributed via torrents.
So far, we have not seen any cases of mass-mailings or vulnerabilities in websites being exploited to distribute mining software; however, provided mining remains as profitable as it is now, this is only a matter of time.

The software can also be installed on computers that were infected earlier and became part of a for-rent botnet. The most popular mining software to date is nheqminer from the mining pool Micemash.
It has two known variations: one earns payments in bitcoins, the other in Zcash.

Both are detected by Kaspersky Lab products, with the respective verdicts not-a-virus:RiskTool.Win64.BitCoinMiner.bez and not-a-virus:RiskTool.Win64.BitCoinMiner.bfa. All that cybercriminals need to do to start profiting from a mining program on infected computers is to launch it and provide details of their own bitcoin or Zcash wallets.

After that, the “coin mining” profit created by the pool will be credited to the cybercriminals’ addresses, from where it can be withdrawn and exchanged for US dollars or other cryptocurrencies.

This is what allows us to ‘snoop’ on some of the wallets used by cybercriminals. Here’s just one example. Using a wallet’s address (https://explorer.zcha.in/accounts/t1eVeeBYfPPLgonvi1zk8e9SnrhZdoCBAeM), we can find out how much money arrived and from which source (i.e. the mining pool). This particular address was created on 31 October, just a couple of days after Zcash launched, and payments are still being made to it at the time of writing.
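As an illustration, here is a sketch of pulling those figures programmatically. It assumes the explorer exposes a JSON endpoint mirroring its URL scheme; the api.zcha.in host and the response field names below are assumptions, not a documented API:

```python
# Sketch: query a Zcash block explorer for a public (t-address) account.
# The API host and response field names are assumptions based on the
# explorer URL shown above; adjust them to the explorer's documented API.
import json
import urllib.request

ADDR = "t1eVeeBYfPPLgonvi1zk8e9SnrhZdoCBAeM"  # the wallet from the example
URL = f"https://api.zcha.in/v2/mainnet/accounts/{ADDR}"

with urllib.request.urlopen(URL) as resp:
    account = json.load(resp)

print("total received:", account.get("totalRecv"))  # field name assumed
print("first seen:", account.get("firstSeen"))      # field name assumed
```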

You may be wondering what happened to the promised anonymity. Actually, there are two types of wallets in Zcash: completely private wallets (z-addresses) and public wallets like the one shown above (t-addresses).

At the current time, the completely private wallets are not very popular (they are not supported by exchanges) and are used to store only around 1% of all existing Zcash coins.

We found approximately 1,000 unique users who have some version of the Zcash miner installed on their computers under a different name, which suggests these computers were infected without their owners’ knowledge.

An average computer can mine about 20 hashes per second; a thousand infected computers can mine about 20,000 hashes a second.
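As a back-of-the-envelope sketch of how such an estimate is derived: the botnet's expected income is its share of the total network hash rate multiplied by the coins issued per month. The network hash rate and block reward below are illustrative assumptions, chosen to roughly reproduce the figure that follows:

```python
# Rough miner-revenue estimate. The network hash rate and block reward
# are illustrative assumptions, not measured November 2016 values.
BOTNET_RATE = 20_000       # H/s: 1,000 bots at ~20 H/s each (from above)
NETWORK_RATE = 40_000_000  # H/s: total network hash rate (assumption)
BLOCK_REWARD = 10          # ZEC per block to miners (assumption)
BLOCK_INTERVAL = 150       # seconds between Zcash blocks
ZEC_PRICE = 70             # USD per ZEC (as reported above)

blocks_per_month = 30 * 24 * 3600 / BLOCK_INTERVAL  # ~17,280 blocks
share = BOTNET_RATE / NETWORK_RATE                  # the botnet's slice
zec_per_month = share * blocks_per_month * BLOCK_REWARD
print(f"~${zec_per_month * ZEC_PRICE:,.0f} per month")  # ~$6,048
```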

At current prices, that equals about $6,200 a month, or $75,000 a year in net profits.

Here are just a few real-life examples of the names used by these programs and the locations where they are installed on infected computers:

diskmngr.exe
mssys.exe
C:\system\taskmngr.exe
system.exe
nsdiag.exe
taskmngr.exe
svchost.exe
C:\Users\[username]\AppData\Roaming\MetaData\mdls\windlw\mDir_r\rhost.exe
qzwzfx.exe
C:\Users\[username]\AppData\Local\Temp\afolder\mscor.exe
C:\Program Files\Common Files\nheqminer64.exe
C:\Windows\Logs\Logsfiles64\conhost.exe
apupd.exe

As you can see, the names of many mining programs coincide with those of legitimate applications, but the installation location is different.

For instance, the legitimate Windows Task Manager app (taskmgr.exe) should be located in the system folder C:\Windows\System32, not in C:\system.

To ensure that the mining program is launched each time the operating system starts, the necessary records are added either to Task Scheduler or to the registry auto-run keys. Here are some examples of these records:

Task Scheduler\Microsoft\Windows Defender\Mine
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run\Miner

A couple of detected websites distributing mining programs:

http://execsuccessnow[.]com/wp-includes/m/nheqminer.exe
https://a.pomf[.]cat/qzwzfx.exe

Additional DLLs are required for the mining program to work.

These DLLs, shown below, are installed along with the mining program:

cpu_tromp_AVX.dll
cpu_tromp_SSE2.dll
cudart64_80.dll
cuda_tromp.dll
logsetuplib.dll
msvcp120.dll
msvcr120.dll

So, what are the threats facing a user who is unaware that their computer is being used for cryptocurrency mining? Firstly, these operations are power hungry: the computer uses up a lot more electricity, which, in some countries, could leave the user with a hefty electricity bill.

Secondly, a mining program typically devours up to 90% of the system’s RAM, which dramatically slows down both the operating system and the other applications running on the computer. Not exactly what you want from your computer.

To prevent the installation of mining programs, Kaspersky Lab users should check their security products and make sure detection of unwanted software is enabled. All other users are encouraged, at the very least, to check their folders and registry keys for suspicious files and records.
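For readers following that advice, here is a minimal sketch (Python 3 on Windows) that lists the auto-run entries mentioned above; deciding which entries are suspicious is left to the reader:

```python
# List the values under the classic auto-run registry keys, where a
# mining program may register itself to start with the system.
import winreg

RUN_KEYS = [
    (winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_CURRENT_USER, r"SOFTWARE\Microsoft\Windows\CurrentVersion\Run"),
]

for hive, path in RUN_KEYS:
    with winreg.OpenKey(hive, path) as key:
        i = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, i)
            except OSError:  # no more values under this key
                break
            print(f"{path}\\{name} -> {value}")
            i += 1
```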
Third-party addition not the time-saver the boss thinks it is

Nearly all (97 per cent) Java applications contain at least one component with a known vulnerability, according to a new study by app security firm Veracode. Veracode reports year-over-year improvements in the code organisations write – a positive finding somewhat undone by the increasing proliferation of risk from open source and third-party component use.

A single popular component with a critical vulnerability spread to more than 80,000 other software components, which were in turn used in the development of potentially millions of software programs. “The prevalent use of open source components in software development is creating unmanaged, systemic risks across companies and industries,” said Brian Fitzgerald, CMO at Veracode.
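To make that risk concrete, here is a sketch of how a developer might check a single component against a public vulnerability database. It uses the OSV.dev query API, which postdates this report, and the package coordinates are purely illustrative:

```python
# Query the OSV vulnerability database for one Maven component.
# The package coordinates and version here are illustrative only.
import json
import urllib.request

query = {
    "package": {"ecosystem": "Maven",
                "name": "org.apache.commons:commons-collections4"},
    "version": "4.0",
}
req = urllib.request.Request(
    "https://api.osv.dev/v1/query",
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    vulns = json.load(resp).get("vulns", [])

for v in vulns:
    print(v["id"], "-", v.get("summary", "no summary"))
```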

The Veracode sitrep also highlights the progress and remaining challenges in software development more generally. Three in five (60 per cent) applications fail security policies upon first scan, it says. Best practices in secure software development are emerging, but they’re still not pervasive enough to make a difference across the software development market as a whole. One positive improvement came from the practice, among more forward-thinking organisations, of giving developers more power to improve security.

For example, developers who used sandbox technology to scan apps prior to assurance testing doubled their fix rates. Training developers can make an even bigger difference.

Best practices like remediation coaching and eLearning can improve vulnerability fix rates by much more, with a sixfold increase in fix rate performance recorded in some cases, according to Veracode. DevOps practices are taking hold among industry leaders who have established mature application security programmes.
Some applications are being scanned multiple times per day.

The average number of security tests per app is seven, with some apps being scanned 600-700 times. Building security into DevOps processes (DevSecOps) can yield great results for organisations, reducing risk without slowing down software development, Veracode argues.

Despite improvements in some quarters, web applications remain fragile: more than half of the web applications tested using Veracode’s tools were affected by misconfigured secure communications or other shortcomings in their security defences.

Veracode’s seventh State of Software Security Report (download here, registration required) covers metrics drawn from code-level analysis of billions of lines of code across 300,000 assessments, performed with Veracode’s code audit tools over the last 18 months. ®
Machine learning has long permeated all areas of human activity.
It plays a key role in the recognition of speech, gestures, handwriting and images; without machine learning it would be difficult to imagine modern medicine, banking, bioinformatics or any type of quality control.

Even the weather forecast cannot be made without machines capable of learning and generalization.

I would like to warn about, or dispel, some of the misconceptions associated with the use of ML in the field of cybersecurity. For some reason, discussion of artificial intelligence in cybersecurity has become all the rage of late.
If you haven’t been following this theme over the longer term, you may well think it’s something new. A bit of background: one of the first machine learning algorithms, the artificial neural network, was invented in the 1950s.
Interestingly, at that time it was thought the algorithm would quickly lead to the creation of “strong” artificial intelligence.

That is, intelligence capable of thinking, understanding itself and solving other tasks in addition to those it was programmed for.

Then there is so-called weak AI.
It can solve some creative tasks – recognize images, predict the weather, play chess, etc. Now, 60 years later, we have a much better understanding that the creation of true AI will take many more years, and that what today is referred to as artificial intelligence is in fact machine learning.

[Comic: source – http://dilbert.com/strip/2013-02-02]

When it comes to cybersecurity, machine learning is nothing new either.

Algorithms of this class were first implemented 10-12 years ago.

At that time the amount of new malware was doubling every two years, and once it became clear that simple automation for virus analysts was not enough, a qualitative leap forward was required.

That leap came in the form of processing the files in a virus collection, which made it possible to search for files similar to the one being examined.

The final verdict about whether a file was malicious was issued by a human, but this function was almost immediately transferred to the robot. In other words, there’s nothing new about machine learning in cybersecurity.

It’s true that for some spheres where machine learning is used, there are ready-made algorithms.

These spheres include facial and emotion recognition, or distinguishing cats from dogs.
In these and many other cases, someone has done a lot of thinking, identified the necessary features, selected an appropriate mathematical tool, set aside the necessary computing resources, and then made all their findings publicly available. Now, every schoolkid can use these algorithms.

[Figure: machine learning determines the quality of cookies by the number of chocolate chips and the radius of the cookie. Source: http://simplyinsight.co/2016/04/26/an-introduction-to-machine-learning-theory-and-its-applications-a-visual-tutorial-with-examples/]

This creates the false impression that ready-made algorithms already exist for malware detection too.
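To show just how accessible these off-the-shelf tools are, here is a toy sketch in the spirit of the cookie illustration above, using scikit-learn; every data point is made up:

```python
# Toy "cookie quality" classifier: two features per cookie
# (number of chocolate chips, radius in cm). All data is invented.
from sklearn.tree import DecisionTreeClassifier

X = [[12, 4.8], [10, 5.1], [9, 4.5],  # good cookies
     [2, 3.0], [1, 4.0], [3, 2.8]]    # bad cookies
y = ["good", "good", "good", "bad", "bad", "bad"]

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[8, 4.6], [2, 3.5]]))  # -> ['good' 'bad']
```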

For malware detection, however, that is not the case. We at Kaspersky Lab have spent more than 10 years developing and patenting a number of such technologies.

And we continue to carry out research and come up with new ideas, because… well, that’s where the next myth comes in.

There is a conceptual difference between malware detection and facial recognition.

Faces will remain faces – nothing is going to change in that respect.
In the majority of spheres where machine learning is used, the objective does not change over time, while in the case of malware things change constantly and rapidly.

That’s because cybercriminals are highly motivated people (money, espionage, terrorism…).

Their intelligence is not artificial; they actively fight back, intentionally modifying malicious programs to evade the trained model. That’s why the model has to be constantly retaught, sometimes even retrained from scratch. Obviously, with rapidly mutating malware, a security solution based on a model but without an antivirus database is worthless.

Cybercriminals can think creatively when necessary.

One might imagine simply letting a model learn on the client side. Let’s say it processes the client’s files: most of them will be clean, but some will be malicious.

The latter are mutating, of course, but the model will learn. However, it doesn’t work like that, because the number of malware samples passing through the computer of an average client is far smaller than the number collected by an antivirus lab’s systems.

And because there will be no samples for learning, there will be no generalization.

Factor in the “creativity” of the virus writers (see the previous myth) and detection will fail: the model will start recognizing malware as clean files and will “learn the wrong things.”

Why use multi-level protection based on different technologies? Why not put all your eggs in one basket if that basket is so smart and advanced? Surely one algorithm is enough to solve everything, right?

The thing is, most malware belongs to families consisting of a large number of modifications of one malicious program.

For example, Trojan-Ransom.Win32.Shade is a family of 30,000 cryptors.

A model can be taught with a large number of samples, and it will gain the ability to detect future threats (within certain limits, see Myth №3).
In these circumstances machine learning works well. However, it’s often the case that a family consists of just a few samples, or even one. Perhaps the author didn’t want to go into battle with security software after his “brainchild” was immediately detected due to its behavior.
Instead, he decided to attack those who had no security software installed or those who had no behavioral detection (i.e., those who had put all their eggs in one basket). These sorts of “mini-families” cannot be used to teach a model – generalization (the essence of machine learning) is impossible with just one or two examples.
In these circumstances, it is much more effective to detect a threat using time-tested methods based on hashes, masks and so on; a minimal sketch of the hash-based approach follows.
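Here is that sketch; the “known bad” digest below is a placeholder, not a real malware hash:

```python
# Classic hash-based detection: compute a file's SHA-256 digest and
# look it up in a set of known-bad digests. One sample is enough to
# build such a signature -- no generalization required.
import hashlib
from pathlib import Path

KNOWN_BAD = {"0" * 64}  # placeholder digest, not a real malware hash

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_malware(path: Path) -> bool:
    return sha256_of(path) in KNOWN_BAD

print(is_known_malware(Path(__file__)))  # False for this script itself
```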

Another example is targeted attacks. The authors behind these attacks have no intention of producing more and more new samples.

They create one sample for one victim, and you can be sure this sample will not be detected by a protection solution (unless it is a solution specifically designed for this purpose, for example, the Kaspersky Anti-Targeted Attack Platform). Once again, hash-based detection is more effective.

Conclusion: different tools need to be used in different situations. Multi-level protection is more effective than a single level – more effective tools should not be neglected just because they are “out of fashion.”

And one last thing.

This is more of a warning than a myth. Researchers are currently paying more attention to the mistakes complex models make: in some cases, the decisions they take cannot be explained from the point of view of human logic. Machine learning can be trusted.

But critical systems (the autopilot in planes and automobiles, medicine, control services, etc.) usually have very strict quality standards and rely on the formal verification of software; with machine learning, we delegate part of the thought process and responsibility to the machine.

That’s why quality control of a model needs to be carried out by highly qualified experts.
Honeypots provide the best way I know of to detect attackers or unauthorized snoopers inside or outside your organization. For decades I've wondered why honeypots weren't taking off, but they finally seem to be reaching critical mass.
I help a growing number of companies implement their first serious honeypots -- and the number of vendors offering honeypot products, such as Canary or KFSensor, continues to grow. If you're considering a honeypot deployment, here are 10 decisions you'll have to make.

1. What's the intent?

Honeypots are typically used for two primary reasons: early warning or forensic analysis.
I'm a huge proponent of early-warning honeypots, where you set up one or more fake systems that would immediately indicate maliciousness if even slightly probed. Early-warning honeypots are great at catching hackers and malware that other systems have missed. Why? Because the honeypot systems are fake -- and any single connection attempt or probe (after filtering out the normal broadcasts and other legitimate traffic) means malicious action is afoot.

The other major reason companies deploy honeypots is to help analyze malware (especially zero days) or to help determine the intent of hackers.

In general, early-warning honeypots are much easier to set up and maintain than forensic analysis honeypots. With an early-warning honeypot, the mere connection attempt gives you the information you need, and you can follow the probe back to its origination to begin your next defense. Forensic analysis honeypots, which can capture and isolate the malware or hacker tools, are merely the beginning of a very comprehensive analysis chain.
I tell my customers to plan on allocating several days to several weeks for each analysis performed using a honeypot.

2. What to honeypot?

What your honeypots mimic is usually driven by what you think can best detect hackers earliest or best protect your "crown jewel" assets. Most honeypots mimic application servers, database servers, web servers, and credential databases such as domain controllers. You can deploy one honeypot that mimics every advertised port and service in your environment, or deploy several, each dedicated to mimicking a particular server type.
Sometimes honeypots are used to mimic network devices, such as Cisco routers, wireless hubs, or security equipment. Whatever you think hackers or malware are most likely to attack is what your honeypots should emulate.

3. What interaction level?

Honeypots are classified as low, medium, or high interaction. Low-interaction honeypots only emulate listening UDP or TCP ports at their most basic level, which a port scanner might detect.

But they don't allow full connections or logons. Low-interaction honeypots are great for providing early warnings of malicious behavior; a minimal sketch of one appears below. Medium-interaction honeypots offer a little more emulation, usually allowing a connection or logon attempt to appear successful.

They may even contain basic file structures and content that could be used to fool an attacker. High-interaction honeypots usually offer complete or nearly complete copies of the servers they emulate.

They're useful for forensic analysis because they often trick the hackers and malware into revealing more of their tricks.
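Here is the promised low-interaction sketch: a bare TCP listener that treats every connection as a suspected probe. The port is illustrative, and a real deployment would filter known-legitimate traffic and raise an alert rather than just print:

```python
# A minimal low-interaction "honeypot": listen on an otherwise unused
# TCP port and log every connection attempt. Port and output are
# illustrative; real deployments filter legitimate traffic and alert.
import datetime
import socket

PORT = 2222  # an unused port an attacker might probe (illustrative)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", PORT))
srv.listen(5)
print(f"honeypot listening on port {PORT}")

while True:
    conn, (addr, sport) = srv.accept()
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    # Any connection to a fake service is suspicious by definition.
    print(f"{stamp} probe from {addr}:{sport}")
    conn.close()
```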
4. Where should you place the honeypot?

In my opinion, most honeypots should be placed near the assets they are attempting to mimic. If you have a SQL server honeypot, place it in the same datacenter or IP address space where your real SQL servers live.
Some honeypot enthusiasts like to place their honeypots in the DMZ, so they can receive an early warning if hackers or malware get loose in that security domain.
If you have a global company, place your honeypots around the world.
I even have customers who place honeypots mimicking the laptops of the CEO or other C-level executives to detect whether a hacker is trying to compromise those systems.

5. A real system or emulation software?

Most honeypots I deploy are fully running systems containing real operating systems -- usually old computers ready for retirement. Real systems are great for honeypots because attackers can't easily tell they're honeypots. I also install a lot of honeypot emulation software; my longtime favorite is KFSensor.

The good ones, like KFSensor, are almost "next, next, next" installs, and they often have built-in signature detection and monitoring.
If you want low-risk, quick installs and lots of features, honeypot emulation software can't be beat.

6. Open source or commercial?

There are dozens of honeypot software programs, but very few of them are supported or actively updated a year after their release.

This is true for both commercial and open source software.
If you find a honeypot product that's updated for longer than a year or so, you've found a gem. Commercial products, whether new or old, are usually easier to install and use. Open source products, like Honeyd (one of the most popular programs), are usually much harder to install but often far more configurable. Honeyd, for example, can emulate nearly 100 different operating systems and devices, down to the subversion level (Windows XP SP1 versus SP2 and so on), and it can be integrated with hundreds of other open source programs to add features.

7. Which honeypot product?

As you can tell, I'm partial to commercial products for their feature sets, ease of use, and support.
In particular, I'm a fan of KFSensor. If you choose an open source product, Honeyd is great, but possibly overly complex for the first-time honeypot user.
Several honeypot-related websites, such as Honeypots.net, aggregate hundreds of honeypot articles and link to honeypot software sites.

8. Who should administer the honeypot?

Honeypots are not set-it-and-forget-it solutions -- quite the opposite. You need at least one person (if not more) to take ownership of the honeypot.

That person must plan, install, configure, update, and monitor the honeypot.
If you don't appoint at least one honeypot administrator, it will become neglected, useless, and at worst a jumping-off spot for hackers.

9. How will you refresh the data?

If you deploy a high-interaction honeypot, it will need data and content to make it look real.

A one-time copy of data from somewhere else isn't enough; you need to keep the content fresh. Decide how often to update it and by what method. One of my favorite methods is to use a freely available copy program or copy commands to replicate nonprivate data from another server of a similar type -- and initiate the copy every day using a scheduled task or cron job. A sketch of such a job appears below.
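This is a minimal sketch of a daily refresh, assuming you mirror nonprivate files from a similar production server; both paths are illustrative:

```python
# Refresh a high-interaction honeypot's content by mirroring nonprivate
# data from a similar production server. Paths are illustrative;
# schedule this script daily with cron or Task Scheduler.
import shutil
from pathlib import Path

SRC = Path("/mnt/fileserver/public_docs")  # nonprivate source data
DST = Path("/srv/honeypot/share/docs")     # the honeypot's visible share

if DST.exists():
    shutil.rmtree(DST)  # drop yesterday's copy before mirroring
shutil.copytree(SRC, DST)
print(f"refreshed {sum(1 for _ in DST.rglob('*'))} items")
```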
Sometimes I'll rename the data during the copy so that it appears more top secret than it really is.

10. Which monitoring and alerting tools should you use?

A honeypot isn't of any value unless you enable monitoring for malicious activity -- and set up alerts when threat events occur.

Generally, you'll want to use whatever methods and tools your organization routinely uses for this.

But be warned: Deciding what to monitor and alert on is often the most time-consuming part of any honeypot planning cycle.
I talk a lot about the security problems and weaknesses of the internet, as well as the devices connected to it.
It’s all true, and we badly need improvements. Yet the irony is that security in our online world is actually better than in our physical world. Think of how many people are scammed by someone phoning to say their computer is infected and needs repair.

As InfoWorld’s Fahmida Rashid recently chronicled, they typically say they’re with Microsoft or a Microsoft partner, and your computer is infected and needs fixing immediately. Unfortunately, millions of people fall for this scam and end up installing malicious software on their system.

They sometimes even pay for the privilege, compromising their credit card numbers in the process. The problem is there's no easy way in the real world to quickly and easily prove these phone solicitors are fake or legit.
In the digital world, all the major browser and email makers devote a significant share of their code to detecting pretenders. My browser's URL bar turns green in approval when I visit a legitimate website protected by an Extended Validation digital certificate.

That means I can trust it. There’s nothing like that in the physical world.
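Here is a sketch of that digital-world verification: fetching a site's certificate and printing who it was issued to and by. Detecting EV status specifically means matching issuer policy OIDs, which this sketch doesn't attempt, and the target host is just an example:

```python
# Fetch and display a site's TLS certificate details. The default
# context verifies the certificate chain and hostname for us.
import socket
import ssl

HOST = "www.microsoft.com"  # illustrative target

ctx = ssl.create_default_context()
with ctx.wrap_socket(socket.create_connection((HOST, 443)),
                     server_hostname=HOST) as tls:
    cert = tls.getpeercert()

print("subject:", dict(x[0] for x in cert["subject"]))
print("issuer: ", dict(x[0] for x in cert["issuer"]))
print("expires:", cert["notAfter"])
```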
In the case of the fake Microsoft repair company, the best I can hope for is to independently call the right Microsoft phone number and ask for verification.

But unless you know the phone number (800-426-9400) or the Microsoft website, or you enter the right words in an internet search engine, it’s going to take time and possibly a bunch of calls to get an answer. That’s not Microsoft’s fault.
It’s a huge, global company with tons of locations and products.
It has blogged about Microsoft phone scams dozens of times over the years, and it does advertise the right numbers and places to call for such inquiries. However, not everyone has heard of the scams or knows where to go when they have a question, so it takes effort.

Contrast that with looking at a green URL bar in one second. A few times I’ve been called, out of the blue, by a company I’m already affiliated with offers I'd normally be interested in -- say, faster internet for less per month.
It sounds great, and the company is ready to sign me up, but then asks for my “account password.” I ask the representative to tell me the account password on file, and I’ll verify it, but he or she says it doesn’t work that way.

Thus, I hang up.
If I try to call back on the general, advertised phone number to get the same deal, it either takes me an hour or I can't find that call center at all. My bank recently did the same.
It was proactively calling to report that my debit card had been compromised. My bank had never called me before. How would I know that this complete stranger on the phone is who they say they are?

Brian Krebs recently related a story in which digital scammers claiming to be from Google called someone who used a two-factor-enabled Gmail account and asked the user to tell them the code sent to the victim's phone (via SMS) to verify the account. Luckily, the victim was suspicious and brought in her security-minded dad, and they didn't give up the code. But it got me thinking.
In this particular instance, two-factor digital authentication was the strongest part of the authentication chain.

The phone call was the weak link and not easily verifiable. National Institute of Standards and Technology (NIST) now advises that SMS-sent two-factor authentications aren’t to be trusted, or at least not as trusted as we once thought them to be.

But to be honest, most of the problems with two-factor authentication using SMS verification apply to the phone, not the computer. We need a system that allows phone calls to be quickly and accurately verified.
I want EV certificates for the physical world! I want multiple defensive software programs that investigate my incoming calls and alert me if something seems risky.

Today most of those calls come in over cellphones.
I have to think a centralized phone number repository and a local phone app could solve much of the problem. Heck, we’d easily be able to kill unsolicited junk calls at the same time. The online world is nowhere near perfectly secure.

But I’m quickly starting to realize that, though insecure, the digital world is often in better shape than the physical world. How about that irony?
There's an old security mantra that says "always change the defaults!" Although this seems like a good general rule, in fact it's true only for certain kinds of settings.

Changing the defaults in other cases will just end up biting you in the end with ...