VMworld Europe 2016 was an opportunity for these experts to meet and share best practices.
Shavlik and AppSense used it to collect data from these frontline experts, highlighting patch management and security concerns in corporations.

Shavlik and AppSense infographic, key figures:
- 80% of IT professionals have implemented a patch policy to enhance their organisation’s security.
- 77% said that Microsoft OS represents the biggest challenge in terms of patching operating systems, and 59% indicated that Oracle is the most challenging third-party application.
- 55% of IT professionals believe that the visibility they have into their company’s IT security posture is insufficient.
- 55% of the companies surveyed give employees administrator rights, substantially increasing security risk.
- Patch management takes more than 8 hours per month for two-thirds of the companies.

In total, 178 professionals responded to the survey.
For 76.5% of them, the Microsoft OS poses the biggest patch challenges for their company.
This figure is down from last year’s 86%, so Microsoft seems to be improving. Linux (19.1%) and Mac (4.2%) are also mentioned, but to a lesser extent, which can be explained, at least in part, by the smaller number of devices running these operating systems and the fewer patches released for them each month.

Patching the OS is only a small part of the equation when it comes to an effective patch management strategy. When asked about the challenges of patching applications, Java was cited as the most difficult application to update by 59% of respondents, followed by Adobe Reader/Flash Player (38%), Google Chrome (21%), Firefox (18%) and Apple iTunes (10%).

79.7% of IT managers have implemented a policy to manage patching, which is good news. However, while 37.2% report spending fewer than 8 hours a month on patching, 29.6% spend more than 16 hours a month, and 14% spend more than 48! This amounts to a day and a half on average for most organisations, which is far from efficient.

Finally, 54.7% of companies grant full administrative rights to their employees, making their systems more vulnerable to malware.
This approach increases risk in the event of a malware attack, since there is no way to limit the damage by restricting user rights to infected devices. Andy Baldin, VP EMEA Shavlik, comments: “The results of this survey show that the need to establish a patch management policy is recognised by an increasing number of IT departments.
Despite this, many companies spend too much time on patch management issues, and manage the rights of their employees in a way that unknowingly promotes risk.
This confirms the importance of our work in supporting companies in managing their patches, enabling them to reduce costs, save time and minimise risks to the security of their IT assets.”

Baldin emphasises the importance of making it easier for companies to secure and manage their patching: “The results of our study show that 7% of respondents have no IT security systems in place or do not know if there is one, 3% have only a backup system, 13% have just antivirus, 7% only a firewall and 10% an antivirus coupled with a firewall.
This means, 40% of respondents could easily improve their endpoint security.
To help organisations, Shavlik publishes a monthly report each Patch Tuesday: we watch and provide our analysis of the latest patches, to help companies prioritise the allocation of their IT security resources.”

About Shavlik
Shavlik is a recognised leader in patch management, and a pioneer in agentless patching technology, virtual machine (VM) patching and third-party application patching.
Shavlik solutions include Shavlik Protect, Shavlik Patch for Microsoft System Center and Shavlik Empower.
Shavlik's combination of on-premises and cloud-based solutions enables organizations of all sizes to begin improving organizational security in as little as 30 minutes.
For in-depth Patch Tuesday analysis, see: http://www.shavlik.com/patch-tuesday and http://www.shavlik.com/

About AppSense
AppSense is the leading provider of User Environment Management solutions for the secure endpoint.
The technology allows IT to secure and simplify workspace control at scale across physical, virtual and cloud-delivered desktops.
AppSense solutions have been deployed by 3,600 enterprises worldwide to nine million endpoints.
AppSense is now a part of the LANDESK family with offices around the world.
For more information, please visit http://www.appsense.com/. Copyright © 2017, Shavlik.
All rights reserved.
The Glass Room is a collection of art pieces designed to make you think about how you're selling yourself.
The National Security Agency can read your email. Verizon knows where you are at all times. Amazon is confident you're in the market for a new printer. We know these things about the weird world we live in, but few of us ever stop to think about what they really mean. Recently, a group of artists got together in New York City to change that.
Apple doesn't have a retail store on Mulberry Street in New York City, but at a glance, you might think it does. The Glass Room exhibit borrows heavily from the Apple Store design aesthetic: white walls, white ceiling, white podium, and even helpful "inGenious" staffers in matching white hoodies. But nothing is for sale in The Glass Room. It's a collection of art pieces designed to make you think about how you're selling yourself, maybe without even knowing it.
The exhibit is curated by Tactical Technology Collective along with The Mozilla Foundation, maker of the Firefox browser. The subjects addressed in The Glass Room are digital: online privacy, location tracking, psychographic profiling, the gamification of security, and so on. But the pieces themselves are grounded in the real world; you can see them, touch them, and in one case, smell them. Here are a few that stood out to me.
Forgot Your Password? (Aram Bartholl): We're so used to massive password hacks that we barely even notice them. In fact, Yahoo recently disclosed that at least 1 billion more of its users' accounts had been compromised. Back in 2012, LinkedIn held the record for the biggest password hack—a paltry 4.6 million. For this exhibit, Bartholl printed all 4.6 million of those passwords alphabetically and bound them into volumes. (I looked for mine; it wasn't in there.)
Random Darknet Shopper (!Mediengruppe Bitnik): This artist collective created an online shopping bot and gave it a budget of $100 in bitcoins. They set it loose on the dark Web to make random purchases and have them mailed to the exhibition space. No drugs or pornography arrived; just random stuff. A copy of Mastering the Art of French Cooking, a Hungarian passport photo, and—featured in the Glass Room—a pair of fake Kanye West Nike Air Yeezy 2 sneakers.
Online Shopping Center (Sam Levigne): Amazon does a great job of identifying what you want to buy and getting it to you quickly. As a logical (perhaps inevitable) next step, in 2014, the company was granted a patent for "predictive shopping." Levigne's art takes the concept even further. As a Glass Room "shopper," you strap on a brainwave monitor and allow an algorithm to determine what your brain looks like when you're shopping. When Levigne first conducted this experiment, he had his bot shop for him on Amazon and Alibaba whenever his brain was in the "shopping state." I tried it, but so far, Amazon hasn't sent me anything.
Not all the exhibits at The Glass Room are art. Some are demos of real-world products and services. The Texas Virtual BorderWatch, for example, was a real-time camera system (live from 2008 to 2012) that let volunteers monitor the United States–Mexico border from their homes and alert authorities of infractions. Another, the Silver Mother ($299), is a monitoring solution for seniors that gives medication reminders, tracks sleep, and gives front-door alerts. And then there's Churchix, a facial-recognition system that enables churches to track attendance automatically—a whole new meaning for "witnessing."
Finally, at the back of the room was a "data detox" center. For those who were moved by the exhibit and wanted to make a change in their digital lifestyle, experts at the counter explained their options. We've reviewed a lot of the tools used to manage your privacy, including Signal, Ghostery, Tor, and more.
If The Glass Room made anything clear, it is that technology is the dominant force for change in the world right now. It is affecting our jobs, our home lives, our relationships, our environment, and even our bodies. I'm a big believer in technological progress, but not all of these changes are for the better. PC Magazine is committed to getting you the tools, techniques, and information you need to thrive in this new world.
Now, if you'll excuse me, I need to go clear my browsing history.
For more, check out the January issue of the PC Magazine Digital Edition, available now via Apple iTunes.
By comparison, 12.8 per cent of computer users in Blighty had unpatched non-Microsoft programs in Q3 2016, up from 12.6 per cent in Q2 of 2016 and 11.3 per cent in Q3 of 2015. The top three poorly-patched programs in Q3 2016 were Oracle Java JRE (45 per cent of installations unpatched, 57 vulnerabilities), Apple iTunes (44 per cent unpatched, 50 vulnerabilities), and VLC Media Player (45 per cent unpatched, 7 vulnerabilities). The figures further underline a longer-term trend of flaws in non-Microsoft software, such as Java and (of course) Adobe Flash, becoming a greater vulnerability risk than flaws in Microsoft Windows itself. Redmond's decision to move towards offering cumulative roll-up updates for Windows 7 SP1, Windows 8.1, Windows Server 2008 R2, Windows Server 2012 and Windows Server 2012 R2 is likely to further simplify patching, thereby encouraging more users to run more secure systems. “We will be tracking this closely to determine whether the recent declines in unpatched Windows operating systems are a blip or indicative of a long term trend,” said Kasper Lindgaard, director of Secunia Research at Flexera Software. “If it is a trend, the consumer will ultimately benefit by the reduced attack surface that hackers can exploit within the Windows OS.” A separate study from authentication firm Duo Security, also published on Tuesday, claims that 65 per cent of all Windows devices are running Windows 7, which was released in 2009.
Approximately 600 security vulnerabilities have hit Windows 7 over its lifetime.
Tens of thousands of devices are still running Windows XP 15 years after its release, and long after Redmond consigned it to the trashcan. Twenty per cent of devices running Internet Explorer (IE) are running unsupported versions 8, 9 and 10, we're told.
“In contrast, each non-Microsoft vendor may have its own patch process – requiring the user to be much more knowledgeable and diligent,” Secunia noted, adding that non-Microsoft programs represent 60 per cent of the code on a computer. “Most users do not devote the time and attention necessary to keep up-to-date with the latest security patches across all the applications on their PCs.
And for non-Windows applications, it takes more effort,” Lindgaard concluded. Secunia’s stats are based on data from scans by consumers using its Personal Software Inspector patching inspection tool between the start of July and the end of September. ®
But the level of unpatched non-Windows applications on private PCs continues to rise. These conclusions can be drawn from just-released Country Reports covering Q3 2016 for 12 countries, published by Secunia Research at Flexera Software, the leading provider of Software Vulnerability Management Solutions.
The reports provide status on vulnerable software products on private PCs in 12 countries, listing the vulnerable applications and ranking them by the extent to which they expose those PCs to hackers.

Key findings in the U.K. Country Report include:
- 6.4 percent of users had unpatched Windows operating systems in Q3 2016, up from 5.4 percent in Q2 2016 and down from 7.9 percent in Q3 2015.
- 12.8 percent of users had unpatched non-Microsoft programmes in Q3 2016, up from 12.6 percent in Q2 2016 and 11.3 percent in Q3 2015.
- The top three most exposed programmes for Q3 2016 were Oracle Java JRE 1.8.x/8.x (45 percent unpatched, 41 percent market share, 57 vulnerabilities), Apple iTunes 12.x (44 percent unpatched, 39 percent market share, 50 vulnerabilities), and VLC Media Player 2.x (45 percent unpatched, 36 percent market share, 7 vulnerabilities).

Level of Unpatched Windows Operating Systems Stabilising
Though the level of unpatched private PC Windows operating systems may tick up or down from quarter to quarter, it appears to be stabilising at lower levels compared to this time last year.
Time will tell whether this trend continues, but Microsoft’s recent announcement moving to a roll-up model for Windows 7 SP1, Windows 8.1, Windows Server 2008 R2, Windows Server 2012 and Windows Server 2012 R2 updates may help. Microsoft says all supported versions of Windows will now follow a similar update servicing model, bringing a more consistent and simplified servicing experience. “We will be tracking this closely to determine whether the recent declines in unpatched Windows operating systems are a blip or indicative of a long term trend,” said Kasper Lindgaard, Director of Secunia Research at Flexera Software. “If it is a trend, the consumer will ultimately benefit by the reduced attack surface that hackers can exploit within the Windows OS.”

The Attack Surface for Non-Microsoft Applications Continues to Grow
The security news was not all rosy for private PC users.
The level of unpatched non-Microsoft programmes continues its upward trend.
The likely cause is the process consumers must follow to implement security patches. Microsoft is standardising its patch process and automation across its entire application portfolio.
In contrast, each non-Microsoft vendor may have its own patch process – requiring the user to be much more knowledgeable and diligent.
And according to the 2016 Vulnerability Review, non-Microsoft programs represent 60 percent of the applications on a computer. “Most users do not devote the time and attention necessary to keep up-to-date with the latest security patches across all the applications on their PCs.
And for non-Windows applications, it takes more effort,” added Lindgaard. “This is why automated patch management systems like Corporate Software Inspector for enterprises, and Personal Software Inspector for consumers, are so important.” The 12 Country Reports are based on data from scans by Personal Software Inspector between July 1, 2016 and September 30, 2016.

About Flexera Software
Flexera Software helps application producers and enterprises increase application usage and security, enhancing the value they derive from their software. Our software licensing, compliance, cybersecurity and installation solutions are essential to ensure continuous licensing compliance, optimised software investments, and to future-proof businesses against the risks and costs of constantly changing technology.
A marketplace leader for more than 25 years, Flexera Software is the trusted and neutral source of knowledge and expertise that more than 80,000 customers turn to for the automation and intelligence designed into its products.
For more information, please go to: www.flexerasoftware.com.

For more information, contact:
Vidushi Patel / Nicola Males
Vanilla PR
prflexera@vanillapr.co.uk
+44 7958474632 / +44 7976652491

*All third-party trademarks are the property of their respective owners.
There may well be approaches that don't require Apple to build a custom firmware to defeat some of the iPhone's security measures. The iPhone 5c used by the San Bernardino killers encrypts its data using a key derived from a combination of an ID embedded in the iPhone's processor and the user's PIN.
Assuming that a 4-digit PIN is being used, that's a mere 10,000 different combinations to try out. However, the iPhone has two protections against attempts to try every PIN in turn.
First, it inserts delays to force you to wait ever longer between PIN attempts (up to one hour at its longest).
Second, it has an optional capability to delete its encryption keys after 10 bad PINs, permanently denying access to any encrypted data. The FBI would like to use a custom firmware that allows attempting multiple PINs without either of these features.
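To get a sense of why those escalating delays matter on their own, the lockout schedule can be modelled in a few lines. The exact schedule below is an assumption based on publicly reported iOS behaviour (no delay for the first four attempts, then 1, 5, 15 and 15 minutes, and one hour from the ninth attempt onward), not an Apple specification:

```python
# Rough worst-case estimate for exhausting all 10,000 four-digit PINs
# under an escalating lockout schedule. The schedule is an assumption
# modelled on publicly reported iOS behaviour, not an Apple spec.

def delay_before_attempt(n: int) -> int:
    """Lockout delay, in minutes, imposed before attempt number n."""
    if n <= 4:
        return 0          # first four attempts: no delay
    if n == 5:
        return 1
    if n == 6:
        return 5
    if n in (7, 8):
        return 15
    return 60             # ninth attempt onward: one hour each

def worst_case_minutes(pin_space: int = 10_000) -> int:
    return sum(delay_before_attempt(n) for n in range(1, pin_space + 1))

minutes = worst_case_minutes()
print(f"{minutes} minutes, about {minutes / 1440:.0f} days")  # about 416 days
```

Even before any wipe is triggered, the one-hour ceiling alone pushes a naive exhaustive search out to more than a year, which is why the FBI wants firmware with the delays removed.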
This custom firmware would most likely be run using the iPhone's DFU mode.
Device Firmware Update (DFU) mode is a low-level last resort mode that can be used to recover iPhones that are unable to boot.
To use DFU mode, an iPhone must be connected via USB to a computer running iTunes. iTunes will send a firmware image to the iPhone, and the iPhone will run that image from a RAM disk.
For the FBI's purposes, this image would include the PIN-attack routines to brute-force the lock on the device. Developing this firmware should not be particularly difficult—jailbreakers have developed all manner of utilities to build custom RAM disks to run from DFU mode, so running custom code from this environment is already somewhat understood—but there is a problem.
The iPhone will not run any old RAM disk that you copy to it.
It first verifies the digital signature of the system image that is transferred. Only if the image has been properly signed by Apple will the phone run it. The FBI cannot create that signature itself. Only Apple can do so.
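The verify-before-boot gate is simple to sketch. Real SecureROM checks an RSA signature on the image against Apple's embedded public key; the stand-in below uses an HMAC so the sketch stays within Python's standard library, and every name in it is hypothetical. The point is only the control flow: an image runs if and only if the check passes.

```python
# Toy model of the verify-before-boot gate. Real SecureROM verifies an
# RSA signature against Apple's embedded public key; an HMAC stands in
# here so the sketch stays stdlib-only. All names are hypothetical.
import hashlib
import hmac

APPLE_SIGNING_KEY = b"stand-in for Apple's signing secret"

def sign_image(image: bytes, key: bytes) -> bytes:
    """Produce the 'signature' attached to a firmware image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def boot_ramdisk(image: bytes, signature: bytes) -> str:
    """Run the image only if its signature checks out."""
    expected = sign_image(image, APPLE_SIGNING_KEY)
    if not hmac.compare_digest(signature, expected):
        return "refused: bad signature"
    return "booted"

fw = b"custom PIN-bruteforce ramdisk"
print(boot_ramdisk(fw, sign_image(fw, APPLE_SIGNING_KEY)))  # booted
print(boot_ramdisk(fw, b"\x00" * 32))                       # refused: bad signature
```

Because only the holder of the signing key can produce a signature that passes the check, anyone else is locked out of this path no matter how good their firmware is.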
This also means that the FBI cannot even develop the code itself.
To test and debug the code, it must be possible to run the code, and that requires a signature.
This is why it is asking for Apple's involvement: only Apple is in a position to do this development.

Do nothing at all
The first possibility is that there's simply nothing to do.
Erasing after 10 bad PINs is optional, and it's off by default.
If the erase option isn't enabled, the FBI can simply brute force the PIN the old-fashioned way: by typing in new PINs one at a time.
It would want to reboot the phone from time to time to reset the 1 hour delay, but as tedious as the job would be, it's certainly not impossible. It would be a great deal slower on an iPhone 6 or 6s.
In those models, the running count of failed PIN attempts is preserved across reboots, so resetting the phone doesn't reset the delay period.
But on the 5c, there's no persistent record of bad PIN trials, so restarting the phone allows an attacker to short-circuit the delay.

Why it might not work
Obviously, if the phone is set to wipe itself, this technique wouldn't work, and the FBI would want to know one way or the other before starting. It ought to be a relatively straightforward matter for Apple to tell, as the phone does have the information stored in some accessible way so that it knows what to do when a bad PIN is entered. But given the company's reluctance to assist so far, getting it to help here may be impossible.

Update: It turns out that this bug was fixed in iOS 8.1, so it probably wouldn't work after all.

Acid and laserbeams
One risky solution that has been discussed extensively already is to use lasers and acid to remove the outer layers of the iPhone's processor and read the embedded ID. Once this embedded ID is known, it's no longer necessary to try to enter the PIN directly on the phone itself.
Instead, it would be possible to simply copy the encrypted storage onto another computer and attempt all the PINs on that other computer.
The iPhone's lock-outs and wiping would be irrelevant in this scenario.

Why it might not work
The risk of this approach is not so much that it won't work, but that if even a tiny mistake is made, the hardware ID could be irreparably damaged, rendering the stored data permanently inaccessible.

Jailbreak the thing
The iPhone's built-in lockouts and wipes are unavoidable when running the iPhone's operating system... assuming that the iPhone works as it is supposed to.
It might not.
The code that the iPhone runs to enter DFU mode, load a RAM image, verify its signature, and then boot the image is small, and it should be simple and quite bullet-proof. However, it's not impossible that this code, which Apple calls SecureROM, contains bugs.
Sometimes these bugs can enable DFU mode (or the closely related recovery mode) to run an image without verifying its signature first. There are perhaps six known historic flaws in SecureROM that have enabled jailbreakers to bypass the signature check in one way or another.
These bugs are particularly appealing to jailbreakers, because SecureROM is baked into hardware, and so the bugs cannot be fixed once they are in the wild: Apple has to update the hardware to address them.
Exploitable bugs have been found in the way SecureROM loads the image, verifies the signature, and communicates over USB, and in all cases they have enabled devices to boot unsigned firmware. If a seventh exploitable SecureROM flaw could be found, this would enable jailbreakers to run their own custom firmwares on iPhones.
That would give the FBI the power to do what it needs to do: it could build the custom firmware it needs and use it to brute force attack the PIN.
Some critics of the government's demand have suggested that a government agency—probably the NSA—might already know of such a flaw, arguing that the case against Apple is not because of a genuine need to have Apple sign a custom firmware but merely to give cover for their own jailbreak.

Why it might not work
Of course, the difficulty with this approach is that it's also possible that no such flaw exists, or that even if it does exist, nobody knows what it is.
Given the desirability of this kind of flaw—it can't be fixed through any operating system update—jailbreakers have certainly looked, but thus far they've turned up empty-handed.
As such, this may all be hypothetical.

Ask Apple to sign an FBI-developed firmware
Apple doesn't want to develop a firmware to circumvent its own security measures, saying that this level of assistance goes far beyond what is required by law.
The FBI, however, can't develop its own firmware because of the digital signature requirements. But perhaps there is a middle ground.
Apple, when developing its own firmwares, does not require each test firmware to be signed.
Instead, the company has development handsets that have the signature restriction removed from SecureROM and hence can run any firmware.
These are in many ways equivalent to the development units that game console manufacturers sell to game developers; they allow the developers to load their games to test and debug them without requiring those games to be signed and validated by the console manufacturer each time. Unlike the consoles, Apple doesn't distribute these development phones.
It might not even be able to, as it may not have the necessary FCC certification.
But they nonetheless exist.
In principle, Apple could lend one of these devices to the FBI so that the FBI would then be responsible for developing the firmware.
This might require the FBI to do the work on-site at Cupertino or within a Faraday cage to avoid FCC compliance concerns, but one way or another this should be possible. Once it had a finished product, Apple could sign it.
If the company was truly concerned with how the signed firmware might be used, it might even run the firmware itself and discard it after use. This would relieve Apple of the burden of creating the firmware, and it could be argued that it would weaken Apple's First Amendment argument against being compelled to assist. While source code is undoubtedly expressive and protected by the First Amendment, it seems harder to argue that a purely mechanical transformation such as stamping a file with a digital signature should be covered by the same protection.

Why it might not work
Apple may very well persist in saying no, and the courts may agree.

Stop the phone from wiping its encryption keys
The way the iPhone handles encryption keys is a little more complex than outlined above.
The encryption key derived from the PIN combined with the hardware ID isn't used to encrypt the entire disk directly.
If it were, changing the PIN would force the entire disk to be re-encrypted, which would be tiresome to say the least.
Instead, this derived key is used to encrypt a second key, and that key is used to encrypt the disk.
That way, changing the PIN only requires re-encryption of the second key.
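A toy model of this two-level scheme makes the PIN-change shortcut concrete. PBKDF2 stands in for the real derivation, and a simple XOR wrap stands in for the AES hardware engine the actual device uses; every name here is illustrative:

```python
# Toy model of the two-level key scheme: a key-encryption key (KEK)
# derived from the PIN and the hardware UID wraps a random disk key.
# PBKDF2 and XOR wrapping are stdlib stand-ins for the device's actual
# AES hardware; all names are illustrative.
import hashlib
import os
import secrets

HARDWARE_UID = os.urandom(32)  # stands in for the ID fused into the CPU

def derive_kek(pin: str) -> bytes:
    """Key-encryption key entangling the PIN with the hardware UID."""
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), HARDWARE_UID, 100_000)

def xor_wrap(key: bytes, kek: bytes) -> bytes:
    """Symmetric wrap/unwrap (XOR is its own inverse)."""
    return bytes(a ^ b for a, b in zip(key, kek))

disk_key = secrets.token_bytes(32)               # key the disk is encrypted with
wrapped = xor_wrap(disk_key, derive_kek("1234"))

# Changing the PIN: unwrap with the old KEK, re-wrap with the new one.
# The disk key, and therefore the disk contents, never changes.
unwrapped = xor_wrap(wrapped, derive_kek("1234"))
rewrapped = xor_wrap(unwrapped, derive_kek("9876"))

assert unwrapped == disk_key
assert xor_wrap(rewrapped, derive_kek("9876")) == disk_key
```

Note how a PIN change touches only the small wrapped key, never the bulk data, which is exactly the property the article describes.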
The second key is itself stored on the iPhone's flash storage. Normal flash storage is awkward to securely erase, due to wear leveling.
Flash supports only a limited number of write cycles, so to preserve its life, flash controllers spread the writes across all the chips. Overwriting a file on a flash drive may not actually overwrite the file but instead write the new file contents to a different location on the flash drive, potentially leaving the old file's contents unaltered. This makes it a bad place to store encryption keys that you want to be able to delete.
Apple's solution to this problem is to set aside a special area of flash that is handled specially.
This area isn't part of the normal filesystem and doesn't undergo wear leveling at all.
If it's erased, it really is erased, with no possibility of recovery.
This special section is called effaceable storage. When the iPhone wipes itself, whether due to bad PIN entry, a remote wipe request for a managed phone, or the built-in reset feature, this effaceable storage area is the one that gets obliterated. Apart from that special handling, however, the effaceable area should be readable and writeable just like regular flash memory, which means that in principle a backup can be made and safely squirreled away.
If the iPhone then overwrites it after 10 bad PIN attempts, it can be restored from this backup, and that should enable a further 10 attempts.
This process could be repeated indefinitely. This video from a Shenzhen market shows a similar process in action (we came at it via 9to5Mac after seeing a tweet in February and further discussion in March). Here, a 16GB iPhone has its flash chip desoldered and put into a flash reader.
A full image of that flash is made, including the all-important effaceable area.
In this case, the chip is then replaced with a 128GB chip, and the image restored, with all its encryption and data intact.
The process for the FBI's purposes would simply use the same chip every time. By restoring every time the encryption keys get destroyed, the FBI could—slowly—perform its brute force attack.
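The restore loop described above can be sketched as a simulation. Everything here is a toy model (the `Phone` class and its fields are invented for illustration), but it shows why re-imaging the effaceable area defeats the 10-attempt wipe:

```python
# Simulation of the image-and-restore attack: back up the effaceable
# area once, let the phone wipe it after 10 bad PINs, restore, repeat.
# The Phone class and its fields are invented for illustration.
class Phone:
    def __init__(self, pin: str):
        self._pin = pin
        self.effaceable = b"wrapped disk key"  # toy effaceable storage
        self.failed = 0

    def try_pin(self, guess: str) -> bool:
        if self.effaceable is None:
            raise RuntimeError("keys wiped; nothing left to unlock")
        if guess == self._pin:
            return True
        self.failed += 1
        if self.failed >= 10:
            self.effaceable = None  # wipe keys after 10 bad attempts
        return False

def brute_force(phone: Phone):
    backup = phone.effaceable          # image the flash chip once
    for pin in range(10_000):
        if phone.effaceable is None:   # phone wiped itself: restore image
            phone.effaceable = backup
            phone.failed = 0
        guess = f"{pin:04d}"
        if phone.try_pin(guess):
            return guess
    return None

print(brute_force(Phone("4831")))  # 4831
```

In the toy model the wipe costs nothing to undo; on real hardware each restore is a chip-level operation, which is why the article calls the process slow and tedious rather than impossible.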
It would probably want to install a socket of some kind rather than continuously soldering and desoldering the chip, but the process should be mechanical and straightforward, albeit desperately boring. A more exotic possibility would be to put some kind of intermediate controller between the iPhone and its flash chip that permitted read instructions but blocked all attempts to write or erase data. Hardware write blockers are already routinely used in other forensic applications to prevent modifications to SATA, SCSI, and USB disks that are being used as evidence, and there's no reason why such a thing could not be developed for the flash chips themselves.
This would allow the erase/restore process to be skipped, requiring the phone simply to be rebooted every few attempts.

Why it might not work
The working assumption is that the iPhone's processor has no non-volatile storage of its own, so it simply doesn't remember that it is supposed to have wiped its encryption keys and will offer another ten attempts once the effaceable storage area is restored (or that, even if it does remember, it doesn't care).
This is probably a reasonable assumption; the A6 processor used in the iPhone 5c doesn't appear to have any non-volatile storage of its own, and allowing restoration means that even a securely wiped phone can be straightforwardly restored from backup by connecting it to iTunes. For newer iPhones, that's less clear.
Apple implies that the A7 processor—the first to include the "Secure Enclave" function—does have some form of non-volatile storage of its own. On the A6 processor and below, the time delay between PIN attempts resets every time the phone is rebooted. On the A7 and above, it does not; the Secure Enclave somehow remembers that there has been some number of bad PIN attempts earlier on.
Apple also vaguely describes the Secure Enclave as having an "anti-replay counter" for data that is "saved to the file system." It's not impossible that this is also used to protect the effaceable storage in some way, allowing the phone to detect that it has been tampered with.
Full restoration is similarly still likely to be possible. There is also some risk to disassembling the phone, but if the process is reliable enough for Shenzhen markets, the FBI ought to be able to manage it reliably enough. This last technique in particular should be quite robust.
There's no doubt that Apple's assistance would help a great deal; creating a firmware to allow brute-forcing the PIN would be faster and lower risk than any method that requires disassembly.
But if the FBI is truly desperate to bypass the PIN lockout and potential disk wipe, there do appear to be options available to it that don't require Apple to develop the firmware.
Somewhere warmer, brighter, more fun. And someone else is there waiting and ready to steal your information -- and your money -- in the process. Travel scams are ripe and ripening as the days grow longer, in some high and very low tech ways. "The really staggering message that came through in 2015 was that it was the year attackers spent a lot less time and energy on really sophisticated technology intrusions and instead spent the year exploiting us," says Kevin Epstein, vice president of the Threat Operations Center at Proofpoint. Criminals don't just want to grab your information when you're planning either. Your trip itself makes you a target, too.

Pre-trip hacking
Travel is a focus of scamming for two reasons. The first is money -- lots of it. "Booking the trip turns out to be a great way to give away a lot of money," says Epstein. "You voluntarily provide lots of personal information." Not only do most sites require you to put in your credit card information to book a trip, but many also have you create a login and password to use the site. If a criminal can make you believe that you're putting that kind of information into the right place, they can take over your money and your digital life. Or, if they can send you something that looks legit, and you download what they ask, they get into your computer and everything that's stored therein.

The second reason is that travel companies have lagged behind when it comes to the security of their sites. When other online sectors strengthened their walls, scammers took the path of least resistance, which lately has been travel. Banks, says Charlie Abrahams, senior vice president at MarkMonitor, used to be the subject of such cloning, but have "taken steps to deal with it," adding that MarkMonitor has recently seen an uptick in travel companies requesting the same kinds of service it has been providing for banks.
"We deal with sites that illegally pretend to be a site for the purposes of capturing credential information," says Abrahams.
Some of these sites can be found by searching for deals, and some by clicking on emails that purport to be from a legitimate travel entity. Fraudsters are also moving into the app space with travel as a target, though attacks there aren't big -- yet.
Abrahams says that MarkMonitor has been spending more time scanning online app stores "because there are a lot of apps there that are completely fake."
Sometimes these apps glom onto famous brand names in the hopes of just getting people to download them; they may also be looking to get your information.
Sticking to big-name brands and downloading only from well-known app stores like iTunes or Google Play is the best way to keep those out of your life, and off your data. The same is true for where you go online to book your travel, says Epstein. "If you pick the wrong site, you've just handed over everything to someone." Sticking to known companies there too, whether that's hotels or airlines or cruise companies themselves or well-known online travel agents, is your best bet.
Deals that look too good to be true probably are. Read the fine print, too, and make sure that if your booking is cancelled -- especially by the booker -- the entire amount isn't considered a non-refundable deposit. Epstein also suggests calling the hotel to make sure they have the booking in case something went awry.

While on the road

The scams don't stop there, of course.
Traveling presents more ways that criminals can get into your life, especially if your guard's down because you're on the beach, drinking margaritas, or both. "Free Wi-Fi is the most dangerous cyber vector" for travelers, says Epstein.
Even if your hotel offers it for free, don't use it.
If you can't create your own Wi-Fi network by tethering, Epstein says, stick to your phone.
If you must use your laptop, make sure you use a full-tunnel encrypted VPN.
That way, what you're sending or receiving is protected. Securing your laptop and phone might sound basic, but Epstein says it's something that travelers can forget about -- especially the laptop. "If you don't know where something is, even if you get it back, it may not be what you thought it was," he says.
That's because someone could put malware on it. "It's a path into your company.
It's a front door." If you can, he says, leave the corporate laptop at home.
If you must bring it with you, have it locked away in a place that only you can access. Besides, it's a vacation. Who wants to bring their laptop along on that? Now you have a security reason not to bring work with you.

This story, "How to avoid common travel and vacation scams," was originally published by CIO.
If the order stands up to legal challenges, Apple would be forced to create a new customized iOS firmware that would remove the passcode lockout on a seized iPhone as part of the ongoing San Bernardino terrorism investigation. Maricopa County, the nation’s fourth most populous county, which encompasses Phoenix and the surrounding area, is also well-known for its very conservative sheriff, Joe Arpaio.

"Apple’s refusal to cooperate with a legitimate law enforcement investigation to unlock a phone used by terrorists puts Apple on the side of terrorists instead of on the side of public safety," County Attorney Bill Montgomery said in a statement. "Positioning their refusal to cooperate as having anything to do with privacy interests is a corporate PR stunt and ignores the Fourth Amendment protections afforded by our Constitution."

The agency specified that it has 564 smartphones, and of those, 366 are iPhones.

"If the potential for unauthorized access to an encryption key is truly motivating Apple’s unwillingness to assist in downloading information from specific iPhones, then let’s define the problem in those terms and work on that concern," Montgomery added. "Otherwise, Apple is proving indifferent to the need for evidence to hold people accountable who have harmed or intend to harm fellow citizens."
That’s not news, but it’s become newly relevant since the company and the FBI started a very loud, very public fight about the data stored on a particular iPhone. Privacy advocates have praised Apple’s commitment to full-device encryption by default, and after a false start last year, all new Android phones shipping with version 6.0 or higher should be encrypted by default as well.
It’s an effective tool for keeping thieves from grabbing your data even if they can take your phone. If you're looking for comprehensive privacy, including protection from law enforcement entities, there’s still a loophole here: iCloud.
Apple encourages the use of this service on every iPhone, iPad, and iPod Touch that it sells, and when you do use the service, it backs up your device every time you plug it into its power adapter within range of a known Wi-Fi network.

iCloud backups are comprehensive in a way that Android backups still aren’t, and if you’ve been following the San Bernardino case closely, you know that Apple’s own legal process guidelines (PDF) say that the company can hand iMessages, SMS/MMS messages, photos, app data, and voicemail over to law enforcement in the form of an iOS device backup (though some reports claim that Apple wants to strengthen the encryption on iCloud backups, removing the company's ability to hand the data over to law enforcement).

For most users, this will never be a problem, and the convenience of iCloud backups and easy preservation of your data far outweigh any risks.
For people who prefer full control over their data, the easiest option is to stop using iCloud and use iTunes instead.
This, too, is not news, and in some ways is a regression to the days before iOS 5 when you needed to use a computer to activate, update, and back up your phone at all.
But there are multiple benefits to doing local backups, so while the topic is on everyone’s mind we’ll show you how to do it (in case you don’t know) and what you get from it (in case you don’t know everything).

How to do it

[Screenshots by Andrew Cunningham: the relevant section of iTunes -- you'll definitely want to encrypt your local backup, but whether you want to shut iCloud backups off entirely is up to you; backing up to the computer; and the pane where you manage backups.]
It'll show you the name of the device and the timestamp as well as a padlock icon to signify that the backups are encrypted. First, obviously, you’ll need iTunes.
If you have OS X, it’s pre-installed; if you have Windows, it’s over here.
Version 12.3 is the first to support iOS 9, and it will run on OS X 10.8 or Windows 7 and anything newer.
Connect your phone to your computer with your Lightning cable and, if you’ve never connected them before, tell both the computer and the phone to trust each other when prompted.

From the summary screen, the next thing you’ll want to do is check the “Encrypt iPhone backup” box and set a password—ideally, this would be a unique password separate from your account password or your iOS passcode. Whatever it is, don’t forget it, because if you do, you won’t be able to restore your backup later. Hit the big Back Up Now button, wait for the backup to finish, and you’re done.

You can back your phone up to your computer and to iCloud if you want.
This isn’t an all-or-nothing proposition, and there are non-privacy-related reasons for doing a local backup even if you continue to do iCloud backups.
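If you'd rather drive these steps from a script than from the iTunes window, the open-source libimobiledevice project ships a command-line tool called idevicebackup2 that does roughly the same job. The subcommand names below reflect that tool's documented interface, but treat the exact invocations as an assumption and check `idevicebackup2 --help` on your own machine; this sketch only builds the commands so you can review them before running anything against a real device.

```python
# A rough command-line alternative to the iTunes steps above, built on the
# libimobiledevice project's idevicebackup2 tool. The subcommands used here
# ("encryption on", "backup --full") are assumptions based on that tool's
# documentation -- verify them locally before use.
from typing import List


def backup_commands(dest_dir: str, password: str) -> List[List[str]]:
    """Build the two idevicebackup2 invocations: first enable backup
    encryption on the device (the equivalent of iTunes' "Encrypt iPhone
    backup" checkbox), then take a full local backup into dest_dir."""
    return [
        ["idevicebackup2", "encryption", "on", password],
        ["idevicebackup2", "backup", "--full", dest_dir],
    ]


if __name__ == "__main__":
    import subprocess

    for cmd in backup_commands("/backups/iphone", "a-strong-unique-password"):
        print(" ".join(cmd))
        # subprocess.run(cmd, check=True)  # uncomment with a device attached
```

As with iTunes, the encryption password set here is the one you'll need at restore time, so pick something you won't lose.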
If you don’t want your backups on iCloud at all, you can click the “This computer” button so that automatic backups go to your computer instead of iCloud, or you can turn iCloud backups off from within the settings on your iPhone.
Also remember to delete old backups from iCloud if you don’t want them up there, since they’ll stay there even if you turn backups off, and Apple won’t keep copies of your stuff on its servers after you’ve deleted it. To manage and delete your backups locally, you’ll need to go to the iTunes preferences and hit the Devices tab.
All backups for all devices along with timestamps and lock icons (to denote an encrypted backup) will be listed here, and you can delete them if you’re running out of drive space or if you just don’t need them anymore.

Why do local, encrypted backups?

Privacy is definitely one reason to use local backups; if your encrypted phone backup is stored on your encrypted laptop that is itself protected with a strong password, there’s very little chance that anyone without the right credentials can get access to anything. There are also benefits when you’re restoring that backup to your iPhone.
As Apple’s page on encrypted iTunes backups outlines, encrypted local backups are the only ones that contain saved account passwords, Wi-Fi settings, browsing history, and data from the Health app.
Apple doesn’t want this data on its servers for security and privacy reasons, and it’s not stored in unencrypted local backups for the same reason. Use encrypted local backups, and you get that info back if you need to do a restore. It also helps if you’re upgrading to a new phone or using a loaner or replacement phone. When you restore an iCloud backup to a phone or tablet that’s not the phone or tablet you backed it up from, you don’t lose any of your photos or iMessage history or anything like that, but you do lose the credentials for e-mail accounts and any other apps that require authentication. This makes it so that someone with the password for your Apple ID couldn’t restore one of your backups to his or her own phone and gain access to every e-mail account and app that you’ve signed into on your own phone.
But it also makes it a hassle to move over to a new iPhone or one AppleCare just replaced.
Encrypted backups retain and restore that account information even if you’re moving to a different phone. Encrypted local backups won’t be the best choice for everybody. You give up convenience, as is usually the case when you make security and privacy a priority. You need to remember to connect your phone to your computer every day or two, you need to devote local storage space to the backups, and you won’t have access to those backups if anything happens to the computer.
It’s up to you to decide what balance of privacy and security works for you.
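For readers comfortable with a little scripting, the bookkeeping that iTunes' Devices tab does (listing each backup's name, timestamp, size, and whether it's encrypted) can be approximated by reading the MobileSync folder directly. The paths below are the standard backup locations on macOS and Windows; the IsEncrypted key in each backup's Manifest.plist is an assumption about the backup format, so verify it against your own backups before automating anything destructive.

```python
# A small sketch of what iTunes' Devices tab shows: enumerate the per-device
# backup folders iTunes keeps on disk and read each one's Manifest.plist.
# The default paths are the standard macOS/Windows locations; the
# "IsEncrypted" flag is an assumption about the backup format.
import os
import plistlib
from datetime import datetime
from pathlib import Path


def default_backup_root() -> Path:
    """Default MobileSync backup location for the current platform."""
    if os.name == "nt":
        return Path(os.environ["APPDATA"]) / "Apple Computer" / "MobileSync" / "Backup"
    return Path.home() / "Library" / "Application Support" / "MobileSync" / "Backup"


def list_backups(root: Path):
    """Yield (folder name, last-modified time, total size in bytes,
    encrypted?) for each backup folder under root."""
    for entry in sorted(root.iterdir()):
        if not entry.is_dir():
            continue
        size = sum(f.stat().st_size for f in entry.rglob("*") if f.is_file())
        encrypted = False
        manifest = entry / "Manifest.plist"
        if manifest.exists():
            with manifest.open("rb") as fh:
                encrypted = bool(plistlib.load(fh).get("IsEncrypted", False))
        yield entry.name, datetime.fromtimestamp(entry.stat().st_mtime), size, encrypted


if __name__ == "__main__":
    root = default_backup_root()
    if root.exists():
        for name, mtime, size, enc in list_backups(root):
            print(f"{name}  {mtime:%Y-%m-%d %H:%M}  {size / 1e6:.1f} MB  "
                  f"{'encrypted' if enc else 'unencrypted'}")
```

This is read-only, so it's safe to run as a sanity check; anything that deletes backup folders should go through iTunes itself unless you're sure what you're removing.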