
Tag: Penetration Testing

APT Threat Evolution in Q1 2017

Kaspersky Lab is currently tracking more than a hundred threat actors and sophisticated malicious operations in over 80 countries.

During the first quarter of 2017, there were 33 private reports released to subscribers of our Intelligence Services, with IOC data and YARA rules to assist in forensics and malware-hunting.

Rapid7 Adds IoT Hardware Support to Metasploit Security Testing

Open-source Metasploit penetration testing framework gets new hardware support, enabling researchers to target IoT.

Metasploit Can Now Be Directly Linked To Hardware For Vulnerability Testing

New hardware bridge extends penetration testing tools' capabilities into the physical world.

Oracle Patches 270 Vulnerabilities in First Patch Update of 2017

Oracle is patching a long list of different vulnerabilities in its software portfolio.

This time, it's the Oracle E-Business Suite that is getting the most patches. Oracle is out with its first Critical Patch Update (CPU) for 2017 and it's a big one.

Why companies offer a hacking bounty — and why there are...

Want to make a cool $20,000? All you have to do is hack the Nintendo 3DS, a handheld console that’s been out for a few years already.

A listing on HackerOne spells everything out: hackers will receive a cash payment for discovering a vulnerability in the system, which lets gamers make purchases and stores private information such as age and gender.

There’s a range for this, of course—some discoveries will pay $100.

Also, anyone who files a report must follow the exact template. It makes you wonder—why would a major Japanese corporation offer a reward like this? Why is it even worth the expense, especially when you know they have internal security researchers? Many companies, including Apple, Uber, and Yelp, regularly offer bounties. One report said Apple would pay as much as $200,000 if you find an exploit in the new iPhone.

The expense is obviously worth it or the bounty programs—and sites like HackerOne—wouldn’t exist. “The main advantage is that you get researchers that think like a hacker and will try to find vulnerabilities like a hacker,” says Alvaro Hoyos, the CSO at OneLogin, an identity and access management company. “This helps you identify issues that either your internal or external penetration testing teams might miss, not just because of that hacker frame of mind, but also because you have a greater quantity of researchers constantly testing your systems.”

Chris Roberts, the chief security architect at Acalvio Technologies, an endpoint protection company, says the rise of hacking bounties is due to how the community has become more organized and helpful.
Sites like BugCrowd and BugSheet have made it easier for larger firms to post a bounty, accept research findings, and pay the researcher. He tells CSO that he has been paid about $3,000 to $5,000 to find a vulnerability, although in some cases the company only gives him a warm thanks.
In some cases, a bounty for his team has run as high as $25,000 to find a bug a hacker could expose.

Challenges in offering a bounty

Roberts noted that companies are not always prepared to offer a bounty or set up the bounty program. One big challenge is finding the right bounty amount to match the vulnerability. “This can lead to some unpleasant exchanges with researchers,” he says. “You will have to properly manage the input, the responses and the findings—even though you are now hoping that your IT security budget is lower. You will have to staff up to work through the submitted results or risk the wrath of people getting fed up not getting a response.” In some cases, hackers will not want to be identified and may not want to work with a corporate legal team once a bug is discovered, he says. Not all researchers want to read through a complex reporting template that spells out every detail.

And, if the program is not configured properly (say, having a test environment only for the researchers), real attacks might be hard to discern. Hoyos says one potential challenge to a bounty is that it can call attention to the new service, gadget, or app.
It could alert a criminal hacker that a company like Apple or Uber knows there could be a vulnerability, even if that’s not necessarily true. “If your company lacks the resources to close out bugs being reported in a timely manner, you are, in theory, letting more and more third parties know an exploitable bug exists,” says Hoyos. “Chances that none of those third parties will disclose that bug to a malicious actor or abuse it themselves goes up as more of them become aware.

This of course is assuming the worst possible outcome and knowing what you don’t know is still extremely valuable.” Paul Innella, the CEO of TDI, a cybersecurity company, says some bounty programs go awry—hackers discover an exploit, and instead of letting the company know and collecting the reward, they sell the discovery on the Dark Web.

The bounty program created a new problem.

What to expect from both sides

Offering a bounty—or being the researcher who looks for the exploits—is also challenging because in many ways the temptation is to offer a bounty instead of hiring security professionals, running your own penetration tests, and setting up a security infrastructure. “If you’re using this methodology because you don’t understand your corporate defenses, meaning you’re not equipped to detect attacks and act upon them, then offering a bounty is not for you,” says Innella. “Bounty programs should be used by companies with robust cyber defenses and considered a part of regimental cybersecurity testing, essentially in an outsourced capacity.” Jumping into ethical hacking to find exploits is not something to take lightly, according to Nathan Wenzler, a security architect at AsTech Consulting. One important point he made: while there is a rise in the number of hacking bounties, there’s also a trend toward offering lower amounts. Uber, for example, has paid a total of $819,085 since launching a bounty with a top range of $5,000 to $10,000, but the average is more like $750 to $1,000 per exploit. Still, Paul Calatayud, the CTO at FireMon, a firewall management company, says finding a zero-day exploit for a large enterprise can pay much higher—into seven figures. That’s a pretty good payday.

This story, "Why companies offer a hacking bounty -- and why there are challenges" was originally published by CSO.

Free cybersecurity tools for all your needs

There are more free information security tools than you can highlight with a fistful of whiteboard pointers. While many are trialware-based enticements designed to lure decision makers into purchasing the pricey premium counterparts of these freebies, many are full-blown utilities.

A few important categories include threat intelligence tools, tools to build security in during the development stage, penetration testers, and forensics tools. Threat intelligence tools include AlienVault’s Open Threat Exchange, which collects and shares online threat intelligence as well as the Hailataxii and Cymon.io threat exchanges.

There are a variety of SAST (Static Application Security Testing) tools for security testing software applications that developers write using different languages whether C/C++, Ruby on Rails, or Python.

For penetration testing, we present the Nmap Security Scanner and the broadly useful Wireshark network protocol analyzer.
Specific forensics products include the GRR remote forensic framework; Autopsy and SleuthKit, which analyze hard drives and smartphones; and the Volatility Foundation's open source framework for memory analysis/forensics.

US fails in bid to renegotiate arms trade restrictions on exploit...

Guns, bullets, and malware samples—all now controlled under the Wassenaar Arrangement. If your work involves exploiting vulnerabilities in software, congratulations—you're potentially an arms merchant in the eyes of many governments. Your knowledge about how to hack could be classified as a munition. A United States delegation yesterday failed to convince all of the members of the Wassenaar Arrangement—a 41-country compact that sets guidelines for restricting exports of conventional weapons and "dual use goods"—to modify rules that would place export restrictions on technologies and data related to computer system exploits.

And while the US government has so far declined to implement rules based on the existing convention, other countries may soon require export licenses from anyone who shares exploit data across borders—even in the form of security training. The changes governing "intrusion software" were adopted by the Wassenaar plenary in 2013, and they were set to be implemented by member countries last year.

Those changes were intended to prevent repressive regimes from gaining access to commercial malware—such as the code sold by the Italy-based Hacking Team to Sudan, and the surveillance tools from Blue Coat that were resold to Syria's Assad regime and used to catch dissident bloggers. But when the language of the new controls was passed to the Commerce Department by the State Department for implementation, the new language quickly caused consternation.
Security researchers and industry revolted at the proposed rules, calling them too broad in their definition of "intrusion software." Harley Geiger, the director of public policy at the security testing software firm Rapid7, explained: The US proposed an implementation rule [for the controls].

But it did so knowing there were problems.
So during the course of this year, they did not put forth an implementing rule because they said they did not want to put forth a rule until the problems were resolved. It soon became apparent there was no way to reconcile the concerns raised by security experts with the language of the control agreed upon by the Wassenaar members.
So the US moved to renegotiate the restrictions in March as the new round of negotiations began.

That renegotiation collapsed yesterday. Katie Moussouris, a member of the US Wassenaar delegation, CEO of Luta Security, and former chief policy officer at the bug bounty company HackerOne, said the problem lay in the language of the controls themselves.
She told Ars Technica: It's the words.

Finding precise enough language that translates well into 41 countries' domestic export laws is the challenge here.
It shouldn't surprise anyone that it will take longer than a few months of renegotiation to get consensus on the revised words. Moussouris noted that some of the changes the US wanted were approved, including "more precise 'command and control' terminology that is now in the Arrangement." The previous language could have been construed to include "more routine software," she said—including security software that is purely defensive.

The new language tightens the definition to specifically cover software that controls remote malware. Geiger agreed that there had been some beneficial changes to the Wassenaar Arrangement's language. "But those [changes] were minor," Geiger noted.

The key control language remains in place, and other countries have already begun implementing export controls based on it. Moussouris explained: There has already been a chilling effect on security researchers that we've observed over the past few years, since many are not sure how they are affected. Non-disclosure and decreasing participation among researchers based in Wassenaar countries in international exploitation competitions like Pwn2Own have already been observed. As of yet, since the rules have not been implemented in the US, they've had no direct impact on US security firms.

But the rules have been a hindrance for companies with a presence in multiple countries, Geiger said. "US organizations would not have to get export licenses," he explained, "but if they're working with people in another country to receive, that person would be bound by a different set of rules.
If you're working with a partner in another country, it slows down the exchange of information." Geiger said that it could potentially affect companies trying to move data about exploits they were trying to defend against from operations in one country to another—potentially slowing their ability to respond to new threats.

"The ongoing uncertainty among security practitioners and researchers will delay the passing between defenders of many important exploitation techniques and malicious command and control software samples," Moussouris agreed. "The presence of these controls in their current form only serves to increase disadvantages of defenders by introducing uncertainty and potential delays in passing vital samples and analysis."

Now it will be left to the incoming Trump administration to decide how, or if, to implement rules based on the existing agreement, or to return to the negotiating table to hammer out universally acceptable language that fixes the problems with the controls.

And in the meantime, security researchers and companies will have to lobby the governments that are going ahead with rules based on the control to give them more freedom to move information—or deal with the headaches of applying for export licenses.

This could apply to things like training courses for penetration testing and other skills that deal with exploits—companies are likely to run into restrictions about who they can allow to attend those classes, since passing the information to someone from out of the country could be considered the same as exporting a munition without a license. Moussouris is relatively confident that the US will return to the table to reform the restrictions. "It is impossible to predict the next administration's choices here," she said. "But if our new leadership listens to any of the tech giants who were sitting around the table at the recent tech summit, they would all unanimously support the ongoing renegotiation of the Wassenaar Arrangement, as did the bipartisan Congressional Cybersecurity Caucus co-chaired by Congressman Langevin.

This isn't just about clearing the operational path for security research or security tech companies; this is about all technological defense, and the need for Internet defenders to work together in real time across borders."

No, there’s no evidence (yet) the feds tried to hack Georgia’s...

Georgia politician Brian Kemp reads at a Holocaust remembrance ceremony in the state (credit: Georgia.gov).

Accusations that the US Department of Homeland Security tried to hack Georgia's voter registration database are running rampant.

But until officials from that state's Secretary of State office provide basic details, people should remain highly skeptical. The controversy erupted after Georgia Secretary of State Brian Kemp sent and publicly released a letter addressed to DHS Secretary Jeh Johnson.
In it, Kemp made a series of statements so vague in their technical detail that it's impossible to conclude any kind of hacking or breach—at least as those terms are used by security professionals—took place. "On November 15, 2016, an IP address associated with the Department of Homeland Security made an unsuccessful attempt to penetrate the Georgia Secretary of State's firewall," Kemp wrote. "I am writing you to ask whether DHS was aware of this attempt and, if so, why DHS was attempting to breach our firewall." Kemp continued: The private-sector security provider that monitors the agency's firewall detected a large unblocked scan event on November 15 at 8:43 AM.

The event was an IP address (216.81.81.80) attempting to scan certain aspects of the Georgia Secretary of State's infrastructure.

The attempt to breach our system was unsuccessful. At no time has my office agreed to or permitted DHS to conduct penetration testing or security scans of our network. Moreover, your Department has not contacted my office since this unsuccessful incident to alert us of any security event that would require testing or scanning of our network.

This is especially odd and concerning since I serve on the Election Cyber Security Working Group that your office created. As you may know, the Georgia Secretary of State's office maintains the statewide voter registration database containing the personal information of over 6.5 million Georgians.
In addition, we hold the information for over 800,000 corporate entities and over 500,000 licensed or registered professionals. As Georgia's Secretary of State, I take cyber security very seriously.

That is why I have contracted with a global leader in monitored security services to provide immediate responses to these types of threats.

This firm analyzes more than 180 billion events a day globally across a 5,000+ customer base which includes many Fortune 500 companies.

Clearly, this type of resource and service is necessary to protect Georgians' data against the type of event that occurred on November 15. The letter uses some scary language, including an "attempt to penetrate" and "breach" the agency's firewall and system plus "security event." However, nowhere does it say what gives rise to such claims.

The phrases "large unblocked scan event" and "attempting to scan certain aspects of the Georgia Secretary of State's infrastructure" are vague to the point of being almost meaningless. Many security professionals on social media are interpreting them to mean a computer with an IP address belonging to the DHS sent a request to one or more Internet ports on a Georgia Secretary of State network to see if they provided some sort of response. Such scans allow someone to know if network ports reserved for e-mail, Web traffic, and all sorts of other Internet services are responding to queries from outside services.
Security professionals and blackhat hackers alike use such scans all the time to identify vulnerable networks.
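The kind of port scan described above is easy to illustrate. The following is a minimal sketch, not the tooling any of the parties in this story used, that relies only on Python's standard socket module to check whether a few well-known TCP ports on a host accept connections:

```python
import socket

def scan_ports(host, ports, timeout=1.0):
    """Return the subset of ports on which a TCP connection succeeds."""
    open_ports = []
    for port in ports:
        # connect_ex returns 0 when the three-way handshake completes,
        # i.e. something is listening on that port.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Check a few well-known service ports on the local machine.
print(scan_ports("127.0.0.1", [22, 25, 80, 443]))
```

A response on a port reveals only that a service is listening; real scanners such as Nmap add service fingerprinting and timing controls on top of this basic probe. Scanning hosts you do not own may be restricted where you live.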

For instance, in the weeks following the 2014 discovery of the Heartbleed vulnerability—arguably one of the most severe security bugs ever to hit the Internet—it was network scans that allowed the public to learn that huge swaths of the Internet remained vulnerable and to identify the 300,000 specific sites that had yet to install a patch. It was the same sort of scan in 2013 that identified more than 81 million IP addresses that were exposing a networking feature known as Universal Plug and Play to the Internet at large.

The setting, which was in violation of guidelines that say UPnP isn't supposed to communicate with devices that are outside a local network, put them at risk of being remotely hijacked by people halfway around the world.

The discovery was only possible by performing a scan on every routable IPv4 address about once a week over a six-month period. As a security researcher and CEO of penetration testing firm Errata Security, Rob Graham regularly scans the entire Internet for insights about vulnerabilities. "I get these letters all the time," he told Ars, referring to the type of letter Kemp sent. While some people argue the practice is unethical or even illegal, Graham has never been sued or prosecuted for it, and Ars isn't aware of any practicing attorneys who say such scans are unlawful. (Graham does agree to stop scanning IP addresses upon request by the owners of those addresses.)

Playing devil's advocate

In fairness, there's no way to be certain Kemp's letter is complaining of a network scan.

The references to penetration testing and attempts to breach the agency's system and to penetrate or breach its firewall raise the possibility of something that went beyond passive scans.
If, for example, the DHS computer attempted to exploit a SQL injection vulnerability that divulged protected data or accounts, such a move could very well run afoul of criminal hacking statutes.
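To make the distinction concrete, here is a toy illustration of the class of attack the article mentions, using Python's built-in sqlite3 module; the table and records are invented for the example. The unsafe lookup pastes attacker-controlled input into the SQL string, while the parameterized version treats it strictly as data:

```python
import sqlite3

# Build a throwaway in-memory database with a "protected" table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE voters (name TEXT, ssn TEXT)")
db.execute("INSERT INTO voters VALUES ('Alice', '123-45-6789')")

def lookup_unsafe(name):
    # Vulnerable: user input is interpolated into the SQL text.
    return db.execute(
        f"SELECT name FROM voters WHERE name = '{name}'").fetchall()

def lookup_safe(name):
    # Parameterized query: the driver keeps input out of the SQL syntax.
    return db.execute(
        "SELECT name FROM voters WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"          # classic injection string
print(lookup_unsafe(payload))    # → [('Alice',)]  (every row leaks)
print(lookup_safe(payload))      # → []  (no such name exists)
```

An attempt of this kind against a live system, unlike a passive port probe, actively extracts protected data, which is why it would fall much more clearly under hacking statutes.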

Trying to exploit specific vulnerabilities in the agency's firewall might also be unlawful. Meanwhile, the phrase "large unblocked scan event" is so technically clumsy that security practitioners say it could mean just about anything. The problem with Kemp's letter is that readers have no way of knowing what gave rise to his exceptional claims. Yet despite the vagueness, the Internet is now awash with reports that the DHS tried and failed to hack Georgia's Secretary of State office, an event that if true, would amount to an extremely serious offense.

Georgia Secretary of State officials didn't respond to Ars' request for an interview.
In the absence of crucial details left out of Thursday's letter, there's little that's odd or concerning about the reported November 15 complaint, and there's certainly no evidence of an attempted breach by the DHS at this time.

10 essential PowerShell security scripts for Windows administrators

PowerShell is an enormous addition to the Windows toolbox that gives Windows admins the ability to automate all sorts of tasks, such as rotating logs, deploying patches, and managing users. Whether it's specific Windows administration jobs or security-...

Stealing, scamming, bluffing: El Reg rides along with pen-testing ‘red team...

Broad smiles, good suits and fake IDs test security in new dimensions. "Go to this McDonald's," Chris Gatford told me. "There's a 'Create Your Taste' burger-builder PC there and you should be able to access the OS.

Find that machine, open the command prompt and pretend to do something important. "I'll be watching you."

Gatford instructed your reporter to visit the burger barn because he practices a form of penetration testing called "red teaming", wherein consultants attack clients using techniques limited only by their imagination, ingenuity, and bravado. He wanted me to break the burger-builder to probe my weaknesses before he would let The Register ride along on a red-team raid aimed at breaking into the supposedly secure headquarters of a major property chain worth hundreds of millions of dollars. Before we try for that target, Gatford, director of penetration testing firm HackLabs, wants to know if I will give the game away during a social engineering exploit.

So when the McDonald's computer turns out to have been fixed and my fake system administrator act cancelled, we visit an office building's lobby, where Gatford challenges me to break into a small glass-walled room containing a shabby-looking ATM. I can't see a way into the locked room.
I think I see a security camera peering down from the roof, but later on I'm not sure I did.
I can't think of a way in and I'm trying to look so casual I know I'm certain to look nervous. Time's up.

Gatford is finished with the lobby clerk. He asks how I would get in, and hints in my silence that the door responds to heat sensors. I mutter something stupid about using a hair dryer.

Gatford laughs and reminds me about heat packs you'd slip into gloves or ski boots. "Slide one of those under the crack," he says. I've failed that test but stayed cool, so Gatford decides he's happy to have me along on a red-team raid, if only because red teams seldom face significant resistance. "At the end of the day, people just want to help," Gatford says.

Red alert

Costume is therefore an important element of a red-team raid.

For this raid, our software exploits are suits and clipboards.
Sometimes it's high-visibility tradie vests, hard hats, or anything that makes a security tester appear legitimate. Once dressed for the part, practitioners use social-engineering skills to manipulate staff into doing their bidding.

Fans of Mr Robot may recall an episode where the protagonist uses social engineering to gain access to a highly secure data centre; this is red teaming stylised.

Think of a real-world capture-the-flag exercise where the flags are located in the CEO's office, the guard office, and highly secure areas behind multiple layers of locked doors. By scoring flags, testers demonstrate the fallibility of physical defences. Only one manager, usually the CEO of the target company, tends to know an operation is afoot. Limited knowledge, or black-box testing, is critical to examining the real defences of an organisation. Red teamers are typically not told anything outside of the barebones criteria of the job, while staff know nothing at all.
It catches tech teams off guard and can make them look bad.

Gatford is not the only tester forced to calm irate staff with the same social engineering manipulation he uses to breach defences. Red teamers almost always win, pushing some to more audacious attacks. Vulture South knows of one Australian team busted by police after the black-clad hackers abseiled down from the roof of a data centre with GoPro cameras strapped to their heads. Across the Pacific, veteran security tester Charles Henderson tells of how years back he exited a warehouse after a red-teaming job. "I was walking out to leave and I looked over and saw this truck," Henderson says. "It was full of the company's disks ready to be shredded.

The keys were in it." Henderson phoned the CEO and asked if the truck was in-scope, a term signalling a green light for penetration testers.
It was, and if it weren't for a potential call to police, he would have hopped into the cab and driven off. Henderson now leads Dell's new red-teaming unit in the United States, which he built from the ground up. "There are some instances where criminal law makes little distinction between actions and intent, placing red teams in predicaments during an assignment, particularly when performing physical intrusion tasks," Nathaniel Carew and Michael McKinnon from Sense of Security's Melbourne office say. "They should always ensure they carry with them a letter of authority from the enterprise." Your reporter has, over pints with the hacking community, heard many stories of law enforcement showing up during red-team ops. One Australian was sitting off a site staring through a military-grade sniper scope, only to have a cop tap on the window.

Gatford some years ago found himself face-to-face in a small room with a massive industrial furnace while taking a wrong turn on a red-team assignment at a NSW utility. He and his colleagues were dressed in suits.

Another tester on an assignment in the Middle East was detained for a day by AK-47-wielding guards after the CEO failed to answer the phone. Red teamers have been stopped by police in London, Sydney, and Quebec, The Register hears. One of Australia's notably talented red teamers told of how he completely compromised a huge gaming company using his laptop and mobile phone. Whether red teaming on site or behind the keyboard, the mission is the same: breach by any means necessary.

Equipment check

A fortnight after the ATM incident, The Register is at HackLabs' Manly office.
It's an unassuming and unmarked door that takes this reporter several minutes to spot. Upstairs, entry passes to international hacker cons are draped from one wall, a collection of gadgets on a neighbouring shelf.

Then there's the equipment area.
Scanners, radios, a 3D printer, and network equipment sit beside identity cards sporting the same face but different names and titles.

There's a PwnPlug and three versions of the iconic Wi-Fi Pineapple over by the lockpicks.

A trio of neon hard hats dangle from hooks. "What do you think?" Gatford asks.
It's impressive; a messy collection of more hacking gadgets than this reporter had seen in one place, all showing use or in some stage of construction.

This is a workshop of tools, not toys. In his office, Gatford revealed the target customer. The Register agrees to obscure the client's name, and any identifying particulars, so the pseudonym "Estate Brokers" will serve.

Gatford speaks of the industry in which it operates, Brokers' clientele, and their likely approach to security. The customer has multiple properties in Sydney's central business district, some housing clients of high value to attackers.
It has undergone technical security testing before, but has not yet evaluated its social engineering resilience. The day before, Gatford ran some reconnaissance of the first building we are to hit, watching the flow of people in and out of the building from the pavement. Our targets, he says, are the bottlenecks like doors and escalators that force people to bunch up. JavaScript Disabled Please Enable JavaScript to use this feature. He unzips a small suitcase revealing what looks like a large scanner, with cables and D-cell batteries flowing from circuit boards. "It's an access card reader", Gatford says.
It reads the most common frequencies used by the typically white rigid plastic door entry cards that dangle from staffer waists.

There are more secure versions that this particular device does not read without modification. "No one uses the secure stuff, mate," Gatford says with the same half-smile worn by most in his sector when talking about the pervasive unwillingness to spend on security. I point to a blue plastic card sleeve that turns out to be a SkimSAFE FIPS 201-certified anti-skimming card protector.

Gatford pops an access card into it and waves it about a foot in front of the suitcase-sized scanner.
It beeps and card number data flashes up on a monitor. "So much for that," Gatford laughs. He taps away at his Mac, loading up Estate Brokers' website. "We'll need employee identity cards or we'll be asked too many questions," Gatford says. We are to play the role of contractors on site to conduct an audit of IT equipment, so we will need something that looks official enough to pass cursory inspection. The company name and logo image is copied over, a mug shot of your reporter snapped, and both are printed on a laminated white identity card.

Gatford does the same for himself. We're auditors come to itemise Estate Brokers' security systems and make sure everything is running. "We should get going," he says as he places hacking gear into a hard shell suitcase.
So off we go.

Beep beep beep beepbeepbeep

Our attack was staged in two parts over two days.

Estate Brokers has an office in a luxurious CBD tower. We need to compromise that in order to breach the second line of defences. We'll need an access card to get through the doors, however, and our laptop-sized skimmer, which made a mockery of the SkimSAFE gadget, will be the key. It is 4:32pm and employees are starting to pour out of the building.

Gatford hands me the skimmer concealed in a very ordinary-looking laptop bag. "Go get some cards," he says. Almost everyone clips access cards on their right hip.
If I can get the bag within 30cm of the cards, I'll hear the soft beep I've been training my ear to detect that signals a successful read. Maybe one in 20 wear their access cards like a necklace. "Hold your bag in your left hand, and pretend to check the time on your watch," Gatford says.

That raises the scanner high enough to get a hit. I'm talking to no one on my mobile as I clumsily weave in and out of brisk walking staff, copping shade from those whose patience has expired for the day.

Beep.

Beep.

Beep, beep, beep, beep, beepbeepbeepbeep.

There are dozens of beeps, far too many to count.

Then we enter a crowded lift and it's like a musical.
It's fun, exhilarating stuff.

The staff hail from law firms, big tech, even the Federal Government.

And we now have their access cards. Estate Brokers is on level 10, but we need a card to send the lift to it. No matter, people just want to help, remember? The lady in the lift is more than happy to tap her card for the two smiling blokes in suits.

Gatford knows the office and puts me in front. "Walk left, second right, second left, then right." I recite it. With people behind us, I walk out and start to turn right, before tightening, and speeding up through the security door someone has propped open. We enter an open-plan office. "They are terrible for security," I recall Gatford saying earlier that day.
It allows attackers to walk anywhere without the challenge of doors. Lucky for us.

Gatford takes the lead and we cruise past staff bashing away their final hour in cubicles, straight to the stationery room. No one is there as Gatford fills a bag with letterheads and branded pens, rifling through the shelves for anything else that could prove useful. We head back to the lobby for a few more rounds of card stealing. Not all the reads come out clean, and not all the staff we hit are from Estate Brokers, so it pays to scan plenty of cards. "Look out for that guard down there," Gatford says, indicating the edge of the floor where a security guard can be seen on ground level. "Tell you what, if you can get his card, I'll give you 50 bucks." "You're on," I say. The guard has his card so high on his chest it is almost under his chin.

At this point I think I'm unbeatable so after one nerve-cooling circuit on the phone, I walk up to him checking my watch with my arm so high I know I look strange.
I don't care, though, because I figure customer service is a big thing in the corporate world and he'll keep his opinions to himself.
I ask him where some made-up law firm is as I hear the beep.

Silver tongue

It is 8:30am the next day and I am back in Gatford's office. We peruse the access cards. He opens up the large text file dump of yesterday's haul and tells me what the data fields represent. "These are the building numbers; they cycle between one and 255, and these are the floor numbers," he says.

There are blank fields and junk characters from erroneous scans. He works out which belong to Estate Brokers and writes them to blank cards.
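Sorting good reads from junk is mostly mundane text wrangling. The sketch below shows how such a sift might look in Python; the field layout, building number 142, and the sample card IDs are all invented for illustration, since real dumps vary with the reader and card format.

```python
# Hypothetical sketch of sifting a skimmer dump like the one Gatford
# describes. The CSV-ish layout and all sample values are made up.

RAW_DUMP = """\
142,10,0003491
142,10,0007762
,,\x00\x00junk
87,3,0001204
142,10,0005518
"""

def parse_dump(raw):
    """Yield (building, floor, card_id) tuples, skipping bad reads."""
    for line in raw.splitlines():
        parts = line.split(",")
        if len(parts) != 3:
            continue
        building, floor, card_id = (p.strip() for p in parts)
        # Building numbers cycle between 1 and 255; discard blanks and junk.
        if not (building.isdigit() and 1 <= int(building) <= 255):
            continue
        if not (floor.isdigit() and card_id.isdigit()):
            continue
        yield int(building), int(floor), card_id

# Keep only cards for the target building and floor ("Estate Brokers",
# level 10, in this made-up example) before writing them to blanks.
targets = [c for c in parse_dump(RAW_DUMP) if c[0] == 142 and c[1] == 10]
print(targets)
```

Running this on the sample dump keeps the three level-10 reads and drops the blank and junk lines, which is essentially the triage Gatford performs before cloning.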

They work.

More reconnaissance

Estate Brokers has more buildings, which Gatford will test after your reporter leaves. He fires up Apple Maps and Google Street View. With the eyes of a budding red teamer, I am staggered by the level of detail they offer.

Apple is great for external building architecture, like routing pathways across neighbouring rooftops, Gatford says, while Google lets you explore the front of buildings for cameras and possible sheltered access points.
Some mapping services even let you go inside lobbies. Today's mission is to get into the guards' office and record the security controls in place.
If we can learn the name and version of the building management system, we've won.

Anything more is a bonus for Gatford's subsequent report. We take the Estate Brokers stationery haul along with our access cards and fake identity badges and head out to the firm's second site. But first, coffee in the lobby. We chat about red teaming, about how humans are always the weakest link. We eat and are magnanimous with the waiting staff.

Gatford gets talking to one lady and says how he has forgotten the building manager's name. "Jason sent us in," he says, truthfully. Jason is the guy who ordered the red team test, but we don't have anything else to help us.

The rest is up to Gatford's skills. It takes a few minutes for the waitress to come back.

The person who she consulted is suspicious and asks a few challenging questions. Not to worry, we have identity cards and Gatford is an old hand.
I quietly muse over how I would have clammed up and failed at this point, but I'm happily in the backseat, gazing at my phone. We use the access cards skimmed the day earlier to take the lift up to an Estate Brokers level.
It is a cold, white corridor, unkempt, and made for services, not customers.

There's a security door, but no one responds to our knocks.

There are CCTV cameras. We return down to the lobby. Michael is the manager Gatford had asked about. He is standing at the lifts with another guy, and they greet us with brusque handshakes, Michael's barely concealed irritation threatening to boil over in response to our surprise audit. He rings Jason, but there's no answer.
I watch Gatford weave around Michael's questions and witness the subtle defusal.
It's impressive stuff. Michael says the security room is on the basement level, so we head back into the lift and beep our way down with our cards. The basement is dimly lit and lined with dank, white concrete. We spy the security room beaming with CCTV. "Don't hesitate, be confident," Gatford tells me. We stride towards the door, knock, and Gatford talks through the glass slit to the guard inside. Gatford tells him our story. He's a nice bloke, around 50 years old, with a broad smile.

After some back-and-forth about how Jason screwed up and failed to tell anyone about the audit, he lets us in. My pulse quickens as Gatford walks over to a terminal chatting away to the guard.

There are banks of CCTV screens showing footage from around the building.

A pile of access cards.
Some software boxes. I hear the guard telling Gatford how staff use remote desktop protocol to log in to the building management system, our mission objective. "What version?" Gatford asks. "Uh, 7.1.
It crashes a lot." Bingo.

[Pictured: day one, heading up in a crowded lift. Shot with a pen camera]

I look down and there are logins scrawled on Post-it notes. Of course.
I snap a few photos while their backs are turned. Behind me is a small room with a server rack and an unlocked cabinet full of keys.
I think Gatford should see it so I walk back out and think of a reason to chat to the guard.
I don't want to talk technology because I'm worried my nerves will make me say something stupid.
I see a motorbike helmet. "What do you ride?" I ask. He tells me about his BMW 1200GS. Nice bike.
I tell him I'm about ready to upgrade my Suzuki and share a story about a recent ride through some mountainous countryside. Gatford, meanwhile, is out of sight, holed up in the server room snapping photos of the racks and keys. More gravy for the report. We thank the guard and leave.
I feel unshakably guilty.

From the red to the black

Gatford and I debrief over drinks, a beer for me, single-malt whiskey for him. We talk again about how the same courtesy and acquiescence to the customer that society demands creates avenues for manipulation. It isn't just red teamers who exploit this; their craft is essentially the ancient grifts and cons that have ripped off countless gullible victims, won elections, and made spear phishing a viable attack.

I ask Gatford why red teaming is needed when the typical enterprise fails security basics, leaving old application security vulnerabilities in place, forgetting to shut down disused domains, and relying on checkbox, compliance-driven audits known to be bad practice. "You can't ignore one area of security just to focus on another," he says. "And you don't do red teaming in isolation."

Carew and McKinnon agree, adding that red teaming is distinct from penetration testing: the former is a deliberately hostile attack through the easiest path to the heart of an organisation, while the latter shakes out all electronic vulnerabilities. "Penetration testing delivers an exhaustive battery of digital intrusion tests that find bugs from critical, all the way down to informational... and compliance problems and opportunities," they say in a client paper detailing aspects of red teaming [PDF]. "In contrast, red teaming aims to exploit the most effective vulnerabilities in order to capture a target, and is not a replacement for penetration testing as it provides nowhere near the same exhaustive review."

Red teaming, they say, helps organisations to better defend against competitors, organised crime, and even cops and spies in some countries. Gatford sells red teaming as a package.

Australia's boutique consultancies, and those across the ditch in New Zealand, pride themselves on close partnerships with their clients.

They point out the holes, and then help to heal.

They offer mitigation strategies, harass vendors for patches, and help businesses move bit by bit from exposed to secure. For his part, Gatford is notably proud of his gamified social engineering training, which he says is designed to showcase the importance of defence against the human side of security, covering attacks like phishing and red teaming. He's started training those keen on entering red teaming through a three-day practical course. "Estate Brokers", like others signing up for this burgeoning area of security testing, will go through that training.

Gatford will walk staff through how he exploited their kindness to breach the secure core of the organisation. And how the next time, it could be real criminals who exploit their willingness to help. ®

Could this be you? Really Offensive Security Engineer sought by Facebook

'Here's your new password, champ – GoF*!#Urs3lf'

Facebook is hiring an Offensive Security Engineer, and not the sort inclined to disparage the length of your keys or your choice of encryption algorithm.

"Facebook's Security team is looking for an offensive security engineer that can deliver technical leadership for our offensive security team and execute tactical, offensive assessments across our environments," a recent company job posting says.

Facebook isn't looking to join the dark side, subverting systems and launching denial-of-service attacks through a botnet. Nor is it aiming to retaliate against attackers, a model pursued and abandoned a decade ago by Blue Security. Rather, it's looking for an individual versed in attack techniques: a penetration tester.

While this isn't a new development at Facebook – the social network has had a "red team" tasked with penetration testing for years – it appears to be one at Microsoft, at least in its Windows and Devices group. Microsoft in September posted a job "seeking top-notch talent to lead a new team focused on offensive security research in the Windows and Devices group at Microsoft." Facebook and Microsoft declined to comment. Apple is also looking to fill at least three positions that involve penetration testing.

Joyce Brocaglia, CEO of cybersecurity recruiting firm Alta Associates, in a phone interview with The Register, said her firm has recently been retained to perform multiple personnel searches for companies looking to hire senior security executives and to build security operations centers.
She said that there's growing interest in hiring security engineers versed in penetration testing. "We absolutely see that happening more often," Brocaglia said. "A lot of companies in the past had been outsourcing that function and are now bringing it inside."

Brocaglia said not only are companies looking for security engineers capable of penetration testing, but they want people skilled enough to build their own tools. Asked about possible reasons for the interest in staff hackers, Brocaglia suggested that some of it is cyclical and that outsourcing is just less appealing at the moment.

Alan Paller, founder and director of research for the SANS Institute, said in an email to The Register that the initial surge in internal penetration testing began about ten years ago and was focused on testing applications for internal and external use, to minimize flaws. Firms complemented internal efforts with external application testers, Paller said, noting that most of the time, systems and network penetration testing was handled by outside firms and represented a source of business for security consulting companies.

"But the confidence that people had in the completeness of outside system and network penetration testing has been lessened," Paller said. "Part of that is due to the increased skill set that many companies are developing for their internal staff, recognizing that to do good defense you have to understand offense."

Another reason to hire security personnel with an affinity for offense, Paller suggested, is that putting security staff through hacking courses isn't worth the money. "Both for cost savings and for privacy, [companies] like doing internal penetration testing," he said, adding that the exception is when senior leadership or auditors require security testing conducted by outsiders. ®

Hackers waste Xbox One, PS4, MacBook, Pixel, with USB zapper

What would happen if someone sticks this USBBQ into an airplane seat socket? VIDS

Hackers are destroying everything from the latest gaming systems to phones and even cars with a dangerous circuit-frying USB device that could put critical systems at risk.

The -220V USBKill device, developed last year and since refined, is an inconspicuous USB stick that can ruin devices in seconds by delivering continuous power surges through USB ports. [That link, and all others in this story, is to a YouTube video of USBKill at work - Ed]

Unlike malicious USB sticks, which can be safely examined in virtualised or secure environments, USBKill will ruin anything that does not have isolated power protection on its USB ports.

So far hackers with more dollars than cents have murdered top-of-the-line gaming consoles, the Xbox One S and PlayStation 4 Pro, and a Microsoft Surface. One notable lunatic nuked a brand new MacBook Pro, Google Pixel, and a Samsung Galaxy S7 Edge as soon as the top-end devices were unboxed.

The iPad Pro survived the USB barbeque, as did a set of Beats headphones and Apple's iPhone 7 Plus. The Samsung Galaxy Note 7 also - surprisingly - failed to go nova when the same unboxing YouTube psychopath connected it to USBKill.

The opportunity for serious harm extends far beyond wasting high-end consumer products. USBKill's Russian creator, a chap known as "Dark_Purple", says unnamed car manufacturers have purchased his product to evaluate the susceptibility of vehicle USB ports. The hardware hacker plugged USBKill into his own car of unspecified make and model, frying the dashboard head unit.

Chris Gatford, director of Sydney-based penetration testing firm HackLabs, says the threat posed by the devices is unlimited. "USB ports are everywhere - in cars, in power sockets, in charging stations," Gatford says. "And in planes."

There appear to have been no public tests against aircraft USB ports, which could fry connected entertainment and charging systems, if not cause further faults. Gatford says the attacks are possible when vendors take engineering design shortcuts and do not optically isolate the data lines on USB ports. ®