Home Tags Cruise

Tag: Cruise

Open the pod bay doors, Watson: IBM introduces “cognitive rooms”

Will have a "wake word" like Google Home, Amazon Echo—but you can choose it.

Avoid "Hal."

Cadillac Super Cruises to the front with the most advanced semi-autonomous...

Geofenced to highways, it uses head-tracking to know when the driver's distracted.

The best of the 2017 New York International Auto Show

Much to like, including Cadillac's semi-autonomous system and a Ford police hybrid.

Use of biofuel could reduce aviation-related emissions

But burning biofuel still gives off a lot of soot particles.

Up close and personal: Russian spy ship skims edge of US...

It's the latest provocation as Russia's military appears to test Trump.

Can ISPs step up and solve the DDoS problem?

Apply best routing practices liberally. Repeat each morning.

Solve the DDoS problem? No problem. We'll just get ISPs to rewrite the internet.
In this interview Ian Levy, technical director of GCHQ’s National Cyber Security Centre, says it’s up to ISPs to rewrite internet standards and stamp out DDoS attacks coming from the UK.
In particular, they should change the Border Gateway Protocol, which lies at the heart of the routing system, he suggests. He’s right about BGP.
It sucks.

ENISA calls it the “Achilles’ heel of the Internet”.
In an ideal world, it should be rewritten.
In the real one, it’s a bit more difficult. Apart from the ghastly idea of having the government’s surveillance agency helping to rewrite the Internet’s routing layer, it’s also like trying to rebuild a cruise ship from the inside out. Just because the ship was built a while ago and none of the cabin doors shut properly doesn’t mean that you can just dismantle the thing and start again.
It’s a massive ship and it’s at sea and there are people living in it. In any case, ISPs already have standards to help stop at least one category of DDoS, and it’s been around for the last 16 years.

All they have to do is implement it.

Reflecting on the problem

Although there are many subcategories, we can break down DDoS attacks into two broad types.

The first is a direct attack, where devices flood a target with traffic directly. The second is a reflected attack. Here, the attacker impersonates a target by sending packets to another device that look like they’re coming from the target’s address.

The device then sends its reply to the target, unwittingly participating in a DDoS attack that knocks it out. The attacker fools the device by spoofing the source of the IP packet, replacing their own IP address in the packet header's source IP field with the target's address.
It’s like sending a letter in someone else’s name.

The key here is amplification: depending on the type of traffic sent, the response sent to the target can be an order of magnitude greater.
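
To make the idea concrete, here is a back-of-the-envelope sketch in Python. The request and response sizes are illustrative figures for a small DNS query that triggers a large answer, not measurements from any particular attack:

```python
# Rough illustration of DDoS amplification: a small spoofed request
# elicits a much larger response aimed at the victim.
request_bytes = 64       # illustrative size of a spoofed DNS query
response_bytes = 3000    # illustrative size of a large DNS answer (e.g. ANY)

amplification = response_bytes / request_bytes
print(f"Amplification factor: ~{amplification:.0f}x")

# With many reflectors, the attacker's bandwidth is multiplied accordingly.
attacker_mbps = 100
victim_mbps = attacker_mbps * amplification
print(f"{attacker_mbps} Mbps of spoofed queries -> ~{victim_mbps:.0f} Mbps at the target")
```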

ISPs can prevent this by validating source addresses and using anti-spoofing filters that stop packets with incorrect source IP addresses from entering or leaving the network, explains the Mutually Agreed Norms for Routing Security (MANRS). This is a manifesto produced by a collection of network operators who want to make the routing layer more secure by promoting best practices for service providers.

Return to sender

One way to do this is with an existing standard from 2000 called BCP 38. When implemented in network edge equipment, it checks to see whether incoming packets contain a source IP address that's approved and linked to a customer (eg, within the appropriate block of IPs).
If it isn’t, it drops the packet.
Simple.
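
A minimal sketch of that check in Python, using the standard ipaddress module; the customer prefix and packet addresses are made up, and real deployments live in router ACLs or uRPF settings rather than application code:

```python
import ipaddress

# Illustrative customer allocation; a real ISP edge holds this per port or VLAN.
CUSTOMER_PREFIX = ipaddress.ip_network("203.0.113.0/24")

def ingress_permitted(source_ip: str) -> bool:
    """BCP 38-style check: only accept packets whose source address falls
    inside the prefix delegated to this customer-facing interface."""
    return ipaddress.ip_address(source_ip) in CUSTOMER_PREFIX

print(ingress_permitted("203.0.113.42"))   # True: legitimate customer source
print(ingress_permitted("198.51.100.7"))   # False: spoofed source, drop the packet
```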

Corero COO & CTO Dave Larson adds, “If you are not following BCP 38 in your environment, you should be.
If all operators implemented this simple best practice, reflection and amplification DDoS attacks would be drastically reduced.” There are other things that ISPs can do to choke off these attacks, such as response rate limiting.

Authoritative DNS servers are often used as the unwitting dupe in reflection attacks because they send more traffic to the target than the attacker sends to them.

Their operators can limit the number of responses using a mechanism included by default in the BIND DNS server software, for example, which can detect patterns in incoming traffic and limit the responses to avoid flooding a target.
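
The general idea behind response rate limiting can be sketched as a token bucket kept per client and query name. This is only a toy model of the concept, not how BIND's RRL feature is actually implemented or configured:

```python
import time
from collections import defaultdict

RATE = 5    # responses allowed per second for each (client, query name) bucket
BURST = 10  # bucket capacity

_buckets = defaultdict(lambda: {"tokens": BURST, "stamp": time.monotonic()})

def should_respond(client_ip: str, qname: str) -> bool:
    """Refill the bucket based on elapsed time, then spend a token if available.
    When the bucket is empty, the response is dropped (or could be truncated)."""
    bucket = _buckets[(client_ip, qname)]
    now = time.monotonic()
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["stamp"]) * RATE)
    bucket["stamp"] = now
    if bucket["tokens"] >= 1:
        bucket["tokens"] -= 1
        return True
    return False

# A flood of identical queries "from" one victim address is quickly throttled.
allowed = sum(should_respond("192.0.2.10", "example.com") for _ in range(100))
print(f"{allowed} of 100 responses sent")
```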

The Internet of Pings

We'd better sort this out, because the stakes are rising. Thanks to the Internet of Things, we're seeing attackers forklift large numbers of dumb devices such as IP cameras and DVRs, pointing them at whatever targets they want. Welcome to the Internet of Pings. We're at the point where some jerk can bring down the Internet using an army of angry toasters.

The vast range of IP addresses involved also makes it more difficult for ISPs to detect and solve the problem. We saw this with the attack on Dyn in late October, which could well be the largest attack yet, hitting the DNS provider with pings from tens of millions of IP addresses.

Those claiming responsibility said that it was a dry run. Bruce Schneier had already reported someone rattling the Internet's biggest doors. "What can we do about this?" he asked. "Nothing, really."

Well, we can do something. We can implore our ISPs to pull their collective fingers out and start implementing some preventative technology. We can also encourage IoT manufacturers to build better security into their equipment. Let's get to proper code signing later and start by simply avoiding default login credentials. When a crummy malware strain like Mirai takes down half the web using nothing but a pre-baked list of usernames and passwords, you know something's wrong.

How do we persuade IoT vendors to do better? Perhaps some government regulation is appropriate.
Indeed, organizations are already exploring this on both sides of the pond. Unfortunately, politicians move like molasses, while DDoS packets move at the speed of light.
In the meantime, it's going to be up to the gatekeepers to solve the problem voluntarily. ®

Stealing, scamming, bluffing: El Reg rides along with pen-testing ‘red team...

Broad smiles, good suits and fake IDs test security in new dimensions

FEATURE "Go to this McDonald's," Chris Gatford told me. "There's a 'Create Your Taste' burger-builder PC there and you should be able to access the OS.

Find that machine, open the command prompt and pretend to do something important. "I'll be watching you."

Gatford instructed your reporter to visit the burger barn because he practices a form of penetration testing called "red teaming", wherein consultants attack clients using techniques limited only by their imagination, ingenuity, and bravado. He wanted me to break the burger-builder to probe my weaknesses before he would let The Register ride along on a red-team raid aimed at breaking into the supposedly secure headquarters of a major property chain worth hundreds of millions of dollars.

Before we try for that target, Gatford, director of penetration testing firm HackLabs, wants to know if I will give the game away during a social engineering exploit.

Chris Gatford (Image: Darren Pauli / The Register)

So when the McDonald's computer turns out to have been fixed and my fake system administrator act is cancelled, we visit an office building's lobby, where Gatford challenges me to break into a small glass-walled room containing a shabby-looking ATM. I can't see a way into the locked room.
I think I see a security camera peering down from the roof, but later on I'm not sure I did.
I can't think of a way in, and I'm trying so hard to look casual that I know I must look nervous. Time's up.

Gatford is finished with the lobby clerk. He asks how I would get in, and hints in my silence that the door responds to heat sensors. I mutter something stupid about using a hair dryer.

Gatford laughs and reminds me about heat packs you'd slip into gloves or ski boots. "Slide one of those under the crack," he says.

I've failed that test but stayed cool, so Gatford decides he's happy to have me along on a red-team raid, if only because red teams seldom face significant resistance. "At the end of the day, people just want to help," Gatford says.

Red alert

Costume is therefore an important element of a red team raid.

For this raid, our software exploits are suits and clipboards.
Sometimes it's high-visibility tradie vests, hard hats, or anything that makes a security tester appear legitimate. Once dressed for the part, practitioners use social-engineering skills to manipulate staff into doing their bidding.

Fans of Mr Robot may recall an episode where the protagonist uses social engineering to gain access to a highly secure data centre; this is red teaming stylised.

Think of it as a real-world capture the flag where the flags are located in the CEO's office, the guards' office, and highly secure areas behind multiple layers of locked doors. By scoring flags, testers demonstrate the fallibility of physical defences.

Only one manager, usually the CEO of the target company, tends to know an operation is afoot. Limited knowledge, or black-box testing, is critical to examining the real defences of an organisation. Red teamers are typically not told anything outside of the barebones criteria of the job, while staff know nothing at all.
It catches tech teams off guard and can make them look bad.

Gatford is not the only tester forced to calm irate staff with the same social engineering manipulation he uses to breach defences. Red teamers almost always win, pushing some to more audacious attacks. Vulture South knows of one Australian team busted by police after the black-clad hackers abseiled down from the roof of a data centre with Go-Pro cameras strapped to their heads.

Across the Pacific, veteran security tester Charles Henderson tells of how years back he exited a warehouse after a red-teaming job. "I was walking out to leave and I looked over and saw this truck," Henderson says. "It was full of the company's disks ready to be shredded.

The keys were in it." Henderson phoned the CEO and asked if the truck was in-scope, a term signalling a green light for penetration testers.
It was, and if it weren't for a potential call to police, he would have hopped into the cab and driven off. Henderson now leads Dell's new red-teaming unit in the United States, which he also built from the ground up.

"There are some instances where criminal law makes little distinction between actions and intent, placing red teams in predicaments during an assignment, particularly when performing physical intrusion tasks," Nathaniel Carew and Michael McKinnon from Sense of Security's Melbourne office say. "They should always ensure they carry with them a letter of authority from the enterprise."

Your reporter has, over pints with the hacking community, heard many stories of law enforcement showing up during red-team ops. One Australian was sitting off a site staring through a military-grade sniper scope, only to have a cop tap on the window.

Gatford some years ago found himself face-to-face in a small room with a massive industrial furnace while taking a wrong turn on a red-team assignment at a NSW utility. He and his colleagues were dressed in suits.

Another tester on an assignment in the Middle East was detained for a day by AK-47-wielding guards after the CEO failed to answer the phone. Red teamers have been stopped by police in London, Sydney, and Quebec, The Register hears. One of Australia's notably talented red teamers told of how he completely compromised a huge gaming company using his laptop and mobile phone. Whether red teaming on site or behind the keyboard, the mission is the same: breach by any means necessary. Equipment check A fortnight after the ATM incident, The Register is at HackLabs' Manly office.
It's an unassuming and unmarked door that takes this reporter several minutes to spot. Upstairs, entry passes to international hacker cons are draped from one wall, a collection of gadgets on a neighbouring shelf.

Then there's the equipment area.
Scanners, radios, a 3D printer, and network equipment sit beside identity cards sporting the same face but different names and titles.

There's a PwnPlug and three versions of the iconic Wi-Fi Pineapple over by the lockpicks.

A trio of neon hard hats dangle from hooks. "What do you think?" Gatford asks.
It's impressive; a messy collection of more hacking gadgets than this reporter had seen in one place, all showing use or in some stage of construction.

This is a workshop of tools, not toys.

In his office, Gatford revealed the target customer. The Register agrees to obscure the client's name, and any identifying particulars, so the pseudonym "Estate Brokers" will serve.

Gatford speaks of the industry in which it operates, Brokers' clientele, and their likely approach to security. The customer has multiple properties in Sydney's central business district, some housing clients of high value to attackers.
It has undergone technical security testing before, but has not yet evaluated its social engineering resilience.

The day before, Gatford ran some reconnaissance of the first building we are to hit, watching the flow of people in and out of the building from the pavement. Our targets, he says, are the bottlenecks like doors and escalators that force people to bunch up.

He unzips a small suitcase revealing what looks like a large scanner, with cables and D-cell batteries flowing from circuit boards. "It's an access card reader", Gatford says.
It reads the most common frequencies used by the typically white rigid plastic door entry cards that dangle from staffer waists.

There are more secure versions that this particular device does not read without modification. "No one uses the secure stuff, mate," Gatford says with the same half-smile worn by most in his sector when talking about the pervasive unwillingness to spend on security. I point to a blue plastic card sleeve that turns out to be a SkimSAFE FIPS 201-certified anti-skimming card protector.

Gatford pops an access card into it and waves it about a foot in front of the suitcase-sized scanner.
It beeps and card number data flashes up on a monitor. "So much for that," Gatford laughs.

He taps away at his Mac, loading up Estate Brokers' website. "We'll need employee identity cards or we'll be asked too many questions," Gatford says. We are to play the role of contractors on site to conduct an audit of IT equipment, so we will need something that looks official enough to pass cursory inspection. The company name and logo are copied over, a mug shot of your reporter is snapped, and both are printed on a laminated white identity card.

Gatford does the same for himself. We're auditors come to itemise Estate Brokers' security systems and make sure everything is running. "We should get going," he says as he places hacking gear into a hard shell suitcase.
So off we go.

Beep beep beep beepbeepbeep

Our attack was staged in two parts over two days.

Estate Brokers has an office in a luxurious CBD tower. We need to compromise that in order to breach the second line of defences. We'll need an access card to get through the doors, however, and our laptop-sized skimmer, which made a mockery of the SkimSAFE gadget, will be the key. It is 4:32pm and employees are starting to pour out of the building.

Gatford hands me the skimmer concealed in a very ordinary-looking laptop bag. "Go get some cards," he says. Almost everyone clips access cards on their right hip.
If I can get the bag within 30cm of the cards, I'll hear the soft beep that signals a successful read, a sound I've been training my ear to detect. Maybe one in 20 wear their access cards like a necklace. "Hold your bag in your left hand, and pretend to check the time on your watch," Gatford says.

That raises the scanner high enough to get a hit. I'm talking to no one on my mobile as I clumsily weave in and out of brisk walking staff, copping shade from those whose patience has expired for the day.

Beep.

Beep.

Beep, beep, beep, beep, beepbeepbeepbeep.

There are dozens of beeps, far too many to count.

Then we enter a crowded lift and it's like a musical.
It's fun, exhilarating stuff.

The staff hail from law firms, big tech, even the Federal Government.

And we now have their access cards. Estate Brokers is on level 10, but we need a card to send the lift to it. No matter, people just want to help, remember? The lady in the lift is more than happy to tap her card for the two smiling blokes in suits.

Gatford knows the office and puts me in front. "Walk left, second right, second left, then right." I recite it. With people behind us, I walk out and start to turn right, before tightening, and speeding up through the security door someone has propped open. We enter an open-plan office. "They are terrible for security," I recall Gatford saying earlier that day.
It allows attackers to walk anywhere without the challenge of doors. Lucky for us.

Gatford takes the lead and we cruise past staff bashing away their final hour in cubicles, straight to the stationery room. No one is there as Gatford fills a bag with letterheads and branded pens, while rifling through for other things that could prove useful.

We head back to the lobby for a few more rounds of card stealing. Not all the reads come out clean, and not all the staff we hit are from Estate Brokers, so it pays to scan plenty of cards.

"Look out for that guard down there," Gatford says, indicating the edge of the floor where a security guard can be seen on ground level. "Tell you what, if you can get his card, I'll give you 50 bucks."

"You're on," I say. The guard has his card so high on his chest it is almost under his chin.

At this point I think I'm unbeatable, so after one nerve-cooling circuit on the phone, I walk up to him, checking my watch with my arm held so high I know I look strange.
I don't care, though, because I figure customer service is a big thing in the corporate world and he'll keep his opinions to himself.
I ask him where some made-up law firm is as I hear the beep.

Silver tongue

It is 8:30am the next day and I am back in Gatford's office. We peruse the access cards. He opens up the large text file dump of yesterday's haul and tells me what the data fields represent. "These are the building numbers; they cycle between one and 255, and these are the floor numbers," he says.

There are blank fields and junk characters from erroneous scans. He works out which belong to Estate Brokers and writes them to blank cards.

They work. More reconnaissance.

Estate Brokers has more buildings that Gatford will test after your reporter leaves. He fires up Apple Maps and Google Maps Street View. With the eyes of a budding red teamer, I am staggered by the level of detail they offer.

Apple is great for external building architecture, like routing pathways across neighbouring rooftops, Gatford says, while Google lets you explore the front of buildings for cameras and possible sheltered access points.
Some mapping services even let you go inside lobbies.

Today's mission is to get into the guards' office and record the security controls in place.
If we can learn the name and version of the building management system, we've won.

Anything more is a bonus for Gatford's subsequent report. We take the Estate Brokers stationery haul along with our access cards and fake identity badges and head out to the firm's second site.

But first, coffee in the lobby. We chat about red teaming, about how humans are always the weakest link. We eat and are magnanimous with the waiting staff.

Gatford gets talking to one lady and says how he has forgotten the building manager's name. "Jason sent us in," he says, truthfully. Jason is the guy who ordered the red team test, but we don't have anything else to help us.

The rest is up to Gatford's skills. It takes a few minutes for the waitress to come back.

The person who she consulted is suspicious and asks a few challenging questions. Not to worry, we have identity cards and Gatford is an old hand.
I quietly muse over how I would have clammed up and failed at this point, but I'm happily in the backseat, gazing at my phone. We use the access cards skimmed the day earlier to take the lift up to an Estate Brokers level.
It is a cold, white corridor, unkempt, and made for services, not customers.

There's a security door, but no one responds to our knocks.

There are CCTV cameras. We return down to the lobby. Michael is the manager Gatford had asked about. He is standing at the lifts with another guy, and they greet us with brusque handshakes, Michael's barely concealed irritation threatening to boil over in response to our surprise audit. He rings Jason, but there's no answer.
I watch Gatford weave around Michael's questions and witness the subtle defusing of the situation.
It's impressive stuff. Michael says the security room is on the basement level, so we head back into the lift and beep our way down with our cards.

The basement is dimly lit and lined with dank, white concrete. We spy the security room beaming with CCTV. "Don't hesitate, be confident," Gatford tells me. We stride towards the door, knock, and Gatford talks through the glass slit to the guard inside. Gatford tells him our story. He's a nice bloke, around 50 years old, with a broad smile.

After some back-and-forth about how Jason screwed up and failed to tell anyone about the audit, he lets us in. My pulse quickens as Gatford walks over to a terminal chatting away to the guard.

There are banks of CCTV screens showing footage from around the building.

A pile of access cards.
Some software boxes. I hear the guard telling Gatford how staff use remote desktop protocol to log in to the building management system, our mission objective. "What version?" Gatford asks. "Uh, 7.1.
It crashes a lot." Bingo.

Day one, heading up in a crowded lift. Shot with a pen camera.

I look down and there are logins scrawled on Post-it notes. Of course.
I snap a few photos while their backs are turned. Behind me is a small room with a server rack and an unlocked cabinet full of keys.
I think Gatford should see it so I walk back out and think of a reason to chat to the guard.
I don't want to talk technology because I'm worried my nerves will make me say something stupid.
I see a motorbike helmet. "What do you ride?" I ask. He tells me about his BMW 1200GS. Nice bike.
I tell him I'm about ready to upgrade my Suzuki and share a story about a recent ride through some mountainous countryside. Gatford, meanwhile, is out of sight, holed up in the server room snapping photos of the racks and keys. More gravy for the report. We thank the guard and leave.
I feel unshakably guilty.

From the red to the black

Gatford and I debrief over drinks, a beer for me, single-malt whiskey for him. We talk again about how the same courtesy and acquiescence to the customer that society demands creates avenues for manipulation. It isn't just red teamers who exploit this; their craft is essentially ancient grifts and cons that have ripped off countless gullible victims, won elections, or made spear phishing a viable attack.

I ask Gatford why red teaming is needed when the typical enterprise fails security basics, leaving old application security vulnerabilities in place, forgetting to shut down disused domains, and relying on checkbox, compliance-driven audits that are known bad practice. "You can't ignore one area of security just to focus on another," he says. "And you don't do red teaming in isolation."

Carew and McKinnon agree, adding that red teaming is distinct from penetration testing in that it is a deliberately hostile attack through the easiest path to the heart of organisations, while the latter shakes out all electronic vulnerabilities. "Penetration testing delivers an exhaustive battery of digital intrusion tests that find bugs from critical, all the way down to informational... and compliance problems and opportunities," they say in a client paper detailing aspects of red teaming [PDF]. "In contrast, red teaming aims to exploit the most effective vulnerabilities in order to capture a target, and is not a replacement for penetration testing as it provides nowhere near the same exhaustive review."

Red teaming, they say, helps organisations to better defend against competitors, organised crime, and even cops and spies in some countries. Gatford sells red teaming as a package.

Australia's boutique consultancies, and those across the ditch in New Zealand, pride themselves on close partnerships with their clients.

They point out the holes, and then help to heal.

They offer mitigation strategies, harass vendors for patches, and help businesses move bit by bit from exposed to secure. For his part, Gatford is notably proud of his gamified social engineering training, which he says is designed to showcase the importance of defence against the human side of security, covering attacks like phishing and red teaming. He's started training those keen on entering red teaming through a three-day practical course. "Estate Brokers", like others signing up for this burgeoning area of security testing, will go through that training.

Gatford will walk staff through how he exploited their kindness to breach the secure core of the organisation. And how the next time, it could be real criminals who exploit their willingness to help. ®

Long-range projectiles for Navy’s newest ship too expensive to shoot

The USS Zumwalt (DDG-1000), commissioned in Baltimore in October.
Its two AGS guns depend on projectiles too expensive to pass a Navy gut-check. US Navy

The USS Zumwalt (DDG-1000) is the US Navy's latest warship, commissioned just last month—and it comes with the biggest guns the Navy has deployed since the twilight of the battleships.

But it turns out the Zumwalt's guns won’t be getting much of a workout any time soon, aside from acceptance testing.

That’s because the special projectiles they were intended to fire are so expensive that the Navy has canceled its order. Back when it was originally conceived, the Zumwalt was supposed to be the modern-day incarnation of the big-gunned cruisers and battleships that once provided fire support for Marines storming hostile beaches.

This ability to lob devastating volleys of powerful explosive shells deep inland to take out hardened enemy positions, weapons, and infrastructure was lost after the Gulf War’s end, when the last of the Iowa-class battleships were retired.

To bring it back, the Zumwalt’s design included a new gun, the Advanced Gun System (AGS).

As we described it in a story two years ago: The automated AGS can fire 10 rocket-assisted, precision-guided projectiles per minute at targets over 100 miles away.

Those projectiles use GPS and inertial guidance to improve the gun's accuracy to a 50 meter (164 feet) circle of probable error—meaning that half of its GPS-guided shells will fall within that distance from the target.
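
As a quick sanity check on what "circle of probable error" means, here is a small Monte Carlo sketch in Python; the 50 m figure comes from the article, while the circular Gaussian dispersion model and the conversion factor are standard textbook assumptions, not Navy data:

```python
import math
import random

CEP_M = 50.0              # circular error probable quoted for the LRLAP
SIGMA = CEP_M / 1.1774    # for a circular Gaussian, CEP is roughly 1.1774 * sigma

random.seed(0)
shots = 100_000
inside = sum(
    1
    for _ in range(shots)
    if math.hypot(random.gauss(0, SIGMA), random.gauss(0, SIGMA)) <= CEP_M
)
print(f"{inside / shots:.1%} of simulated shells land within {CEP_M:.0f} m")  # ~50%
```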

The projectile responsible for that accuracy—something far too complex to just be called a "shell" or "bullet"—is the Long Range Land-Attack Projectile (LRLAP). Each projectile has precision guidance provided by internal global positioning and inertial sensors, and bursts of LRLAPs could in theory be fired over a minute following different ballistic trajectories that cause them to land all at the same time.

A Lockheed Martin image of the LRLAP.

Lockheed Martin won the competition to produce the LRLAPs, and the company described their capabilities thusly:

155mm LRLAP provides single strike lethality against a wide range of targets, with three times the lethality of traditional 5-inch naval ballistic rounds—and because it is guided, fewer rounds can produce similar or more lethal effects at less cost. LRLAP has the capability to guide multiple rounds launched from the same gun to strike single or multiple targets simultaneously, maximizing lethal effects.

The "less cost" part, however, turned out to be a pipe dream. With the reduction of the Zumwalt class to a total of three ships, the corresponding reduction in requirements for LRLAP production raised the production costs just as the price of the ships they would be deployed to soared.

Defense News reports that the Navy is canceling production of the LRLAP because of an $800,000-per-shot price tag—more than 10 times the original projected cost.

By comparison, the nuclear-capable Tomahawk cruise missile costs approximately $1 million per shot, while the M712 Copperhead laser-guided 155-millimeter projectile and M982 Excalibur GPS-guided rounds cost less than $70,000 per shot.

Traditional Navy 5-inch shells cost no more than a few hundred dollars each. In theory, the Army's Copperhead or Excalibur rounds could be adapted to the AGS, because the gun is the same bore-size and is essentially a sea-based howitzer—it fires at a higher angle than previous naval guns and is designed strictly for firing at land targets.

The Excalibur has been used successfully in combat against targets more than 20 miles away.

The Navy is reportedly looking at the Excalibur as one option, as well as the Hyper Velocity Projectile (HVP)—a projectile being developed by BAE Systems under contract with the Office of Naval Research for use both in traditional powder-fired guns and a future Navy electromagnetic railgun system. In the long run, the HVP will likely win out—that is, if the Zumwalt is successfully fitted with a railgun.

The ship’s all-electric design was created with the intention of being compatible with high-energy electrical weapons (like railguns) once they're generally available, and the HVP would be the obvious next step.

Exclusive: Our Thai prison interview with an alleged top advisor to...

BANGKOK, Thailand—Few people were watching when the prison truck doors swung open at Ratchada Criminal Court to reveal a 55-year-old Canadian inmate.

But there he was: Roger Thomas Clark, the man accused of being "Variety Jones," notorious dope dealer and top advisor to Silk Road founder Ross "Dread Pirate Roberts" Ulbricht.

Clark entering court.

Clark did the perp-walk, shuffling unchained and unnoticed past the Bangkok press brigade, which was focused that day on the trial of an accused Spanish murderer.

Accompanied by a lone Thai corrections officer in a sand-coloured uniform, Clark was led to the eighth floor and was greeted by his team of lawyers and interpreters. Clark was here to battle extradition to America and a possible life sentence on charges of narcotics conspiracy and conspiracy to commit money laundering.

But face-to-face, whether in a Thai court or a prison, Clark appeared unfazed by the powerful forces seeking him for a trial on the other side of the planet. Though acknowledging that his odds of beating extradition are slim, Clark remained in high spirits during his July day-trip to the courthouse. He even slipped in a brag or two on the way. “Normally a senior person signs an extradition order, but my order was signed and stamped by John Kerry,” he said, adding that the order came with a blue silk ribbon. “Very few people ever have an extradition signed by John Kerry.” (In the past, Clark has proven to be an eccentric interviewee who has made bold, unsubstantiated claims, such as having access to helicopters and being guarded by members of the Thai Tourist Police, the Khmer Palace Guard, and the Vietnamese Special Forces.) Clark is fighting for his life any way he knows how.

But one thing he’s sure of: he won’t go down like Ulbricht, laptop open and unencrypted.

During a series of recent interviews from prison, Clark bragged about how his machines, when seized by Thai police last year, were all cryptographically secured.

Bangkok Remand Prison, where Clark is being held as he awaits the outcome of his extradition hearing. Sam Cooley

"They found my three notebooks closed and encrypted"

Silk Road functioned for years as a sort of "Amazon.com for drugs." Equipped with the proper software, users around the world could log into Silk Road and cruise through hundreds of drug listings, read reviews, and decide to purchase a kilogram of heroin off someone named "BigDaddy24"—all without leaving their bedrooms.

During its lifetime, from 2011 to 2013, Silk Road's user base exploded. Ulbricht eventually had to hire administrators to keep things running smoothly—and Clark is believed to have been one of the most important.

In 2013, Ulbricht was captured red-handed in a San Francisco library with his laptop open and logged into Silk Road—and on that laptop was a photograph of Clark. (To this day, the photograph functions as one of the few public pieces of evidence linking Clark to the "Variety Jones" name.) Also on Ulbricht's computer was a 2011 journal entry paying tribute to Variety Jones' influence on Silk Road. "He has helped me better interact with the community around Silk Road, delivering proclamations, handling troublesome characters, running a sale, changing my name, devising rules, and on and on," Ulbricht wrote. "He also helped me get my head straight regarding legal protection, cover stories, devising a will, finding a successor, and so on. He's been a real mentor."

This evidence, in part, led investigators to suggest that Clark was in fact Variety Jones and that he had advised Ulbricht "on all aspects of the [Silk Road], including how to maximize profits and use threats of violence to thwart law enforcement," according to a press release issued after Clark's arrest in Thailand.

On the Internet, Variety Jones came across as a bit of a tough guy.

According to seized chat logs, Jones may have been instrumental to Ulbricht’s decision to commission the killing of one of his workers whom he believed had defected. (The “killing” was actually faked by a corrupt—and now-convicted—DEA agent.) That toughness came through in prison, where Clark periodically receives visitors. When the buzzers rang at the visitation segment of Bangkok Remand Prison this June, Clark took a seat at a row of telephones to discuss his predicament during a series of interviews with co-author Sam Cooley. (Disclosure: Cooley purchased two containers of Pringles and three cartons of soy milk for Clark before one interview.) “Guilt is a technical term,” Clark said, adding that he won’t be taken by the FBI the same way Ulbricht was in 2013. “They don’t have shit on me.
I’m not going [to the US].
It’s an impossible circumstance.” “They might have caught Ross with his notebook opened, as they claim, but they found my three notebooks closed and encrypted,” Clark added, claiming his home was raided without a warrant on the Thai island of Koh Chang in December 2015. “Forensics could spend 30 years trying to decrypt those hard drives and still not get anywhere; so in a way, those hard disks are a headache,” he said. “The longer they need to open them, the longer I can relax here in Bangkok.

They would rather deny that they seized all this evidence."

For the past 20 years, Clark says he's been living internationally—though most recently on the concrete floor of the jail, where he's been held for the past nine months. Clark shook his head when asked if he was mistreated. He laughed, saying the only people who complain about the conditions are foreigners—and that he wasn't about to do so over a jail telephone. "My chances of survival are zero if I go to the US," he added.

Clark also repeated a previous claim to have knowledge about a so-far undiscovered dirty FBI agent—information which he said he's keeping "under (his) hat" until the right opportunity presents itself.

A Thai prison guard. Sam Cooley

"39 words exactly"

During Clark's July appearance at Ratchada court, an officer of Thailand's Ministry of Foreign Affairs functioned as a liaison between the US government and its Thai counterparts. Discussion in court that day—all of it in Thai, which was interpreted into English by co-author Akbar Khan—revolved around domain registration and whether the prosecution could provide information about the official registrant of the Silk Road domain name.

Given the complexities of Silk Road’s operations, which formerly existed in the semi-public darknet, prosecutors were forced to concede they did not have a copy of the domain registry. Clark’s defence team responded by launching a barrage of strategic questions which could, at the least, prolong the extradition process.
Shortly afterwards, the court session concluded and Clark was shuffled back to prison. (The hearing was attended by only one other person, a slick-looking Chinese man who described himself as a law student.) As for Clark's newest gambit to save himself from extradition, it comes right out of a spy movie. He said that he recently requested a meeting with an intelligence official close to Thailand’s Prime Minister, Prayut Chan-ocha, because Clark has “top secret information” for the military government. “I am going to write (the information) on a piece of paper for them and hand it to them to read.
It’s not even going to be 40 words; it’s just going to be 39 words. 39 words exactly,” he told me. “The deal can only be done within six days after the verdict has been read, and I have no idea how long this is going to drag on for.” Freelance journalist Sam Cooley tweets at @samcooley. Listing image by Sam Cooley

The World Series of Hacking—without humans

LAS VEGAS—On a raised floor in a ballroom at the Paris Hotel, seven competitors stood silently.

These combatants had fought since 9:00am, and nearly $4 million in prize money loomed over all the proceedings. Now, some 10 hours later, their final rounds were being accompanied by all the play-by-play and color commentary you'd expect from an episode of American Ninja Warrior. Yet no one in the competition showed signs of nerves. To observers, this all likely came across as odd—especially because the competitors weren't hackers; they were identical racks of high-performance computing and network gear.

The finale of the Defense Advanced Research Projects Agency's Cyber Grand Challenge, a DEFCON game of "Capture the Flag," is all about the "Cyber Reasoning Systems" (CRSs).

And these collections of artificial intelligence software armed with code and network analysis tools were ready to do battle. Inside the temporary data center arena, referees unleashed a succession of "challenge" software packages.

The CRSs would vie to find vulnerabilities in the code, use those vulnerabilities to score points against competitors, and deploy patches to fix the vulnerabilities.

Throughout the whole thing, each system had to also keep the services defined by the challenge packages up and running as much as possible. And aside from the team of judges running the game from a command center nestled amongst all the compute hardware, the whole competition was untouched by human hands. Greetings, Professor Falken < data-sub-html="#caption-937151"> Waiting to rumble—the seven AI supercomputing hacker "bots" and their supporting cast hum quietly in the Paris Hotel ballroom Sean Gallagher < data-sub-html="#caption-937125"> Some of the 7 supercomputers required just to keep tabs on the participating systems. < data-sub-html="#caption-937145"> The Airgap robot allows scoring data to be passed to DARPA's visualization team by moving burned Blu-Ray discs from one side to the other. < data-sub-html="#caption-937149"> It's time to rumble. Well, actually, they started 10 hours ago...it's just time for the show. < data-sub-html="#caption-937135"> The crowd watches as the show gets underway. Daniel Tkacik, Ph.D, Carnegie Mellon University The Cyber Grand Challenge (CGC) was based on the formula behind the successful Grand Challenges that DARPA has funded in areas such as driverless vehicles and robotics.

The intent is to accelerate development of artificial intelligence as a tool to fundamentally change how organizations do information security. Yes, in the wrong hands such systems could be applied to industrial-scale discovery and weaponization of zero-days, giving intelligence and military cyber-operators a way to quickly exploit known systems to gain access or bring them down.

But alternatively, systems that can scan for vulnerabilities in software and fix them automatically could, in the eyes of DARPA director Arati Prabhakar, create a future free from the threat of zero-day software exploits.
In such a dream world, "we can get on with the business of enjoying the fruits of this phenomenal information revolution we're living through today," she said.

For now, intelligent systems—call them artificial intelligence, expert systems, or cognitive computing—have already managed to beat humans at increasingly difficult reasoning tasks with a lot of training.

Google's AlphaGo beat the world's reigning Go master at his own game.

An AI called ALPHA has beaten US Air Force pilots in simulated air combat.

And, of course, there was that Jeopardy match with IBM's Watson. But those sorts of games have nothing on the cutthroat nature of Capture The Flag—at least as the game is operated by the Legitimate Business Syndicate, the group that oversees the long-running DEFCON CTF tournament.

This was the culmination of an effort that began in 2013, when DARPA's Cyber Grand Challenge program director Mike Walker began laying the groundwork for the competition. Walker was a computer security researcher and penetration tester who had competed widely in CTF tournaments around the world. He earned this project after working on a "red team" that performed security tests on a DARPA prototype communications system. After a 2012 briefing he gave the leadership of DARPA's Information Innovation Office (I2O) on vulnerability detection and patching, the I2O leadership had one thought. "Can we do this in an automated fashion?" I2O deputy director Brian Pierce told Ars. "When it comes down to cyber operations, everything operates on machine time—the question was could we think about having the machine assist the human in order to address these challenges."

The same question was clearly on the minds of Defense Department leaders, particularly at the US Cyber Command with its demand for some way to "fight the network." In 2009, Air Force Gen. Kevin Chilton, then commander of the Strategic Command, said, "We need to operate at machine-to-machine speeds…we need to operate as near to real time as we can in this domain, be able to push software upgrades automatically, and have our computers scanned remotely."

Walker saw an opportunity to push forward what was possible by combining the CTF tournament model with DARPA's "Grand Challenge" experience. He drew heavily from then-deputy I2O director Norm Whitaker's experience running the DARPA self-driving vehicle grand challenges from 2004 to 2007. "We learned a lot from that," said Pierce.

But even with a template to follow, "a lot of things had to be built from scratch." Those things included the creation of a virtual arena in which the competitors could be fairly judged against each other—one that was a vastly simplified version of the real world of cybersecurity so competitors focused on the fundamentals. That was the same model DARPA followed in its initial self-driving vehicle challenges, as Walker pointed out at DEFCON this month.

The winning vehicle of the 2005 DARPA Grand Challenge, a modified Volkswagen Touareg SUV named "Stanley," "was not a self-driving car by today's standards," said Walker. "It was filled with computing and sensor and communications gear.
It couldn't drive on our streets, it couldn't handle traffic…it couldn't do a lot of things.

All the same, Stanley earned a place in the Smithsonian by redefining what was possible, and today vehicles derived from Stanley are driving our streets." Similarly, the Cyber Grand Challenge devised by Walker and DARPA didn't look much like today's world of computer security.

The systems would "work only on very simple research operating systems," Walker said. "They work on 32-bit native code, and they spent a huge amount of computing power to think about the security problems of small example services. The complex bugs they found are impressive, but they're not as complex as their real-world analogues, and a huge amount of engineering remains to be done before something like this guards the networks we use."

Strange game
A highlight reel from DARPA's Cyber Grand Challenge finale.

The "research operating system" built by DARPA for the CGC is called DECREE (the DARPA Experimental Cyber Research Evaluation Environment).
It was purpose-built to support playing Capture the Flag with an automated scoring system that changes some of the mechanics of the game as it is usually played by humans. There are many variations on CTF.

But in this competition, the "flag" to be captured is called a Proof of Vulnerability (POV)—an exploit that successfully proves the flaw on opponents' servers.

Teams are given "challenge sets," or pieces of software with one or sometimes multiple vulnerabilities planted in them, to run on the server they are defending.

The teams race to discover the flaw through analysis of the code, and they can then score points either by patching their own version of the software and submitting that patch to the referee for verification or by using the discovered exploit to hack into opponents' systems and obtain a POV.

The problem with patching is that once a team submits a patch, that patch becomes available for everyone else to use. Patching also generally means bringing down the "challenge set" code briefly to apply the patch.
It's a risk for competitors: if the patch fails and the software doesn't work properly, that counts against your score.

Based on 32-bit Linux for the Intel architecture, DECREE only supports programs compiled to run in the Cyber Grand Challenge Executable Format (CGCEF)—a format that supports a much smaller number of possible system calls than used in software on general-purpose operating systems.

CGCEF also comes with tools that allow for quick remote validation that software components are up and running, debugging and binary analysis tools, and an interface to throw POVs at the challenge code either as XML-based descriptions or as C code. So just as with the human version of CTF, each of the CRSs was also tasked with defending a "server"—in this case, running instances of DECREE.

The AI of each bot controlled the strategy used to analyze the code and the creation of potential POVs and patches.

AIs made decisions based on the strategy they were trained with plus the state of the game, adapting to either submit patches (and share them with everyone else as a result), create a network-based defense to prevent exploits from landing, or go on the attack and prove vulnerabilities on other systems. While teams could score points based on patches submitted, successfully deflected attacks, and POVs scored against the other teams' systems, those points were multiplied by the percentage of time their copies of the challenge sets were available. Patching would mean losing availability, as the challenge sets would have to go down to be patched.

That meant possibly giving up the chance to exploit the bug against others to score more points.
Setting up an intrusion detection system to block attacks could also affect availability—especially if it was set wrong and it blocked legitimate traffic coming in.
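
A simplified sketch of how such an availability multiplier shapes the incentives; the point values and formula here are illustrative, not DARPA's actual scoring rules:

```python
def round_score(patch_points, defense_points, pov_points, availability):
    """Illustrative CGC-style scoring: offensive and defensive points are
    scaled by the fraction of the round the challenge set stayed available."""
    return (patch_points + defense_points + pov_points) * availability

# Patching early costs availability, so the points earned are scaled down...
print(round_score(patch_points=30, defense_points=10, pov_points=20, availability=0.80))  # 48.0
# ...while staying up and skipping the patch keeps the multiplier at 1.0.
print(round_score(patch_points=0, defense_points=10, pov_points=20, availability=1.00))   # 30.0
```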

All of this makes for a complex set of game strategies in CTF, and that really tests the flexibility of the AI controlling the bots.

One of the major differences between human CTF and the DARPA version of the game was its pace.
In a typical human CTF tournament, only about 10 challenge sets would be posted over the course of a two-day tournament.
In the qualifying round of the Cyber Grand Challenge, held last August at DEFCON, there were 131 challenge sets with a total of 590 vulnerabilities.
In total, 28 systems made the first cut over the course of 24 hours—so they handle a challenge about once every 10 minutes.
In the finale of the Cyber Grand Challenge, there would be 100—but they would be posted to systems at the rate of one challenge every five minutes.

Another difference in this event was how the "offensive" part of the CTF worked. Rather than launching their POVs directly at competitors' servers to try to score, the POVs were submitted to and launched by the referee system.

That way, the success or failure of any POV could be instantly assessed by the scoring system, just as any patch submitted by competitors could be independently evaluated.

All of the action recorded by the referee system could then be played back through a set of visualization tools to show the results of each round. On top of all this was the hardware itself.

CTF tournaments generally look like rooms full of people huddled at tables around laptops.

The CGC version of the game required the construction of a portable data center, assembled in Las Vegas: an "air-gapped" network of 15 supercomputing systems, each with 128 2.5 GHz Intel Xeon processors (totaling over 1000 processor cores) and 16 terabytes of RAM. Physically disconnected from any outside network, the only way data left the "arena" network was via a robotic arm that passed Blu-ray discs burned with scoring data from one tray to another. Seven of the identical supercomputing racks ran the AI-powered "bots." Seven more were dedicated to running the match itself—handling the deployment of the challenge sets, verifying POVs and patches, performing forensic analysis, and tracking the score.

The last system acted as a sparring partner for the competitors in warm-ups.

The whole raised-floor rig was cooled by water piped in from three industrial chillers sitting on the Paris Convention Center's loading dock; it drew 21 kilowatts of power over cables snaked in from outside. Listing image by Sean Gallagher

Report to the game grid, program

Gallery captions: Members of the seven teams behind the battling hacker AIs relax as the battle begins, redundant for now. Some of the supercomputers required just to keep tabs on the participating systems. A DARPA slide explains the visualization system for scoring the Cyber Grand Challenge. A view of the "card" for a participant in the CGC, showing active "challenge sets" and their level of security. Each round gets visualized like a game of Missile Command, with incoming "proof of vulnerability" attacks color-coded to the team they're coming from. The Cyber Grand Challenge scoreboard after a few rounds favors DeepRed from Raytheon... but ForAllSecure's Mayhem soon pulls ahead for good.

When the Cyber Grand Challenge was first announced in 2014, 104 teams of security researchers and developers registered to take on the challenge of building systems that could compete in a Capture the Flag competition. Of them, 28 teams completed a "dry run"—demonstrating that they could find software flaws in new code and interact with the CTF game system. Those 28 battled in the first-ever artificial intelligence CTF competition—last year's first round of the CGC held at DEFCON 2015.

The 131 challenge sets were the most ever used in a single CTF event.
In that first full run, several competitors were able to detect and patch bugs in individual software packages in less than an hour.

All of the 590 bugs introduced into the "challenge" software packages used in the competition were patched by at least one of the competing systems during the match. The seven finalists were given a budget of $750,000 to prepare their systems for this year's competition.

They would need it: the final round would not only bring on challenges at twice the speed of the first round, but it would include more difficult challenge sets.
Some were even based on "historic" vulnerabilities such as the Morris Worm, SQL Slammer, and Heartbleed.

The most challenging of these, in the minds of the team behind the DARPA CTF, was a reproduction of the Sendmail bug known as Crackaddr.

This exploit took advantage of a bug that defied the usual types of static analysis of code. The final seven teams brought a mix of skills to the game: Dr.

David Brumley, CEO of the security start-up ForAllSecure and director of Carnegie Mellon University's CyLab, sent a team led by CMU doctoral student Alexandre Rebert. Most of the team were also members of CMU's Plaid Parliament of Pwning CTF team, which has participated in DEFCON's human CTF tournaments for a decade. TechX featured a team made up of engineers from the software assurance company GrammaTech and researchers from the University of Virginia. ShellPhish was an academic team from the University of California-Santa Barbara with deep experience in human CTF tournaments. DeepRed came from Raytheon's Intelligence, Information and Services division and was led by Mike Stevenson, Tim Bryant, and Brian Knudson. CodeJitsu was a team of researchers from UC Berkeley, Syracuse University, and the Swiss company Cyberhaven. CSDS featured a two-person team of researchers from the University of Idaho. Disekt was led by University of Georgia professor Kang Li, also a CTF veteran. It turned out ForAllSecure's human-winning CTF experience paid off. ForAllSecure's bot was named Mayhem after the symbolic execution analysis system developed by CMU researchers.
It almost ran away with the match early on, eventually finishing at the front of the pack with 270,042 points to win the $2 million prize for first place. Brumley told Ars right after the victory that part of the reason Mayhem was able to succeed was that the team intentionally avoided using any sort of intrusion detection.
Instead, they focused on attacks and patching. "We did everything as software security, not network security," he explained. "For intrusion detection, we did zero.

Every scenario we ran through, an IDS slowed us down. We weren't dedicating all the cores to offense or defense—we dedicated more to deciding what the right thing to do was." That additional computing power gave the AI the resources to do more testing before making a decision.

"We have a lot of patch strategies, and we choose which one we use depending on where we are in the competition," Brumley explained. Mayhem made patching decisions in part by using some of its processing cores to run multiple versions of the patch. "We test all the different patches we generate," Brumley said. "We use the AI to select the best one. We had actually two fundamentally different approaches to patching—one was a hot patch, where if we detected a vulnerability, we would fix it specifically; the other was more agnostic or general patches that would fix a broad range of things. We had a bunch of candidate patches [pre-built].

The executive system would run the two different patches on parallel cores to see how they performed."

To decide when to patch, Mayhem used an expert system that looked at information about the state of the game, including how Mayhem was doing relative to competitors on the scoreboard.

The AI "executive" was "running modules through an expert system where we had different weights based on where we were," said Brumley. "If we were behind, we would switch strategies.
If we were getting exploited a lot, we would switch to a more heavy defense."
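
A rough sketch of that kind of state-weighted decision logic follows; the states, weights, and thresholds here are invented for illustration and are not ForAllSecure's actual code:

```python
# Toy "executive": pick a strategy by weighting game state, loosely in the
# spirit of the expert system Brumley describes.
def choose_strategy(score_gap, exploits_against_us, rounds_left):
    weights = {"attack": 0.0, "patch": 0.0, "defend": 0.0}
    if score_gap < 0:                 # we're behind: take more risks
        weights["attack"] += 2.0
    if exploits_against_us > 3:       # bleeding points: shore up the defense
        weights["defend"] += 3.0
        weights["patch"] += 1.5
    if rounds_left < 10:              # late game: gamble if behind, protect if ahead
        weights["attack" if score_gap < 0 else "patch"] += 1.0
    weights["attack"] += 0.5          # small default bias toward offense
    return max(weights, key=weights.get)

print(choose_strategy(score_gap=-5000, exploits_against_us=1, rounds_left=40))  # attack
print(choose_strategy(score_gap=8000, exploits_against_us=6, rounds_left=5))    # defend
```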

To find those vulnerabilities in the first place, Mayhem had two separate analytical components: Sword (for offensive base analysis of code, seeking exploits) and Shield (an analysis tool for creating patches).

To find bugs in the challenge sets, the system used a mix of old school brute force "fuzzing" and a technique called symbolic execution. Fuzzing is essentially throwing random inputs at software to see what makes it crash, and it's the most common way vulnerabilities are found. Mayhem's fuzzing analysis was built on AFL, developed by Michal Zalewski (also known as lcamtuf). "We took that as sort of the base idea," Brumley explained. "Then we built on a variety of techniques, first to make fuzzing faster.
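
A toy illustration of the dumb-fuzzing idea in Python; the buggy parser below is contrived so that random inputs will eventually trip it, which is all a fuzzer is really looking for:

```python
import random

def parse_record(data: bytes):
    """Contrived parser with a planted bug: a length byte is trusted blindly."""
    if len(data) < 2:
        raise ValueError("too short")
    length = data[0]
    body = data[1:]
    if length > len(body):
        # Trusting the length field: in C this would be a buffer over-read.
        raise IndexError("out-of-bounds read")
    return body[:length]

random.seed(1)
for attempt in range(10_000):
    fuzz_input = bytes(random.randrange(256) for _ in range(random.randrange(1, 8)))
    try:
        parse_record(fuzz_input)
    except IndexError:
        print(f"crash-like fault after {attempt} inputs: {fuzz_input!r}")
        break
    except ValueError:
        pass   # handled error, not interesting to a fuzzer
```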

But the problem with fuzzing is it can often get stuck in one area of code and it can't get out.
So we paired it with symbolic execution, which is a much different approach." Symbolic execution tries to bring some order to fuzzing by defining ranges of inputs, varying them to see what combinations of inputs trigger different paths within the program and which cause errors like endless loops, memory buffer overflows, and crashes.

By using a more controlled approach to applying variables, symbolic execution can sometimes drill deeper into programs to expose bugs than fuzzing can.

But it can also be slower. "One of the keys in our strategy was how do you do this handoff between dumb fuzzing and symbolic execution which is more of a formal method," Brumley said. "We have a nice system where they flow between each other." The AI also engaged in a bit of deception, generating fake network traffic "that was actually chaff traffic that we were generating to shoot at our competitors," Brumley added.
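That handoff can be pictured as a loop that fuzzes until code coverage stops improving, then asks a symbolic-execution engine to construct inputs that reach the branches fuzzing missed. The sketch below only illustrates the control flow; run_fuzzer and solve_new_paths are hypothetical stand-ins, not components of Mayhem:

def hybrid_bug_hunt(binary, seeds, run_fuzzer, solve_new_paths, rounds=10):
    # run_fuzzer: an AFL-style fuzzer returning (new inputs, crashes, coverage).
    # solve_new_paths: a symbolic executor returning inputs for unexplored paths.
    corpus, crashes, coverage = list(seeds), [], 0
    for _ in range(rounds):
        new_inputs, new_crashes, new_coverage = run_fuzzer(binary, corpus)
        corpus.extend(new_inputs)
        crashes.extend(new_crashes)
        if new_coverage > coverage:
            coverage = new_coverage      # cheap random mutation is still paying off
            continue
        # Fuzzing is stuck: spend the expensive constraint solving to reach
        # branches the random inputs never hit, then go back to fuzzing.
        corpus.extend(solve_new_paths(binary, corpus))
    return crashes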

This chaff traffic might have been detected by other systems as attempted exploits, triggering patching or IDS changes. Elsewhere in the competition, DeepRed's Rubeus was apparently a bit more dependent on IDS. While Rubeus scored a number of early POVs, in mid-match the availability of its server started to plummet, seriously impacting its score.

Tim Bryant said he wasn't sure what caused the drop (since the airgap was still in place immediately following the competition), but his suspicion was that the IDS may have brought the performance of the server down. By mid-match, almost all of the competitors were relatively close from round to round in their performance, though Mayhem had built up a lead of over 10,000 points.
If it hadn't been for the failure of a software component, Mayhem's margin of victory might have been much wider than the 8,000 points it ultimately won by over TechX's Xandra bot. "As far as we can tell, what happened was that the submitter—the thing that's supposed to submit our patch and POV candidates—started lagging," Brumley said. "It started submitting binaries for the wrong part of the competition.
It's actually the simplest part of the system. We'll just have to do some analysis to figure out what happened.
It's kind of cool, though, because we had such a big lead that we were able to cruise in and it started working again in the end." TechX's Xandra bot earned a $1 million second place finish.
ShellPhish's Mechanical Phish narrowly defeated DeepRed for third, earning the all-student team $750,000. Mechanical Phish also won some other bragging rights—it was the only one of the seven systems to successfully patch the Crackaddr vulnerability with its symbolic execution analysis.

All others choked on the problem. Shall we play a game? Image: ForAllSecure's team celebrates their $2 million win. Image: The champion: Mayhem. In victory, however, there was no rest.

As Alex Rebert, the team leader for ForAllSecure, accepted a trophy from DARPA's Prabhakar and Wilson, he also accepted a challenge to bring Mayhem (virtually) to play in another CTF tournament at DEFCON.

But this tournament, called by some the World Series of Hacking, features humans. "Yeah, we accepted the challenge from the CTF organizer," said Brumley just after the award ceremony. "We're going to go and have our system compete against the best hackers in the world and see how it does.
I think it's going to be pretty exciting—we honestly have no idea what's going to happen.
I think the machine is going to win if there's a high number of challenges, just through brute force.
It will be best at the parts of the competition that require a quick reaction.
I think we're going to have an advantage." But Brumley also noted that human creativity was important in CTF competitions—and that the AI would have to play more aggressively against human competitors. "In the DARPA event, because we weren't competing against humans, we were pretty careful about not doing anything that looks like counter-autonomy (attacks on the system the other AIs were running on).
It wasn't in the scope of the DARPA competition.
In DEFCON, it's a bit more aggressive, so we're going to enable a bit more aggressive techniques." The DEFCON tournament, however, added another wrinkle for Mayhem. Many members of the team would be playing against the AI as the Plaid Parliament of Pwning, a burgeoning dynasty at this competition. "Half the team is going to segregate off and has been playing at DEFCON for the past 10 years," Brumley said. "There'll be an airgap between the two teams, so it won't just be DEFCON vs. Mayhem—it'll be Mayhem against the creators of Mayhem.
It'll be fun." It turned out not to be as much fun as expected. When we checked in with Alex Rebert late on the first day of the DEFCON CTF, he was too busy to talk—the interface between Mayhem and the tournament system wasn't working properly, and he and a teammate were trying to fix it. On Saturday, he and the team minding Mayhem were a bit more relaxed given there was nothing more for them to do but watch.

The late start had hurt them, but he told Ars he was just hoping the system would work well enough not to be a total embarrassment. At the conclusion of the tournament on Sunday, it was clear that Mayhem was not ready to compete with its masters quite yet.

The Plaid Parliament of Pwning again emerged victorious with 15 points, their third victory in four years. Mayhem found itself on the other end of the scoreboard with a single point to its credit. Maybe next year, robots.

Jeep hackers: How we swerved past Chrysler’s car security patches

Clue: It involves physically breaking into a ride this time Black Hat Last year, the Black Hat presentation by Charlie Miller and Chris Valasek caused Chrysler to recall 1.4 million vehicles to install a software update after they proved they could remotely hack Jeeps. This year, in Las Vegas, the pair showed us how to defeat that update. The dynamic duo praised Chrysler's efforts to secure their vehicles, noting that the new firmware won't accept commands via the built-in diagnostic port if the car is traveling more than five miles per hour.
Vehicles also can't receive data via their wireless Sprint connection unless they are fully patched against the vulnerabilities Miller and Valasek found, which makes remote hacking difficult to virtually impossible. So instead, the two focussed on direct physical attacks against the car's Controller Area Network (CAN) bus, by plugging into the OBD-II (on-board diagnostics) port fitted as standard to modern vehicles to control the onboard computers. Using the official mechanics' software program, costing $1,650, they went for a rummage in the car's operating system. Using this method the two found that, with nine hours' work, they could brute-force their way into the car's sensitive subsystems, including the speedometer.
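For a sense of what riding the diagnostics port looks like in the benign, standardised case, the snippet below sends the ordinary OBD-II request for vehicle speed (SAE J1979 mode 0x01, PID 0x0D) using the python-can library. The channel name is an assumption about the test setup, and this has nothing to do with the proprietary messages the pair brute-forced:

import can

# 'can0' is an assumed SocketCAN interface name; adjust for the actual setup.
bus = can.interface.Bus(channel="can0", bustype="socketcan")

# 0x7DF is the OBD-II broadcast request ID; mode 0x01, PID 0x0D asks for speed.
request = can.Message(arbitration_id=0x7DF,
                      data=[0x02, 0x01, 0x0D, 0x00, 0x00, 0x00, 0x00, 0x00],
                      is_extended_id=False)
bus.send(request)

reply = bus.recv(timeout=1.0)
if reply is not None and reply.data[1] == 0x41 and reply.data[2] == 0x0D:
    print(f"Vehicle speed: {reply.data[3]} km/h")    # single byte, in km/h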

By manipulating the speedometer reading so that the car thought it was going under the 5MPH limit, they found it was possible, although not easy, to take control of the vehicle's steering and brakes via diagnostic messages. They found that the adaptive cruise control was pretty secure because it automatically shut down if someone tried to push commands to it.

By reverse-engineering the system, the two managed to get control of the brakes and throttle, though. They also successfully got into the steering system and were able to make the wheel very difficult to turn, and to turn it 90 degrees – with the latter move piling their test vehicle into a ditch.

This could be done at speed and has the potential to kill an unsuspecting driver. The emergency brake was also vulnerable.

The pair were able to permanently lock the brake on and said it would be non-trivial to fix. Miller suggested that a mechanic would probably have to replace the entire braking system, as that would be easier than trying to fix it. Chrysler appears relaxed about this year's hacking, since it requires physical access to the car to work – you basically have to break in and fit a gizmo to the diagnostics port to disrupt the vehicle's operations, rather than hack it anonymously from the other side of the internet. On the other hand, you could sneak a device with some wireless comms into the OBD-II port and take over the car while it's being driven. All these issues could be stopped if only car manufacturers built a basic intrusion detection system into their cars, such as the Can-no hackalator 3000 that the two built in 2014. When asked by The Register why this wasn't being done, Miller said he didn't know, but as cars are typically designed over five-year cycles, the automakers may be working on one and just haven't rolled it out yet.
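Their 2014 detector relied on the observation that legitimate CAN traffic is highly periodic, so injected diagnostic messages stand out as rate anomalies. The sketch below is a generic, simplified version of that idea with made-up thresholds, not the Can-no hackalator's firmware:

from collections import defaultdict, deque

BURST_FACTOR = 3.0   # flag an ID seen at 3x its learned rate (arbitrary choice)
WINDOW = 1.0         # sliding window, in seconds

class CanRateMonitor:
    def __init__(self, baseline):
        self.baseline = baseline              # can_id -> expected messages/sec
        self.recent = defaultdict(deque)      # can_id -> recent timestamps

    def observe(self, can_id, timestamp):
        window = self.recent[can_id]
        window.append(timestamp)
        while window and timestamp - window[0] > WINDOW:
            window.popleft()
        expected = self.baseline.get(can_id)
        if expected and len(window) > expected * WINDOW * BURST_FACTOR:
            return f"possible injection on CAN ID {can_id:#x}"
        return None

# Example: an ID normally broadcast 10 times a second suddenly floods the bus.
monitor = CanRateMonitor({0x244: 10})
alerts = [monitor.observe(0x244, t / 100) for t in range(100)]
print(next(a for a in alerts if a))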
In the meantime, he isn't worried about mass car hacking. "It's hard, so I'm not worried about it," he said. "It's not like one day someone is going to make all the cars in the world stop.

But we can't ignore it just because I'm not worried about it." This will be the last time Miller and Valasek demonstrate their car hacking skills.
Valasek said they had been doing this for four years now and it was time to pass the baton to other researchers.
In the meantime, the two are concentrating on their day jobs: developing automated car software for Uber. ®

Meet The BlackHat NOC People Who Let Malware Roam Free

It's not cool to kill a demo, but you can watch all the pr0n you want Black Hat Neil Wyler and Bart Stump are responsible for managing what is probably the world’s most-attacked wireless network. The two friends, veterans among a team of two dozen, are at the time of writing knee deep in the task of running the network at Black Hat, the security event where the world reveals the latest security messes. The event kicks off with three days of training, then unleashes tempered anarchy as the conference proper gets under way. Wyler, better known as Grifter (@grifter801), heads the network operations centre (NoC) at Black Hat, an event he has loved since he was 12 years old. “I literally grew up among the community,” he says. Bart (@stumper55) shares the job. Wyler's day job is working for RSA's incident response team while Stumper is an engineer with Optiv, but their Black Hat and DEF CON experience trumps their professional status. Wyler has worked with Black Hat for 14 years and DEF CON for 17 years, while Stump has chalked up nine years with both hacker meets. Together with an army of capable network engineers and hackers they operate one of the few hacker conference networks that delegates and journalists are officially advised to avoid. Rightly so; over the next week the world’s talented hacker contingent will flood Las Vegas for Black Hat and DEF CON, the biggest infosec party week of the year.

The diverse talents – and ethics – of the attending masses render everything from local ATMs to medical implants potentially hostile and not-to-be-trusted. Some 23 network and security types represent the network operations centre (NoC) and are responsible for policing the Black Hat network they help create.

Come August each member loosens the strict defensive mindset they uphold in their day jobs as system administrators and security defenders to let the partying hackers launch all but the nastiest attacks over their network. “We will sit back and monitor attacks as they happen," Wyler tells The Register from his home in the US. "It's not your average security job." The Black Hat NoC.
Image: supplied. The crew operates in rotating shifts against the background din of the conference, which sometimes swells into cheers as speakers pull off showy hacks or impressive technical demos.
In the NoC, some laugh, some sleep, and all work in pitch dark broken only by the glow of LEDs and computer screens.

Their score is a backdrop of crunching cheese nachos, old hacker movies, and electronic music. "Picture it in the movies, and that's what it's like," Stump says, commiserating with your Australia-based scribe's Vegas absence; "it'll be quite a sight, you'll be missing something". Delegates need not miss out.

The NoC will again be housed in The Fish Bowl, a glass den shared by the crew and its mascots, Lyle the stuffed ape and Helga the inflatable sheep.

Delegates are welcome to gawk. Risky click The NoC operators at Black Hat and DEF CON need to check their defensive reflexes at the door in part to allow a user base consisting almost entirely of hackers to pull pranks and spar, and in part to allow presenters to legitimately demonstrate the black arts of malware. When you see traffic like that, you immediately go into mitigation mode to respond to that threat," Wyler says. "Black Hat is a very interesting network because you can't do that - we have to ask if we are about to ruin some guy's demonstration on stage in front of 4000 people". Stump recalls intruding on a training session in a bid to claim the scalp of a Black Hat found slinging the infamous Zeus banking trojan. "The presenter says 'it's all good, we are just sending it up to AWS for our labs' and we had a laugh; I couldn't take the normal security approach and simply block crazy shit like this." Flipping malware will get you noticed and monitored by one of the NoC's eager operators who will watch to see if things escalate beyond what's expected of a normal demonstration. If legitimate attacks are seeping out of a training room, the sight of Wyler, Stump, or any other NoC cop wordlessly entering with a walkie-talkie clipped to hip and a laptop under arm is enough for the Black Hat activity to cease. "It is part of the fun for us," Wyler says. "Being able to track attacks to a location and have a chat." Targeting the Black Hat network itself will immediately anger the NoC, however. The team has found all manner of malware pinging command and control servers over its network, some intentional, and some from unwittingly infected delegates. "We'll burst in and say anyone who's MAC address ends with this, clean up your machine," Stump says. $4000 smut-fest Training is by far the most expensive part of a hacker conference. Of the 71 training sessions running over the weekend past ahead of the Black Hat main conference, each cost between US$2500 (£1887, A$3287) and US$5300 (£4000, A$6966) with many students having the charge covered by generous bosses. Bart and the blow up doll cameo on CNN Money. So it was to this writer's initial incredulity that most of the sea of "weird porn" flowing through the Black Hat pipes stems from randy training students. "It is more than it should ever be," Wyler says of the Vegas con's porn obsession. "While you are at a training class - I mean it's not even during lunch." The titillating tidbit was noticed when one NoC cop hacked together a script to pull and project random images from the network traffic on Fish Bowl monitors.

A barrage of flesh sent the shocked operators into laughing fits of ALT-TAB.
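A crude approximation of that kind of script, assuming a saved packet capture rather than a live tap and skipping proper TCP stream reassembly, might carve JPEGs out of the raw payloads with scapy; the capture filename is a placeholder:

from scapy.all import rdpcap, Raw

# Concatenate every raw payload in the capture, then carve anything between
# JPEG start (FF D8 FF) and end (FF D9) markers. Quick and dirty: images
# split awkwardly across streams will come out mangled.
payload = b"".join(bytes(p[Raw].load) for p in rdpcap("capture.pcap") if p.haslayer(Raw))

count, start = 0, 0
while True:
    start = payload.find(b"\xff\xd8\xff", start)
    if start == -1:
        break
    end = payload.find(b"\xff\xd9", start)
    if end == -1:
        break
    with open(f"carved_{count}.jpg", "wb") as f:
        f.write(payload[start:end + 2])
    count, start = count + 1, end + 2

print(f"carved {count} candidate images")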

Another moment was captured when Stump was filmed for CNN Money and a shopper's blow-up doll appeared with perfect timing. Balancing act Black Hat's NoC started as an effective but hacked-together effort by a group of friends just ahead of the conference.

Think Security Onion, intrusion detection running on Kali, and OpenBSD boxes. Now they have brought on security and network muscle, some of it recruited from a cruise through the expo floor, along with two one-gigabit pipes from CenturyLink, each running at about 600Mbps. "We were used to being a group of friends hanging out where a lot of stuff happened on site, and now we've brought in outsiders," Stump says. Ruckus Wireless, Fortinet, RSA and CenturyLink are now some of the vendors that help cater to Black Hat's more than 70 independent networks. "It's shenanigans," Wyler says. "But we love it." The pair do not and cannot work on the DEF CON networks since they are still being built during Black Hat, but they volunteer nonetheless, leading and helping out with events, parties, and demo labs.
"I feel a responsibility to give back to the community which feeds me," Wyler says. "That's why we put in the late nights." ®