
Tag: alarm system

The Nest Cam IQ Outdoor and Nest Hello add outdoor facial-recognition features.
The thermostat might even lack the current version's trademark metal ring.
The concept of a connected car, or a car equipped with Internet access, has been gaining popularity for the last several years.

Proprietary mobile apps provide some useful features, but if a car thief gained access to a victim's phone with the app installed, wouldn't stealing the car become trivial?
What a classically trained guitarist with a master's degree in counseling brings to the table as head of cybersecurity and privacy at one of the world's major healthcare organizations.

Bishop Fox's Vincent Liu sat down recently with GE Healthcare Cybersecurity and Privacy General Manager Richard Seiersen for a wide-ranging chat about security decision making, how useful threat intelligence is, critical infrastructure, the Internet of Things, and his new book on measuring cybersecurity risk. We excerpt highlights below. You can read the full text here. This is the fourth in a series of interviews with cybersecurity experts, by cybersecurity experts.

Vincent Liu: How has decision making played a part in your role as a security leader?

Richard Seiersen: Most prominently, it's led me to the realization that we have more data than we think and need less than we think when managing risk.
In fact, you can manage risk with nearly zero empirical data.
In my new book “How to Measure Anything in Cybersecurity Risk,” we call this “sparse data analytics.” I also like to refer to it as “small data.” Sparse analytics are the foundation of our security analytics maturity model. The other end is what we term “prescriptive analytics.” When we assess risk with near zero empirical data, we still have data, which we call “beliefs.” Consider the example of threat modeling. When we threat model an architecture, we are also modeling our beliefs about threats. We can abstract this practice of modeling beliefs to examine a whole portfolio of risk as well. We take what limited empirical data we have and combine it with our subject matter experts’ beliefs to quickly comprehend risk.

VL: If you’re starting out as a leader, and you want to be more “decision” or “measurement” oriented, what would be a few first steps down this road?

RS: Remove the junk that prevents you from answering key questions. I prefer to circumvent highs, mediums, or lows of any sort, what we call in the book “useless decompositions.” Instead, I try to keep decisions to on-and-off choices. When you have too much variation, risk can be amplified. Most readers have probably heard of threat actor capability.

This can be decomposed into things like nation-state, organized crime, etc. We label these “useless decompositions” when used out of context. Juxtapose these with useful decompositions, which are based on observable evidence.

For example, “Have we or anyone else witnessed this vulnerability being exploited?” More to the point, what is the likelihood of this vulnerability being exploited in a given time frame? If you have zero evidence of exploitability anywhere, your degree of belief would be closer to zero. And when we talk about likelihood, we are really talking about probability. When real math enters the situation, most reactions are, “Where did you get your probability?” My answer is usually something like, “Where do you get your 4 on a 1-to-5 scale, or your ‘high’ on a low, medium, high, critical scale?” A percentage retains our uncertainty.
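As an aside, here is what trading an ordinal score for a probability can look like in practice. This is a minimal sketch, not code from Seiersen's book: it treats the chance that a vulnerability is exploited in a given time window as a Beta-distributed belief and updates it with sparse evidence. The prior parameters and observation counts are invented for illustration.

```python
# Minimal sketch: express "likelihood of exploitation" as a probability with
# uncertainty, instead of a 1-to-5 ordinal score. The prior and the counts
# below are illustrative assumptions, not real data.
import random
from statistics import mean

# Weak prior belief: exploitation is possible but unproven.
# Beta(1, 9) puts the expected probability around 10%.
alpha, beta_param = 1.0, 9.0

# Sparse empirical evidence: exploitation attempts were observed in 1 of 12
# exposure windows (say, months).
windows_observed = 12
windows_with_exploitation = 1

# Bayesian update of the Beta belief with the observed counts.
alpha += windows_with_exploitation
beta_param += windows_observed - windows_with_exploitation

expected_p = alpha / (alpha + beta_param)
print(f"Expected probability of exploitation per window: {expected_p:.1%}")

# Sampling from the posterior keeps the uncertainty visible, which a single
# "4 out of 5" score throws away.
samples = sorted(random.betavariate(alpha, beta_param) for _ in range(10_000))
print(f"Mean of sampled beliefs: {mean(samples):.1%}")
print(f"90% of the belief mass lies below: {samples[9_000]:.1%}")
```

The point is only that a percentage with an explicit spread preserves uncertainty; as Seiersen argues next, a single ordinal label does not.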
Scales are placebos that make you feel as if you have measured something when you actually haven't. This type of risk management based on ordinal scales can be worse than doing nothing.

VL: My takeaway is the more straightforward and simple things are, the better.

The more we can make a decision binary, the better.

Take CVSS (Common Vulnerability Scoring System). You have several numbers that become an aggregate number that winds up devoid of context.

RS: The problem with CVSS is it contains so many useless decompositions.

The more we start adding in these ordinal scales, the more we enter this arbitrary gray area. When it comes to things like CVSS and OWASP, the problem also lies with how they do their math. Ordinal scales are not actually numbers. For example, let’s say I am a doctor in a burn unit.
I can return home at night when the average burn intensity is less than 5 on a 1-to-10 ordinal scale.
If I have three patients with burns that each rank a 1, 3, and 10 respectively, my average is less than a 5. Of course, I have one person nearing death, but it's quitting time and I am out of there! That makes absolutely no sense, but it is exactly how most industry frameworks and vendors implement security risk management.

This is a real problem.
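To make the arithmetic of the burn-unit analogy concrete, here is a toy sketch using the numbers from the example; the "critical" threshold is invented only for illustration.

```python
# Toy illustration of the burn-unit analogy: the scores are ordinal labels,
# not measurements, so averaging them hides the one critical patient.
burn_scores = [1, 3, 10]  # three patients on a 1-to-10 ordinal scale

average = sum(burn_scores) / len(burn_scores)
print(f"Average burn score: {average:.2f}")   # 4.67, under the "go home" threshold of 5
print(f"Doctor goes home? {average < 5}")     # True, despite a patient nearing death

# A binary, evidence-based rule asks a yes/no question instead.
CRITICAL = 9  # threshold chosen only for this sketch
print(f"Any patient critical? {any(s >= CRITICAL for s in burn_scores)}")  # True
```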

That approach falls flat when you scale out to managing portfolios of risk.

VL: How useful is threat intelligence, then?

RS: We have to ask—and not to be mystical here—what threat intelligence means.
If you’re telling me it is an early warning system that lets me know a bad guy is trying to steal my shorts, that’s fine.
It allows me to prepare myself and fortify my defenses (e.g., wear a belt) at a relatively sustainable cost. What I fear is that most threat intelligence data is probably very expensive, and oftentimes redundant noise.

VL: Where would you focus your energy then?

RS: For my money, I would focus on how I design, develop, and deploy products that persist and transmit or manage treasure.

Concentrate on the treasure; the bad guys have their eyes on it, and you should have your eyes directed there, too. This starts in design, and not enough of us who make products focus enough on design. Of course, if you are dealing with the integration of legacy “critical infrastructure”-based technology, you don’t always have the tabula rasa of design from scratch.

VL: You mean the integration of critical infrastructure with emerging Internet of Things technology, is that correct?

RS: Yes; we need to be thoughtful and incorporate the best design practices here.

Also, due to the realities of legacy infrastructure, we need to consider the “testing in” of security.
Ironically, practices like threat modeling can help us focus our testing efforts when it comes to legacy.
I constantly find myself returning to concepts like the principle of least privilege, removing unnecessary software and services.
In short, focusing on reducing attack surface where it counts most. Oldies, but goodies!

VL: When you’re installing an alarm system, you want to ensure it is properly set up before you worry about where you might be attacked. Reduce attack surface, implement secure design, execute secure deployments. Once you’ve finished those fundamentals, then consider the attackers’ origin.

RS: Exactly! As far as the industrial IoT (IIoT) or IoT is concerned, I have been considering the future of risk as it relates to economic drivers...

Connectivity, and hence attack surface, will naturally increase due to a multitude of economic drivers.

That was true even when we lived in analog days before electricity. Now we have more devices, there are more users per device, and there are more application interactions per device per user.

This is exponential growth in attack surface.

VL: And more attack surface means more room for breach.

RS: As a security professional, I consider what it means to create a device with minimal attack surface that still plays well with others.
I would like to add [that] threat awareness should be more pervasive individually and collectively. Minimal attack surface means less local functionality exposed to the bad guy and possibly less compute on the endpoint as well. Push things that change, or need regular updates, to the cloud. "Plays well with others" means making services available for use and consumption; this can include monitoring from a security perspective.

These two goals seem at odds with one another. Necessity then becomes the mother of invention.

There will be a flood of innovation coming from the security marketplace to address the future of breach caused by a massive growth in attack surface.

Richard Seiersen, General Manager of Cybersecurity and Privacy, GE Healthcare

PERSONALITY BYTES

First career interest: Originally a classical musician who transitioned into teaching music.

Start in security: My master's degree capstone project was focused on decision analysis.
It was through this study that I landed an internship at a company called TriNet, which was then a startup. My internship soon evolved into a risk management role with plenty of development and business intelligence.

Best decision-making advice for security leaders: Remove the junk that prevents you from answering key questions.

Most unusual academic credential: Earned a master's in counseling with an emphasis on decision making ages ago. I focused on a framework that combined deep linguistic analysis with goal-setting to model effective decision making. You could call it "agile counseling" as opposed to open-ended soft counseling. More recently, I started a Master of Science in Predictive Analytics. The former degree has affected how I frame decisions and the latter brings in more math to address uncertainty.

Together they are a powerful duo, particularly when you throw programming into the mix.

Number one priority since joining GE: A talent-first approach in building a global team that spans device-to-cloud security.

Bio: Richard Seiersen is a technology executive with nearly 20 years of experience in information security, risk management, and product development.

Currently he is the general manager of cybersecurity and privacy for GE Healthcare. Richard now lives with his family of string players in the San Francisco Bay Area.
In his limited spare time he is slowly working through his MS in predictive analytics at Northwestern. He should be done just in time to retire. He thinks that will be the perfect time to take up classical guitar again.

Vincent Liu (CISSP) is a Partner at Bishop Fox, a cybersecurity consulting firm providing services to the Fortune 500, global financial institutions, and high-tech startups. In this role, he oversees firm management, client matters, and strategy consulting.
Integrated into the front grille of the Cadillac CT6 is a surveillance camera that the driver can secretly activate.

There's one on the rear trunk lid, too.
If the alarm system is triggered, these two cameras activate, and two others on the door-mounted rearview mirrors do as well.

Footage is stored on a removable SD card in the trunk. (Image credit: Cadillac)

When Ars first saw the new Cadillac CT6 at the New York International Auto Show last year, we remarked that it "may well be the company’s most convincing home-grown rival to the mighty German super-sedans like Audi’s A8, BMW’s 7-Series, and Mercedes-Benz’s S-Class." But one feature we missed was that the $53,000-plus machine doubles as a surround-view, gas-powered camcorder on wheels. Sure, vehicles like police cars have dash cams, and there was even a valet cam in the 2015 Corvette.

But the Cadillac CT6 has four cameras secretly offering surround-view video-recording outside the vehicle.
It's an industry first and a new source for capturing YouTube moments, scenic drives, or even other affairs like police stops. "Cadillac expects the surround-vision video recording system to be used by CT6 owners to record events such as a memorable drive, for security in the case of a vehicle being tampered with, or to record an incident," General Motors said of the feature.

According to the automaker, here's how it works: The CT6 utilizes four of the vehicle's seven exterior cameras to provide recorded video of the CT6's surroundings.

The cameras are strategically placed without compromising the sculpted exterior—one in each door-mounted rearview mirror, one integrated into the front grille and one mounted on the rear trunk lid. When the video recording system is activated, the cameras can capture video in one of two modes: using the front and rear cameras during vehicle operation or using all four cameras in a round-robin fashion when the vehicle security system is armed.

The latter mode will only record video once the CT6 has been disturbed.

The same cameras are also used to provide a 360-degree display around the vehicle on the CUE screen to aid in vehicle maneuvering.

Cadillac said the non-HD footage is stored on a removable 32-gig SD card in the trunk, allowing for around 32 hours of filming. On a good day of visibility, the cameras can record several hundred feet away.

The system does not capture audio.

Internally, the automaker is debating the future of this technology. "It's definitively something I know all of our engineering and product planning teams are talking about," Donny Nordlicht, a Cadillac spokesman, told Ars in a telephone interview. Nordlicht said the video-recording feature has been in the vehicles since they hit the market earlier this year, but Cadillac just began hyping the filming technology days ago as part of its marketing strategy. "We're just sort of going piece by piece talking about features of the car," he said.
Software for simple gear given way too much control over locks and sensors

Samsung has said there's nothing for owners of its SmartThings home security gear to worry about – after researchers showed numerous ways to commandeer devices and disable locks. The three researchers from the University of Michigan, who were partially sponsored by Microsoft, have demonstrated that with the help of malicious apps, electronically activated door locks can be opened, alarms set off, and settings fiddled with for smartish homes – all remotely. The research, to be presented at the 37th IEEE Symposium on Security and Privacy later this month, looked at SmartThings because it has the largest third-party app ecosystem and covers a wide range of home automation products. Essentially, owners of Samsung SmartThings Hubs can buy gadgets like motion sensors, water leak sensors, remote-controlled power outlets, coffee makers and door locks, and wirelessly connect the gizmos to their hub.

These devices are then controlled via SmartApps, which are little widgets that run on your smartphone and are installed via Samsung's SmartThings app for iOS, Android and Windows Phone. It was found that these individual SmartApps can wield quite a lot of power over a home's equipment, far more than they should.

They can control hardware entirely unrelated to the gadgets they are supposed to manage. Samsung claims it would never allow a malicious SmartApp to appear in its SmartThings marketplace.

"Our key findings are twofold.

First, although SmartThings implements a privilege separation model, we found that SmartApps can be overprivileged.

That is, SmartApps can gain access to more operations on devices than their functionality requires," they said. "Second, the SmartThings event subsystem, which devices use to communicate asynchronously with SmartApps via events, does not sufficiently protect events that carry sensitive information such as lock PIN codes."

The researchers found that 55 per cent of SmartThings apps ask for access rights to many more SmartThings devices than they need.
In one of the demonstrated hacks, a SmartThings device battery monitoring app could be used to reset a programmable door lock, potentially granting a miscreant easy access to a victim's home. In addition, 42 per cent of apps can gain extra access to SmartThings functions without ever asking the user for permission.
In another example, the team showed off how to set off a false carbon monoxide alert after writing a malicious SmartApp that impersonated the gas detector. The code to carry out these hacks is now up on GitHub, but those with a larcenous bent can forget about using it in real-world attacks: Samsung has fixed the issues that allowed the mischievous apps to work.
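To illustrate the overprivilege finding in the abstract, here is a toy sketch in Python. It is not the researchers' analysis code and does not use the real SmartThings (Groovy) API; the capability names and the example app are invented. The core idea is simply comparing what an app asks to be granted against what its code actually uses.

```python
# Toy illustration of "overprivilege": an app requests more device
# capabilities than its code ever exercises. Capability names and the
# example app are invented; this is not the SmartThings API.
from dataclasses import dataclass, field

@dataclass
class SmartAppManifest:
    name: str
    requested: set[str]                           # capabilities the app asks the user to grant
    used: set[str] = field(default_factory=set)   # capabilities its code actually calls

def overprivileged(app: SmartAppManifest) -> set[str]:
    """Capabilities granted to the app but never used by its code."""
    return app.requested - app.used

# A hypothetical battery-monitor app that only needs to read battery levels,
# yet also requests full control of the door lock.
battery_monitor = SmartAppManifest(
    name="battery-monitor",
    requested={"battery.read", "lock.read", "lock.setCode"},
    used={"battery.read"},
)

extra = overprivileged(battery_monitor)
if extra:
    print(f"{battery_monitor.name} is overprivileged; unused grants: {sorted(extra)}")
```

In the research, an analogous gap is what let a battery-monitoring SmartApp reach door-lock functions it never needed.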
Samsung told The Reg that the researchers involved had been in contact well before publication of their findings, and the chaebol has fixed the issues. "The potential vulnerabilities disclosed in the report are primarily dependent on two scenarios – the installation of a malicious SmartApp or the failure of third party developers to follow SmartThings guidelines on how to keep their code secure," a spokeswoman said. "Regarding the malicious SmartApps described, these have not and would not ever impact our customers because of the certification and code review processes SmartThings has in place to ensure malicious SmartApps are not approved for publication.

"To further improve our SmartApp approval processes and ensure that the potential vulnerabilities described continue not to affect our customers, we have added additional security review requirements for the publication of any SmartApp."

That the IoT industry is woefully bad at security is nothing new – we've been covering these stories for years now.

At least Samsung isn’t as egregious as the SimpliSafe "smart" alarm system, which can be hacked without having to install malicious apps and seems to be unfixable. But it does show that – for all the warnings – things aren't improving fast enough and relying on simple mechanical locks and unconnected alarm systems could be the way forward for the time being. ®