Thursday, November 23, 2017

Tag: Synchronization

Today’s consumers are highly reliant on their mobile applications. If apps don’t work, users won’t use them—it’s that simple. To require an Internet connection for mobile applications is to live in the past. If apps rely on a connection, odds are high that the experience will be sluggish and unpredictable. To avoid reliance on the network, providers of databases and cloud services have added synchronization and offline capabilities to their mobile offerings. Solutions like Couchbase’s Couchbase Mobile, Microsoft’s Azure Mobile Services, Amazon’s Cognito, and Google’s Firebase offer the all-important sync that enables apps to work both online and offline.
As the buzz over the Internet of Things (IoT) ripples across industries, companies from small startups to industry behemoths rush to launch their IoT products.

The dramatic advances in Internet infrastructure, cloud computing, connection bandwidth, and mobile devices over the years have all helped make IoT real.

Given the abundance of ever-evolving computing technologies, there are many choices of computational models and platforms for the design and implementation of an IoT product. Dating back to the 1970s, the actor model didn't gain much attention until recently. The model revolves around a universal primitive called the actor for concurrent and distributed computation.
It provides an idiomatic alternative to the more conventional concurrency model that relies on synchronization of shared mutable state using locks.
In particular, the message-driven style of non-blocking interactions via immutable messages among actors meshes well with contemporary programming approaches on complex distributed platforms.
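The message-driven interaction style just described can be sketched in a few lines of Python. This is an illustrative toy only (production actor systems such as Erlang/OTP or Akka add supervision, distribution, and much more): an actor owns its state privately and reacts to messages arriving in a mailbox, so no locks on shared state are needed.

```python
import queue
import threading

class CounterActor:
    """Minimal actor: private state plus a mailbox of messages,
    processed one at a time by the actor's own thread."""

    def __init__(self):
        self.mailbox = queue.Queue()
        self.count = 0                    # touched only by the actor thread
        self.stopped = threading.Event()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        """Non-blocking, fire-and-forget message send."""
        self.mailbox.put(msg)

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg == "stop":
                self.stopped.set()
                return
            self.count += msg             # safe: single-threaded state access

counter = CounterActor()
for _ in range(10):
    counter.send(1)                       # senders never block on the actor
counter.send("stop")
counter.stopped.wait()
print(counter.count)  # 10
```

Because only the actor's own thread ever mutates `count`, the ten concurrent-looking sends need no synchronization beyond the thread-safe mailbox itself.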
Sometimes ransomware developers make mistakes in their code.

These mistakes could help victims regain access to their original files after a ransomware infection.

This article briefly describes several errors made by the WannaCry ransomware developers.
RISELab, the successor to the U.C. Berkeley group that created Apache Spark, is hatching a project that could replace Spark—or at least displace it for key applications.

Ray is a distributed framework designed for low-latency real-time processing, such as machine learning.

Created by two doctoral students at RISELab, Philipp Moritz and Robert Nishihara, it works with Python to run jobs either on a single machine or distributed across a cluster, using C++ for components that need speed. The main aim for Ray, according to an article at Datanami, is to create a framework that can provide better speeds than Spark.
Spark was intended to be faster than what it replaced (mainly MapReduce), but it still suffers from design decisions that make it difficult to write applications with “complex task dependencies” because of its internal synchronization mechanisms.
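Ray's core abstraction is the remote function: calling one immediately returns a future, and task dependencies form as futures are passed around. As a rough single-machine analogy (using only the Python standard library, not Ray's actual `@ray.remote` API, which additionally distributes tasks across a cluster), the task-and-future pattern looks like this:

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

# Submit independent tasks; each submit() returns a future right away,
# and result() blocks only when the value is actually needed.
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(square, i) for i in range(4)]
    results = [f.result() for f in futures]

print(results)  # [0, 1, 4, 9]
```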
OK, we’re kidding a bit.

Chrome is great.

Google did a wonderful job with it—and continues improving it every day.

The marketplace recognizes this, and many surveys show Chrome is the most popular browser by far. It’s not hard to see why.

Chrome is stable, in part because its architects made a smart decision to put each web page in a separate process.
It has excellent HTML5 standards support, loads of extensions, synchronization across computers, and tight integration with Google’s cloud services.

All of these reasons and more make Chrome the popular choice.
Concurrent collections in .NET are contained inside the System.Collections.Concurrent namespace and provide lock-free and thread-safe implementations of the collection classes.

Thread-safe collections were first introduced in .NET 4, and collections...
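As a cross-language illustration of what a thread-safe collection buys you (Python rather than .NET, since the idea is the same), the standard library's `queue.Queue` lets multiple producer threads and a consumer thread share one collection without any user-level locking, much like `ConcurrentQueue<T>`:

```python
import queue
import threading

# queue.Queue handles its own locking internally, so producers and the
# consumer never take an explicit lock in user code.
q = queue.Queue()
results = []

def producer(n):
    for i in range(n):
        q.put(i)

def consumer():
    while True:
        item = q.get()
        if item is None:          # sentinel: no more work
            return
        results.append(item)      # safe: only the consumer thread appends

producers = [threading.Thread(target=producer, args=(100,)) for _ in range(2)]
worker = threading.Thread(target=consumer)
for t in producers:
    t.start()
worker.start()
for t in producers:
    t.join()
q.put(None)                       # stop the consumer once producers finish
worker.join()
print(len(results))  # 200
```

The sentinel value is a common way to shut down a consumer cleanly; .NET's `BlockingCollection<T>` offers `CompleteAdding()` for the same purpose.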
The good news: Microsoft has finally brought SharePoint file synchronization to OneDrive.

The bad news: Some details may confuse you and your users about how it all works. Businesses have very different approaches to data sharing among users: Some love the idea of a single portal to shared files, while others hate it. Plus, SharePoint can be more than a project file repository (it’s also meant to support discussions and workflow via project websites), and that other value becomes invisible when accessed via OneDrive. Microsoft’s goal is to consolidate its various file managers into one, says Seth Patton, Microsoft’s general manager for OneDrive and SharePoint.

That way, OneDrive and SharePoint stores are treated the same as network and local drives for both the operating system and applications.

That’s exactly the right goal.
Dismissed hacker calls US Govt buddy to nix exposed database

A Pentagon subcontractor has exposed the names, locations, Social Security numbers, and salaries of Military Special Operations Command (SOCOM) healthcare professionals. The 11GB cleartext and openly accessible database also included names and locations of at least two Special Forces analysts with Top Secret government clearance. It exposed pay scales, living quarters, and residences of psychologists and other SOCOM healthcare workers. MacKeeper researcher Chris Vickery found the breach, reporting it to Potomac Healthcare Solutions. He says the company has fixed the exposure, but did not initially appear to take his disclosure seriously. "It is not presently known why an unprotected remote synchronization (rsync) service was active at an IP address tied to Potomac," Vickery says. "I do know that when I called one of the company’s CEOs to report the exposure, he did not seem to take me seriously. It shouldn’t take over an hour to contact your IT guy and kill an rsync daemon." The files were taken down 30 minutes after Vickery called a US Government department contact, informing them of the exposure at Potomac Healthcare Solutions. "It’s not hard to imagine a Hollywood plotline in which a situation like this results in someone being kidnapped or blackmailed for information," he says. "Let’s hope that I was the only outsider to come across this gem." The breach also included financial and accounting information on Potomac Healthcare Solutions.
For software patent defenders, Planet Blue's patent on lip synchronization in animated characters was their last, great hope.

In 2014, the US Supreme Court dealt a major blow to software patents. In their 9-0 ruling in Alice Corp v. CLS Bank, the justices made it clear that just adding fancy-sounding computer language to otherwise ordinary aspects of business and technology isn't enough to deserve a patent. Since then, district court judges have invalidated hundreds of patents under Section 101 of the US patent laws, finding they're nothing more than abstract ideas that didn't deserve a patent in the first place. The great majority of software patents were unable to pass the basic test outlined by the Supreme Court. At the beginning of 2016, the nation's top patent court had heard dozens of appeals on computer-related patents that were challenged under the Alice precedent. DDR Holdings v. Hotels.com was the only case in which a Federal Circuit panel ruled in favor of a software patent-holder. The Alice ruling certainly didn't mean all software patents were dead on arrival—but it was unclear what a software patent would need to survive. Even DDR Holdings left a teeny-tiny target for patent owners to shoot at. That all changed in 2016. Judges on the US Court of Appeals for the Federal Circuit found three more cases in which they believe that software patents were wrongly invalidated. What once looked like a small exception to the rule now looks like three big ones. The results of those cases could portend a coming year that will be friendlier to patent owners than the past few have been. As 2016 winds down, let's take a closer look at the details of these three software patent battles and how patent-holders kept their patents alive through the appeals court.

Enfish LLC v. Microsoft
Decided: May 12, 2016
Panel: Circuit Judges Kimberly Moore, Richard Taranto, Todd Hughes.
In 1993, Enfish Corporation was founded in Pasadena, California, by a former Gemstar executive who wanted to find a better way to track and sort e-mail, files, and other data. By 2000, when Enfish founder Louise Wannier was profiled in the Los Angeles Times, the company had 45 employees and had raised $20 million in capital. But Enfish still wasn't profitable. The "Enfish Find" desktop search tool, and other company products, got positive write-ups in PC World and were downloaded by more than 200,000 users. In the end, though, it wasn't enough. By 2005, Enfish was out of business. The Enfish patents, though, lived on. By 2012, Wannier had formed Enfish LLC and decided to sue several huge software companies: Microsoft, Intuit, Sage Software, and financial tech heavyweights Fiserv and Jack Henry & Associates. The Enfish lawsuit (PDF) claimed that Microsoft's .NET Framework infringed two patents, numbered 6,151,604 and 6,163,775. Enfish claimed to have built a new type of "self-referential" database, with a priority date stretching back to 1995. The district court judge disagreed, though. He said a table is just a table. Emphasizing terms like "non-contiguous memory" (a ubiquitous method for computer storage) and "indexing" wasn't going to save the Enfish patents. In his 2014 order, US District Judge George Wu wrote: "For millennia, humans have used tables to store information. Tables continue to be elementary tools used by everyone from school children to scientists and programmers.... the fact that the patents claim a 'logical table' demonstrates abstractness... Humans engaged in this sort of indexing long before this patent." In May 2016, a Federal Circuit panel reversed Judge Wu. Software improvements are not "inherently abstract," the judges ruled (PDF). The Bilski and Alice cases were directed at processes "for which computers are merely invoked as a tool." Those cases didn't rule out a patent on a "specific asserted improvement in computer capabilities."
The Enfish patent claims were "directed to a specific improvement to the way computers operate, embodied in the self-referential table." The self-referential table "is a specific type of data structure designed to improve the way a computer stores and retrieves memory," and thus deserves a patent. The panel also shot down Wu's finding that the Enfish invention was rendered doubly invalid by Microsoft Excel 5.0, a database product that was in public use more than a year earlier than the Enfish patent was filed. With all five of the patent claims now patent-eligible again, the case was sent back to the lower court. Discovery is underway and a trial is scheduled for 2018. For patent lawyers, the Enfish breakthrough was "like a ray of light at the end of a long dark tunnel," wrote one attorney at Fish & Richardson, the nation's biggest IP law firm, who analyzed the decision in a blog post. "Reaction by the patent bar was swift. Notices of additional authority and requests for reconsideration were submitted to district courts around the country."

McRO v. Bandai Namco Games America
Decided: September 13, 2016
Panel: Circuit Judges Jimmie Reyna, Richard Taranto, Kara Stoll.

For patent system defenders, the next case was clearly a hill to die upon. In its opening Federal Circuit brief, patent-holder McRO Inc., which does business under the name Planet Blue, told the judges that the district court's ruling against it "violates supreme court precedent and threatens all software patents." Planet Blue was founded in 1988 by Maury Rosenfeld, a computer graphics and visual effects designer who worked for shows like Star Trek: The Next Generation, Max Headroom, and Pee Wee's Playhouse, according to his Federal Circuit brief (PDF). Rosenfeld's firm was hired by several video game companies "to work on animation and lip-synchronization projects." But, at some point, they clearly had a falling out.
Beginning in 2012, Planet Blue sued more than a dozen big video game companies, including Namco Bandai (PDF), Sega, Electronic Arts, Activision, Square Enix, Disney, Sony, Blizzard, and LucasArts. Several of those big players had been Rosenfeld clients before the lawsuits. The complaints said the companies infringed two Rosenfeld patents, US Patents No. 6,307,576 and 6,611,278, which describe a method of lip-synching animated characters. Earlier methods of animating lip synchronization and facial expressions, said Planet Blue lawyers, were too laborious and expensive. US District Judge George Wu ruled against Planet Blue in September 2014. He acknowledged that Rosenfeld may have been an innovator, but his patents were nonetheless invalid because they claimed an abstract idea. The patents would have preempted any lip synchronization that used a "rules-based morph target approach." On appeal, the case was immediately seen as one to watch, in part because Rosenfeld was seen as a real innovator in his field. "The patents utilize complex and seemingly specific computer-implemented techniques," wrote Patently-O blogger Prof. Dennis Crouch. "An initial read of the claims in the Planet Blue patents seems to be a far cry from basic method claims." BSA—aka the Software Alliance, a trade group that includes Microsoft and other big software companies—weighed in on the case by filing an amicus brief (PDF) in favor of Planet Blue. The asserted claims weren't abstract, BSA argued. The district court judge, BSA said, had "imported" questions about obviousness into his analysis when he should have only considered a strict Section 101 analysis about abstraction. In BSA's view, the big swath of patents being thrown out in the post-Alice era should "not include claims directed to technological problems specific to the digital environment." The battle was joined on the other side, too. 
The Electronic Frontier Foundation and Public Knowledge jointly submitted a brief (PDF) arguing in favor of the video-game defendants. "The claims embody nothing more than the concept of applying numerical rules—that is, equations—to numerical inputs to obtain numerical outputs," wrote Public Knowledge attorney Charles Duan. "Since the law rejects Appellant’s theories of patentability, Appellant resorts to whitewashing its broad claims by extensively discussing the specification and implementing software. This is totally irrelevant," as it is the claims that are important. When the Federal Circuit found in Planet Blue's favor in September, it was the biggest win yet for software patentees in the post-Alice era. The three-judge panel held that the claims were "limited to rules with specific characteristics." Quoting the specification, they held that "the claimed improvement here is allowing computers to produce 'accurate and realistic lip synchronization and facial expressions in animated characters,' which could previously only have been produced by human animators." The judges didn't buy the defense argument that there was nothing new in having computer-based rules for animation. "Defendants concede an animator's process was driven by subjective determination rather than specific, limited mathematical rules." BSA was jubilant about Planet Blue's win. BSA President Victoria Espinel wrote: "The Federal Circuit’s opinion reaffirms that software is worthy of patent protection just as any other field of technology. Software is a major component of today’s greatest innovations, and it is imperative that our patent system continues to encourage innovators in all fields of technology. Today’s Federal Circuit decision is a step in the right direction." McRO v. Namco Bandai is now back in Wu's Los Angeles courtroom, awaiting a scheduling conference for the litigation to go forward.

Amdocs v. Openet Telecom
Decided: November 1, 2016
Panel: Circuit Judges S. Jay Plager, Pauline Newman, Jimmie Reyna (dissenting).

Israel-based Amdocs went to US courts to sue (PDF) an Irish company, Openet Telecom, in 2010. Amdocs asserted that four patents related to online accounting and billing methods were all derived from the same original application: Nos. 7,631,065; 7,412,510; 6,947,984; and 6,836,797. The patents all describe the same system, which allows network operators to account and bill for internet protocol (IP) traffic. Claim 1 of the '065 patent claims computer code for "receiving... a network accounting record," then correlating the record with other accounting information, then computer code that uses that information to "enhance the first network accounting record." The district court found that Amdocs' claim wasn't much more than the abstract idea of correlating two networks. The court tossed the patent. And the Federal Circuit majority recognized that, in other cases, "somewhat... similar claims" had been thrown out under § 101—but then the Circuit majority went on to say that, despite that, the patent should have been allowed. "[T]his claim entails an unconventional technological solution (enhancing data in a distributed fashion) to a technological problem (massive record flows that previously required massive databases)," wrote US Circuit Judge S. Jay Plager. The components needed were "arguably generic" but had been used in such an "unconventional manner" that they led to "an improvement in computer functionality." The Amdocs saga isn't back in the lower courts quite yet. Defendant Openet has filed a petition for rehearing by the whole court. However it turns out, these three decisions mean that anyone seeking to enforce a software patent will come into 2017 in a far better position than they were a year ago. The Federal Circuit is continuing to debate the patent-eligibility of software.
The random draw of judges on a Federal Circuit panel is increasingly looking like the most important factor in whether a patent prevails or dies at the appeals court. As Crouch notes in his analysis, two of the three judges that made up the majority in the Amdocs case can be seen as being in the minority of the court as a whole, since they pushed back against the Alice precedent. How such a split will be reconciled isn't clear. Crouch points out there may be two or three vacancies on the Federal Circuit during Trump's first term, and the Supreme Court has shown a continued interest in taking up patent cases. But looking back at the key decisions of 2016, anyone wanting to enforce software patents is in a far better position than they were a year ago, thanks to the three decisions above. 2016 may go down in history as the year that saved software patents.
Microsoft's latest Security Intelligence Report says cyber-criminals are compromising virtual machines in the cloud as a way to vastly increase the scale of distributed denial-of-service attacks. Microsoft warned of several new cyber risks faced by IT organizations, including the "weaponization of the cloud," in the company's latest Security Intelligence Report covering the first half of 2016. IT security professionals today are well-versed in the dangers of botnets, particularly now that vulnerable internet of things (IoT) devices are being enlisted by the millions to launch and sustain debilitating attacks.
It was only a matter of time before attackers roped in cloud computing resources to do their bidding. In the new report, released Dec. 14, Microsoft says hackers have learned how to marshal compromised virtual machines running in the cloud to launch massive cyber-attacks. "In the cloud weaponization threat scenario, an attacker establishes a foothold within a cloud infrastructure by compromising and taking control of a few virtual machines," stated the report. "The attacker can then use these virtual machines to attack, compromise, and control thousands of virtual machines—some within the same public cloud service provider as the initial attack, and others inside other public cloud service providers." Once those virtual machines fall under the control of cyber-criminals, they're managed and controlled much like a botnet consisting of IoT devices and infected desktops.

Attackers can issue commands to launch distributed denial-of-service (DDoS) attacks that cripple online services and websites or flood the internet with spam. On its own Azure cloud computing platform, Microsoft witnessed attempts to exploit the cloud to establish communications with malicious IP addresses and to brute-force RDP, the Remote Desktop Protocol Microsoft uses to let users access their desktops over a network; these represented 41 percent and 25.5 percent of all outbound attacks, respectively.
Spam followed at just over 20 percent, and DDoS attempts made up 7.6 percent of attacks. Microsoft is also warning IT administrators to be on the lookout for targeted threats aimed at taking "control of an email account that has a high probability of containing credentials that can be used to gain access to the public cloud administrator portal." If successful, the threats may open both their on-premises and cloud infrastructures to attack. "After logging into the administrator portal, an attacker can gather information and make changes to gain access to other cloud-based resources, execute ransomware, or even pivot back to the on-premises environment," cautioned Microsoft's security researchers.

Attackers are also keeping tabs on GitHub and other public code repositories in the hopes that developers will accidentally publish secret keys that can potentially grant access to cloud accounts and services. Microsoft further warned of "Man in the Cloud" (MitC) attacks, a term coined by security specialist Imperva.
In this scenario, victims are duped into downloading and installing malware, typically via an email containing a malicious link. Once active, the malware searches for a cloud storage folder and replaces the victim's synchronization token with the attacker's.
In this spin on a "man in the middle" attack, each time the user adds a file to their cloud storage account, a copy is delivered to the attacker. Making matters worse, the new token remains even after the malware is found and removed.

Time is running out for NTP

There are two types of open source projects: those with corporate sponsorship and those that fall under the “labor of love” category.

Actually, there’s a third variety: projects that get some support but have to keep looking ahead for the next sponsor. Some open source projects are so widely used that if anything goes wrong, everyone feels the ripple effects. OpenSSL is one such project; when the Heartbleed flaw was discovered in the open source cryptography library, organizations scrambled to identify and fix all their vulnerable networking devices and software. Network Time Protocol (NTP) arguably plays as critical a role in modern computing, if not more; the open source protocol is used to synchronize clocks on servers and devices to make sure they all have the same time. Yet, the fact remains that NTP is woefully underfunded and undersupported. NTP is more than 30 years old—it may be the oldest codebase running on the internet.
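For a sense of the low-level work NTP implementations do, here is a minimal sketch that decodes the transmit timestamp of a raw NTP packet. The field offsets and the 1900 epoch come from the NTP specification (RFC 5905); the helper names and the hand-built demo packet are illustrative, not part of any real NTP codebase.

```python
import struct

# NTP timestamps count seconds from 1900-01-01; Unix time from 1970-01-01.
NTP_DELTA = 2208988800  # seconds between the two epochs

def ntp_to_unix(seconds, fraction):
    """Convert a 64-bit NTP timestamp (32-bit seconds plus a 32-bit
    binary fraction of a second) to Unix time as a float."""
    return seconds - NTP_DELTA + fraction / 2**32

def transmit_time(packet):
    """Extract the transmit timestamp from a 48-byte NTP packet
    (big-endian fields at byte offsets 40-47)."""
    secs, frac = struct.unpack("!II", packet[40:48])
    return ntp_to_unix(secs, frac)

# Demo: a packet whose transmit time is half a second past the Unix epoch.
demo = bytes(40) + struct.pack("!II", NTP_DELTA, 1 << 31)
print(transmit_time(demo))  # 0.5
```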

Despite some hiccups, it continues to work well.

But the project’s future is uncertain because the number of volunteer contributors has shrunk, and there’s too much work for one person—principal maintainer Harlan Stenn—to handle. When there is limited support, the project has to pick and choose what tasks it can afford to complete, which slows down maintenance and stifles innovation. “NTF’s NTP project remains severely underfunded,” the project team wrote in a recent security advisory. “Google was unable to sponsor us this year, and currently, the Linux Foundation’s Core Internet Initiative only supports Harlan for about 25 percent of his hours per week and is restricted to NTP development only.” Last year, the Linux Foundation renewed its financial commitment to NTP for another year via the Core Infrastructure Initiative, but it isn’t enough. The absence of a sponsor has a direct impact on the project. One of the vulnerabilities addressed in the recently released ntp-4.2.8p9 update was originally reported to the project back in June.
In September, the researcher who discovered the flaw, which could be exploited with a single, malformed packet, asked for a status update because 80 days had passed since his initial report.

As the vulnerability had already existed for more than 100 days, Magnus Studman was concerned that more delays gave “people with bad intentions” more chances to also find it. Stenn’s response was blunt. “Reality bites—we remain severely under-resourced for the work that needs to be done. You can yell at us about it, and/or you can work to help us, and/or you can work to get others to help us,” he wrote. Researchers are reporting security issues, but there aren’t enough developers to help Stenn fix them, test the patches, and document the changes.

The Linux Foundation’s CII support doesn’t cover the work on new initiatives, such as the Network Time Security (NTS) and the General Timestamp API, or on standards and best practices work currently underway.

The initial support from CII covers “support for developers as well as infrastructure support.” NTS, currently in draft form with the Internet Engineering Task Force (IETF), would give administrators a way to add security to NTP by securing time synchronization itself.

The mechanism uses Datagram Transport Layer Security (DTLS) to provide cryptographic security for NTP.

The General Timestamp API effort would develop a new time-stamp format that carries more information than just the date and time, making it more broadly useful.

The goal is to also develop an efficient and portable library API to use those time stamps. Open source projects and initiatives struggle to keep going when there isn’t enough support, sponsorship, financial aid, and manpower.

This is why open source security projects frequently struggle to gain traction among organizations. Organizations don’t want to wind up relying on a project when future support is uncertain.
In a perfect world, open source projects that are critical parts of core infrastructure should have permanent funding. NTP is buried so deeply in the infrastructure that practically everyone reaps the project’s benefits for free. NTP needs more than simply maintaining the codebase, fixing bugs, and improving the software. Without help, the future of the project remains uncertain. NTP—or the Network Time Foundation established to run the project—should not have to struggle to find corporate sponsors and donors. “If accurate, secure time is important to you or your organization, help us help you: Donate today or become a member,” NTP’s project team wrote.
The approximately 1.7 million people who use Opera's synchronization service were potentially at risk. However, the breach was blocked quickly. Browser vendor Opera, the victim of a breach last week, reported on Aug. 26 that the attack was blocked quickly.

The breach affected the Opera sync system, which provides users with the ability to synchronize settings and passwords across multiple devices. "Although we only store encrypted [for synchronized passwords] or hashed and salted [for authentication] passwords in this system, we have reset all the Opera sync account passwords as a precaution," Opera developer Tarquin Wilton-Jones wrote in a blog post. Opera has reset end-user passwords for accessing Opera sync to help limit risk.
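The "hashed and salted" scheme Wilton-Jones describes for authentication passwords is a standard pattern. A minimal Python sketch using PBKDF2 (illustrative only, not Opera's actual implementation; function names are made up for the example):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Salted, iterated hashing via PBKDF2-HMAC-SHA256. A unique random
    salt per user means identical passwords produce different digests,
    so a stolen table can't be attacked with one set of precomputed hashes."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected, iterations=200_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, expected)  # constant-time compare

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

The iteration count is the knob that makes brute-forcing a stolen database expensive; it is tuned upward as hardware gets faster.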

Additionally, Opera is advising its sync users to reset passwords for the third-party sites that they used with sync as well. Opera estimates that approximately 1.7 million people use the sync service. It's unclear who attacked Opera or what the root cause of the breach is. Security experts eWEEK contacted were not entirely surprised by Opera's breach. "There are no real surprises with this disclosure by Opera as it was only a matter of time before a browser-based password vault was compromised," Joseph Carson, head of global strategic alliances at Thycotic, told eWEEK. Since the Opera browser's market share is relatively low, the impact of the breach isn't all that large, Carson said. However, he expects the breach will motivate other browser vendors that have greater market share and provide similar functionality to review their security controls and ensure they are not the next victim. Andrew McDonnell, vice president of security solutions at AsTech Consulting, noted that Opera's reset of user passwords and suggestion that stored passwords be changed indicates that Opera is not necessarily confident in the hashing or encryption it used to protect the passwords. Opera uses security controls, such as hashing passwords in its sync service, making it harder for an attacker to use a stolen password database. "Hashing and encryption should be implemented such that the data is useless to an attacker without the passwords or keys, as any sufficiently attractive server can be breached," McDonnell told eWEEK.
"When LastPass' servers were breached last year, password experts like Jeremi Gosney didn't even change their master passwords because the underlying encryption was so strong." LastPass, a password management vendor, disclosed in June 2015 that it was the victim of a breach. Rob Sadowski, director of marketing at RSA, EMC's security division, commented that, overall, password managers are making the use of longer, more complex, and (theoretically) more secure passwords easier and more convenient. "Unfortunately, it also creates a new risk of the password manager being hacked, which is made more risky if that password manager stores passwords for multiple users in the cloud rather than locally, as it did in this case," Sadowski told eWEEK. "A better approach would be to push for more pervasive, easy-to-use two-factor authentication technologies for websites, which would obviate the need for extremely complex password schemes."

Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com.
