How to speed up Apache with mod_pagespeed and Memcached on Ubuntu 15.10

This tutorial shows how to improve the page load times of your website by using Google's mod_pagespeed module for Apache in conjunction with the fast in-memory cache Memcached. mod_pagespeed is an Apache 2 module that optimizes and caches the content of a website before it is delivered to the browser. As a result, pages load faster, the system load on your server drops, and the server can deliver more pages per second.
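In outline, the setup comes down to enabling the module and pointing its metadata cache at a local Memcached instance via a few directives. A minimal sketch of the relevant configuration (the file path and the Memcached port 11211 are common defaults, but treat them as assumptions for your own setup):

```apache
# /etc/apache2/mods-available/pagespeed.conf (excerpt; assumed path)
ModPagespeed on

# A file cache is still required for some cached artifacts.
ModPagespeedFileCachePath "/var/cache/mod_pagespeed/"

# Use a local Memcached server for the metadata cache.
ModPagespeedMemcachedServers localhost:11211
```

After editing the file, restart Apache and verify Memcached is running before relying on the cache.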


Hackers love Microsoft’s PowerShell


PowerShell, a scripting language built into Microsoft operating systems, is widely used to launch cyberattacks, a new report suggests.

The Unified Threat Research report, released by next-generation endpoint security (NGES) firm Carbon Black, says that 38 percent of incidents reported by Carbon Black partners used PowerShell.

During investigations last year, 68 percent of the company’s responding partners encountered PowerShell, and almost a third (31 percent) reported getting no security alerts before the investigation of incidents related to the scripting language.

The majority of attacks (87 percent) were click fraud, fake antivirus programs and ransomware, but social engineering techniques are still the favorite.

“PowerShell is a very powerful tool that offers tremendous benefit for querying systems and executing commands, including on remote machines”, said Ben Johnson, Carbon Black’s chief security strategist and cofounder.

“However, more recently we’re seeing bad guys exploiting it for malicious purposes because it falls under the radar of traditional endpoint security products. This often causes tension between IT and security professionals. PowerShell gives the bad guys a lot of power because it’s part of the native Windows operating system, which makes it difficult for security teams. On the other hand, PowerShell helps IT guys automate various tasks. The two departments need to come together and strike a balance between IT automation and security”.

Published under license; a Net Communities Ltd publication. All rights reserved.


Google launches distributed version of its TensorFlow machine learning system

Google today announced the launch of version 0.8 of TensorFlow, its open source library for doing the hard computation work that makes machine learning possible. Normally, a small point update like this wouldn’t be all that interesting, but with this version, TensorFlow can now run the training processes for building machine learning models across hundreds of machines in parallel…


Announcing TensorFlow 0.8 – now with distributed computing support

Posted by Derek Murray, Software Engineer

Google uses machine learning across a wide range of its products. In order to continually improve our models, it’s crucial that the training process be as fast as possible. One way to do this is to run TensorFlow across hundreds of machines, which shortens the training process for some models from weeks to hours, and allows us to experiment with models of increasing size and sophistication. Ever since we released TensorFlow as an open-source project, distributed training support has been one of the most requested features. Now the wait is over.

Today, we’re excited to release TensorFlow 0.8 with distributed computing support, including everything you need to train distributed models on your own infrastructure. Distributed TensorFlow is powered by the high-performance gRPC library, which supports training on hundreds of machines in parallel. It complements our recent announcement of Google Cloud Machine Learning, which enables you to train and serve your TensorFlow models using the power of the Google Cloud Platform.

To coincide with the TensorFlow 0.8 release, we have published a distributed trainer for the Inception image classification neural network in the TensorFlow models repository. Using the distributed trainer, we trained the Inception network to 78% accuracy in less than 65 hours using 100 GPUs. Even small clusters—or a couple of machines under your desk—can benefit from distributed TensorFlow, since adding more GPUs improves the overall throughput, and produces accurate results sooner.

TensorFlow can speed up Inception training by a factor of 56, using 100 GPUs.

The distributed trainer also enables you to scale out training using a cluster management system like Kubernetes. Furthermore, once you have trained your model, you can deploy to production and speed up inference using TensorFlow Serving on Kubernetes.

Beyond distributed Inception, the 0.8 release includes new libraries for defining your own distributed models. TensorFlow’s distributed architecture permits a great deal of flexibility in defining your model, because every process in the cluster can perform general-purpose computation. Our previous system DistBelief (like many systems that have followed it) used special “parameter servers” to manage the shared model parameters, where the parameter servers had a simple read/write interface for fetching and updating shared parameters. In TensorFlow, all computation—including parameter management—is represented in the dataflow graph, and the system maps the graph onto heterogeneous devices (like multi-core CPUs, general-purpose GPUs, and mobile processors) in the available processes. To make TensorFlow easier to use, we have included Python libraries that make it easy to write a model that runs on a single process and scales to use multiple replicas for training.
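The "simple read/write interface" of a DistBelief-style parameter server described above can be illustrated with a toy sketch. This is plain Python with invented names, purely for illustration; it is not the DistBelief or TensorFlow API:

```python
class ToyParameterServer:
    """Toy stand-in for a DistBelief-style parameter server: a shared
    store exposing only a simple read/write (fetch/update) interface."""

    def __init__(self, sizes):
        # One zero-initialized vector per named parameter.
        self.params = {name: [0.0] * size for name, size in sizes.items()}

    def fetch(self, name):
        # Workers read the current value of a shared parameter.
        return list(self.params[name])

    def update(self, name, gradient, lr=0.1):
        # Workers push gradients; the server applies a gradient step.
        self.params[name] = [p - lr * g
                             for p, g in zip(self.params[name], gradient)]

ps = ToyParameterServer({"w": 2})
ps.update("w", [1.0, -1.0])
print(ps.fetch("w"))  # parameters moved opposite the gradient: [-0.1, 0.1]
```

TensorFlow, by contrast, folds this parameter management into the dataflow graph itself, so no special-purpose server with a fixed interface is required.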

This architecture makes it easier to scale a single-process job up to use a cluster, and also to experiment with novel architectures for distributed training. As an example, my colleagues have recently shown that synchronous SGD with backup workers, implemented in the TensorFlow graph, achieves improved time-to-accuracy for image model training.
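The backup-worker idea can be sketched in a few lines: launch more workers than needed each step, but aggregate only the first N gradients to arrive, so stragglers cannot stall the step. This is a plain-Python illustration with an invented function name, not the actual TensorFlow implementation:

```python
def aggregate_with_backups(gradients_in_arrival_order, n_required):
    """Synchronous SGD with backup workers: average only the first
    n_required gradients to arrive; later (straggler) results are dropped."""
    used = gradients_in_arrival_order[:n_required]
    dim = len(used[0])
    return [sum(g[i] for g in used) / n_required for i in range(dim)]

# 4 workers launched, but only the first 3 to finish are aggregated.
arrived = [[1.0, 2.0], [3.0, 4.0], [2.0, 0.0], [100.0, 100.0]]
print(aggregate_with_backups(arrived, n_required=3))  # [2.0, 2.0]
```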

The current version of distributed computing support in TensorFlow is just the start. We are continuing to research ways of improving the performance of distributed training—both through engineering and algorithmic improvements—and will share these improvements with the community on GitHub. Of course, getting to this point would not have been possible without help from the following people:

  • TensorFlow training libraries – Jianmin Chen, Matthieu Devin, Sherry Moore and Sergio Guadarrama
  • TensorFlow core – Zhifeng Chen, Manjunath Kudlur and Vijay Vasudevan
  • Testing – Shanqing Cai
  • Inception model architecture – Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Jonathon Shlens and Zbigniew Wojna
  • Project management – Amy McDonald Sandjideh
  • Engineering leadership – Jeff Dean and Rajat Monga


Ubuntu 16.04 LTS Will Bring Snap Packages For Up-To-Date, More Secure Apps

An anonymous reader points us to a report on Neowin: Canonical, Ubuntu’s parent company, has announced that Ubuntu 16.04 LTS (Long Term Support) will come with support for the snap packaging format and tools. As a result, end users will get more up-to-date apps, something that proved tricky in the past due to “the complexity of packaging and providing updates,” which prevented updates to some apps from being delivered. Snaps will make the Ubuntu platform more unified, and developers will more easily be able to create software for PC, Server, Mobile, or IoT devices. The other major benefit of snaps is that they’re more secure than software installed through deb packages. Snaps are isolated from the rest of the system, meaning that malware packaged with a snap won’t be able to affect your Ubuntu installation.


Facebook’s React Native gets backing from Microsoft and Samsung

React Native was originally developed by Facebook to let its developers take React, a framework for building single-page apps that the company developed in-house, and use those same skills to build native mobile apps for iOS and Android. As the company announced at its F8 developer conference today, React Native has now been used by more than 500 companies…


React Native on the Universal Windows Platform

Today, Microsoft and Facebook announced at Facebook’s developer conference, F8 2016, that we’re adding Universal Windows Platform (UWP) support to React Native. This is provided as an open source, community-supported framework. The new UWP support extends the reach of these native apps to a new market of 270 million active Windows 10 devices, and the opportunity to reach beyond mobile devices, to PCs, and even the Xbox One and HoloLens. For Windows app developers, it also means an opportunity to embed React Native components into their existing UWP apps and to leverage the developer tools and programming paradigms that React Native offers.

In addition to this work on the core framework support, Microsoft is also providing open source tools and services to help developers create React Native apps.  The React Native extension for Visual Studio Code brings an intuitive, productive environment to author and debug React Native apps. Coupled with CodePush, an open source service that can push updates directly to users, Microsoft is helping the React Native community build and deploy apps faster than ever.

For those unfamiliar, React Native is the fastest growing open source project of 2015, amassing over 30,000 stars on GitHub. Rather than a “write once, run everywhere” framework, React Native expects each platform to differentiate with distinct features and capabilities that apps can, and should, uniquely capture. Its creators instead use the phrase “learn once, write everywhere” to capture the fact that React Native is as much about the programming model and developer tools that populate its ecosystem as it is about sharing code. The same goes for React Native on UWP; an app written for UWP with React Native should feel just as natural as an app written directly in XAML.

As an example, let’s look at the F8 conference schedule app, which shows off many of the modules that are available on React Native for Windows. The app looks and performs great on both the Windows 10 mobile and desktop device families.



Under the hood, React Native enables app builders to declare their UI using JavaScript and React, and the framework translates the React DOM from JavaScript into method calls to view managers on the native platform, allowing developers to proxy direct calls to native modules through JavaScript function invocations. In the case of React Native on UWP, the view managers and native modules are implemented in C#, and the view managers instantiate and operate on XAML elements. We use Chakra for the JavaScript runtime, which can be consumed by any UWP app without any additional binaries being added to the app package.
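The bridge pattern described here — declarative UI on the JavaScript side translated into method calls on native-side view managers — can be sketched language-agnostically. The toy below uses Python for illustration; every class and method name is invented and is not the actual React Native or UWP API:

```python
class ToyViewManager:
    """Stand-in for a native-side view manager that knows how to create
    one kind of platform view (e.g. a XAML element on UWP)."""

    def __init__(self, view_type):
        self.view_type = view_type
        self.calls = []  # record of native calls, for illustration

    def create_view(self, props):
        # In a real framework this would instantiate a native control.
        self.calls.append(("create", self.view_type, props))

class ToyBridge:
    """Stand-in for the bridge: routes each declarative UI node to the
    view manager registered for its view type."""

    def __init__(self):
        self.managers = {}

    def register(self, manager):
        self.managers[manager.view_type] = manager

    def render(self, node):
        # A node is (view_type, props, children); recurse over children.
        view_type, props, children = node
        self.managers[view_type].create_view(props)
        for child in children:
            self.render(child)

bridge = ToyBridge()
bridge.register(ToyViewManager("View"))
bridge.register(ToyViewManager("Text"))
bridge.render(("View", {"padding": 8}, [("Text", {"value": "Hello UWP"}, [])]))
```

In the real UWP implementation, as the post notes, the view managers and native modules are written in C# and the view managers instantiate and operate on XAML elements.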

Today’s announcement and releases are just the beginning. This release provides initial platform support in a standalone GitHub repository. Moving forward, we will work to add additional capabilities and bring our implementation into alignment with the original project.

You can learn more about reference implementations and our experience in building and publishing the F8 Developer Conference app for Windows 10 using React Native at the Decoded Conf in Dublin on May 6th. Come out and meet the team – we’ll be there discussing the project in detail. We invite developers to check out the implementation, to get involved, and to follow us on GitHub.

Written by Eric Rozell, Software Engineer, Microsoft Developer Experience

