Linux Kernel 4.5.4 Is Out, Brings AMDGPU, ARM, Intel i915, Wireless, x86 Updates

While not many GNU/Linux operating systems have adopted Linux kernel 4.5, its development cycle continues at a fast pace, introducing more and more improvements, security patches, and new capabilities.


Original URL: http://feedproxy.google.com/~r/linuxtoday/linux/~3/9vtV9926o44/linux-kernel-4.5.4-is-out-brings-amdgpu-arm-intel-i915-wireless-x86-updates-160511140419.html

Original article

OpenThread, an open-source implementation of the Thread networking protocol

Palo Alto, California — May 11, 2016 — Nest Labs, Inc. (www.nest.com), architect of the thoughtful home, today released OpenThread, an open source implementation of the Thread networking protocol. With OpenThread, Nest is making the technology used in Nest products more broadly available to accelerate the development of products for the connected home. As more silicon providers adopt Thread, manufacturers will have the option of using a proven networking technology rather than creating their own, and consumers will have a growing selection of secure and reliable connected products to choose from.

“Thread makes it possible for devices to simply, securely, and reliably connect to each other and to the cloud,” said Greg Hu, Head of Nest Platform and Works with Nest. “And because Thread is an IPv6 networking protocol built on open standards, millions of existing 802.15.4 wireless devices on the market can be easily updated to run Thread. OpenThread will significantly accelerate the deployment of Thread in these devices, establishing Thread as one of the key networking technology standards for connected products in the home.”

Along with Nest, ARM, Atmel, a subsidiary of Microchip Technology, Dialog Semiconductor, Qualcomm Technologies, Inc., a subsidiary of Qualcomm Incorporated, and Texas Instruments Incorporated are contributing to the ongoing development of OpenThread. In addition, OpenThread can run on Thread-capable radios and corresponding development kits from silicon providers like NXP Semiconductors and Silicon Labs.

“Nest products set the bar for how connected devices should work so it’s exciting that Nest is releasing OpenThread to the open-source community,” said Jeffery Torrance, vice president, business development, Qualcomm Technologies, Inc. “As a company with a longstanding history of actively supporting and contributing to open technologies, OpenThread allows us to work with other like-minded corporations and individuals to deliver a best-in-class implementation of Thread that can be widely used for the advancement of a connected and secure home.”

Simple, Secure, Reliable Connectivity for the Home

Designed to connect products in and around the home into low-power, wireless mesh networks, Thread is backed by industry-leading companies including ARM, Big Ass Solutions, Nest Labs, NXP Semiconductors, OSRAM, Qualcomm, Samsung Electronics, Schneider Electric, Silicon Labs, Somfy, Tyco and Yale Security. Existing popular application protocols and IoT platforms like Nest Weave and ZigBee can run over Thread networks to deliver interoperable, end-to-end connectivity.

Since opening membership in October 2014, the Thread Group has grown to more than 230 members with over 30 products submitted and awaiting Thread certification. In addition to Nest products, a number of devices – including the OnHub, a router from Google – are shipping with Thread-ready radios.

OpenThread Distribution

The initial version of OpenThread is being distributed by Nest on GitHub at https://github.com/openthread/openthread. OpenThread users are welcome to submit Pull Requests. Users will also have access to sample code, the ability to file issues on GitHub, and support on Stack Overflow as well as Nest’s discussion forum.

A demo of OpenThread will be available at Google I/O from May 18th to May 20th in the Nest Sandbox.

About Nest


Nest’s mission is to create a home that’s thoughtful – one that takes care of itself and the people inside it. The company focuses on simple, beautiful and delightful hardware, software and services. The Nest Learning Thermostat™ and Nest Energy Services keep you comfortable and address home energy consumption. The Nest Protect™ smoke and carbon monoxide alarm helps keep you safe and Nest Safety Rewards lets you save money through participating home insurance providers, while Nest Cam™ keeps an eye on what matters most in your home. Nest products are sold in the U.S., U.K., Canada, France, Belgium, Ireland and the Netherlands and are installed in more than 190 countries. The Nest Learning Thermostat has helped save approximately seven billion kWh of energy to date. Through the Works with Nest program, third-party products can securely connect with Nest devices to make homes safer, more energy efficient, and more aware. For more information, visit www.nest.com.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/zBMuJAUBfGE/

Original article

Cache-Control: immutable

About one year ago our friends at Facebook brought an interesting issue to the IETF HTTP Working Group – a lot (20%!) of their transactions for long-lived resources (e.g. css, js) were resulting in 304 Not Modified. These are documents that carry explicit cache lifetimes of a year or more, yet they were being revalidated well before they had even existed for that long. The content, of course, had not changed.

After investigation, this was attributed to people often hitting the reload button. That makes sense on a social networking platform – show me my status updates! Unfortunately, when fetching updates for the dynamic objects, browsers were also revalidating hundreds of completely static resources on the same page. While these do generate 304’s instead of larger 200’s, the requests add up to a lot of time and significant bandwidth, and it turns out they significantly delay the delivery of the minority of content that did change.
Facebook, like many sites, uses versioned URLs – the content behind such a URL never changes; instead, the site changes the subresource URL itself whenever the content changes. This is a common design pattern, but existing caching mechanisms have no way to express it, so when a user clicks reload the browser still checks whether anything has been updated.
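
To make the pattern concrete, here is a minimal Python sketch of how a deploy step might derive a versioned subresource name from a hash of its content, so that the URL changes exactly when the bytes do. The helper name and path layout are purely illustrative, not Facebook's actual scheme:

import hashlib
from pathlib import Path

def versioned_name(path: str) -> str:
    """Return a content-addressed filename, e.g. app.3b5d5c37.css."""
    p = Path(path)
    digest = hashlib.sha256(p.read_bytes()).hexdigest()[:8]
    return f"{p.stem}.{digest}{p.suffix}"

# A deploy script would rename static/app.css to the versioned name and
# regenerate the HTML to reference it; the old URL is never reused.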

IETF standards activity is probably premature without data or running code – so-called hope-based standardization is generally best avoided. Fortunately, HTTP already provides a mechanism for deploying experiments: Cache-Control extensions.

I put together a test build of Firefox using a new Cache-Control extension: immutable. The immutable attribute indicates that the response body will not change over time. It is complementary to the lifetime cacheability expressed by max-age and friends.

Cache-Control: max-age=365000000, immutable

When a client supporting immutable sees this attribute, it should assume that the resource, if unexpired, is unchanged on the server and therefore should not send a conditional revalidation for it (e.g. If-None-Match or If-Modified-Since) to check for updates. Correcting possible corruption (e.g. shift-reload in Firefox) never uses conditional revalidation anyway, and it still makes sense to do with immutable objects if you’re concerned they are corrupted.
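
In pseudocode terms, the check a supporting cache might perform on reload looks roughly like the following. This is a simplified sketch of the intended semantics, not Firefox's actual cache implementation; the entry fields are invented for illustration:

import time

def needs_conditional_revalidation(entry: dict) -> bool:
    """Should a reload send If-None-Match / If-Modified-Since for this entry?"""
    age = time.time() - entry["stored_at"]
    if age < entry["max_age"] and entry.get("immutable", False):
        # Fresh and marked immutable: the body cannot have changed,
        # so skip the 304 round trip entirely.
        return False
    # Expired, or not marked immutable: keep today's behavior and revalidate.
    return True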

This Makes a Big Difference

The initial anecdotal results are encouraging enough to deploy the experiment. This is purely a performance optimization; there is no web-visible semantic here, so it can be withdrawn at any time if that turns out to be the appropriate thing to do.


For the reload case, immutable saves hundreds of HTTP transactions and improves the load time of the dynamic HTML by hundreds of milliseconds, because it no longer competes with the multitude of 304 responses.

Facebook reload without immutable

Facebook reload with immutable

Next Steps

I will land immutable support in Firefox 49 (track the bug). I expect Facebook to be part of the project as we move forward, and any content provider can join the party by adding the appropriate Cache-Control extension to the response headers of their immutable objects. If you do implement it on the server side, drop me a note at mcmanus@ducksong.com with your experience. Clients that aren’t aware of extensions must ignore them per the HTTP specification, and in practice they do, so this should be safe to add to your responses. Note that immutable in Firefox is only honored on https:// transactions.
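
Server-side deployment is just as small. As a hedged example, the sketch below uses Python's standard http.server to attach the header to responses under a hypothetical /static/ prefix reserved for versioned, never-changing assets; any web server's equivalent header directive works the same way:

from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

ONE_YEAR = 365 * 24 * 60 * 60  # seconds

class ImmutableStaticHandler(SimpleHTTPRequestHandler):
    """Serve files, marking versioned assets as immutable (illustrative only)."""

    def end_headers(self):
        # Only content-addressed subresources should get this header;
        # here we key off an assumed /static/ prefix for such files.
        if self.path.startswith("/static/"):
            self.send_header("Cache-Control",
                             f"public, max-age={ONE_YEAR}, immutable")
        super().end_headers()

if __name__ == "__main__":
    # Remember: Firefox only honors immutable over https://, so in practice
    # this would sit behind a TLS-terminating front end.
    ThreadingHTTPServer(("", 8000), ImmutableStaticHandler).serve_forever()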

If the idea pans out I will develop an Internet Draft and bring it back in the standards process – this time with some running code and data behind it.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/NTN-e1RJPwY/cache-control-immutable.html

Original article

Qt Creator 4.0.0 released

We are happy to announce the release of Qt Creator 4.0.0. Starting with this release, we are making the Clang static analyzer integration, extended QML profiler features and auto test integration (experimental) available under open source. The previously commercial-only connection editor and path editor of Qt Quick Designer were already open sourced with Qt Creator 3.6.0. Qt Creator is now available under commercial license and GPLv3 (with exceptions). The exceptions ensure that there are no license restrictions on generated code, and that bridging to 3rd party code is still possible. You can read more about this change in the blog post announcing it.


New Flat Theme and QML Flame Graph

Users of CMake will find that we improved the workflow for CMake-based projects. CMake is now triggered automatically when necessary, and kit settings like the used Qt version or tool chain are automatically configured. Projects mode now features a UI to change the CMake configuration for a build directory. You can also change the CMake configuration that is common to all projects that use the same kit. Qt Creator will no longer create CMake build directories before the project is built. This makes for a much tidier work environment, especially when users only want to study source code using the CMake build system.

The Clang code model is now automatically used if the (experimental) plugin is turned on. We added customizable configurations for warnings, which you can also specify per project.

On the debugging side we fixed multiple issues that appeared with the new LLDB included in Xcode 7.3 on OS X. You’ll also find more pretty printers for standard types, as well as many bug fixes.

If you wonder where Analyze mode has gone: it was merged with Debug mode. In the new, unified Debug mode you now find the Debugger, Clang Static Analyzer, Memcheck, Callgrind and QML Profiler tools. The QML Profiler adds a new visualization of statistics: the flame graph. In this view, the horizontal bars show the amount of time all invocations of a function took, and the vertical nesting on top shows which functions were called by which other ones, making for a very concise overview.

In Qt Quick Designer you can now move the canvas by dragging with the left mouse button while the space key is pressed. Qt Quick Designer also adds support for the new Qt Quick Controls 2 and has received many bug fixes. Please also head over to the post and video about Qt Quick Designer and Qt Quick Controls 2 that we recently published on our blog.

Qt Creator now also has a new, flat theme, which is based on the concept that Diana presented a year ago. It is the default for everyone who has never changed the theme in Qt Creator. The old theme is still available as “Classic” in Tools > Options > Environment > Interface.

You find a more detailed list of improvements in our change log.

The open-source version is available on the Qt download page, and you will find commercially licensed packages on the Qt Account Portal. Please post issues in our bug tracker. You can also find us on IRC in #qt-creator on chat.freenode.net, and on the Qt Creator mailing list.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/RBn6mAwpnwc/

Original article

Mozilla Launches Test Pilot, A Firefox Add-On For Trying Experimental Features

An anonymous reader writes: Mozilla today launched Test Pilot, a program for trying out experimental Firefox features. To try the new functionality Mozilla is offering for its browser, you have to download a Firefox add-on from testpilot.firefox.com and enable an experiment. The main caveat is that experiments are currently only available in English (though Mozilla promises to add more languages “later this year”). Test Pilot was first introduced for Firefox 3.5, but the new program has been revamped since then, featuring three main components: Activity Stream, Tab Center and Universal Search. Activity Stream is designed to help you navigate your browsing history faster, surfacing your top sites along with highlights from your browsing history and bookmarks. Tab Center displays open tabs vertically along the side of your screen. Mozilla says Universal Search “combines the Awesome Bar history with the Firefox Search drop down menu to give you the best recommendations so you can spend less time sifting through search results and more time enjoying the web.”





Original URL: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/1I4viyopbp4/mozilla-launches-test-pilot-a-firefox-add-on-for-trying-experimental-features

Original article

Facebook Open-Sources Capture the Flag Competition Platform To Encourage Students and Developers

An anonymous reader writes: Facebook announced today that it is making its gamified security training platform called Capture the Flag (CTF) open source in an effort to encourage students and developers to learn about online security and bugs. The platform, which is popular at hacker conventions such as Def Con, pits different teams of hackers against one another. The social juggernaut itself has run CTF competitions at events across the world. “By open sourcing our platform, schools, student groups, and organizations across all skill levels can now host competitions, practice sessions, and conferences of their own to teach computer science and security skills,” wrote Gulshan Singh, a software engineer on Facebook’s threat infrastructure team. “We’re also releasing a small repository of challenges that can be used immediately upon request (to prevent cheating).”





Original URL: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/C36s6qmA56U/facebook-open-sources-capture-the-flag-competition-platform-as-it-encourages-students

Original article

How Fastly Built Its Own Routing Layer to Scale Its CDN

This post is the first in a series detailing the evolution of network software at Fastly. We’re unique amongst our peers in that, from inception, we’ve always viewed networking as an integral part of our product rather than a cost center. We rarely share what we do with the wider networking community, however, in part because we borrow far more from classic systems theory than contemporary networking practice.

Building abstractions

Before delving further, it is important to clarify that while we write software for networks, we shy away from using the term software-defined networking (SDN) to describe what we do.

For one, SDN perpetuates the misconception that computer networking was ever about anything other than software — “running code and rough consensus” was an IETF mantra long before networking vendors reduced it to a sound bite. The term itself has since been adopted and reinterpreted in order to legitimize specific approaches to network management. In the process, SDN has evolved into the most effective of buzzwords: a portmanteau so devoid of meaning you begin to question your own understanding of the problem.

The fact that the wider networking industry is rediscovering the relevance of software is inevitable given the sheer scale of modern networks. With the successive waves of virtualization and then containerization, it’s not uncommon for a single data center rack to contain more uniquely addressable endpoints than the entire internet had just three decades ago. The complexity of managing so many devices has understandably shifted much of the community’s attention towards the need for automation in order to reduce spiralling costs in operating infrastructure.

While this is an important development, automation is not our end goal. Our primary focus is in uncovering effective abstractions for building network applications.

If the delineation between the two feels subtle, consider the challenge of propagating the mapping between hostnames and addresses to all nodes in a network. There are numerous ways one could automate this process — from something as simple as a cron job to emailing nodes with updates [1]. Instead, the solution adopted by early internet pioneers was the Domain Name System (DNS), a hierarchical, decentralized naming system. Over time, the layer of indirection provided by DNS has been crucial to the development of a number of value-added services such as content delivery networks. Automation saves time; abstractions make time work for you.

Scaling out

The expertise of our core founding team lay primarily in high-performance systems, which turns out to be essential to offering low-latency, distributed caching as a service. Bootstrapping a content delivery network, however, is inherently tricky, given that incumbents start with a larger geographic footprint. Our initial offering therefore focused on what our industry had failed to provide — unprecedented visibility and control over how content gets delivered at the edge.

During our formative years our lack of experience in networking was largely irrelevant — our typical point of presence (POP) was composed of two hosts directly connected to providers over the Border Gateway Protocol (BGP). By early 2013 we had grown to the point where that number of hosts was no longer sufficient. Scaling our topology by connecting more caches directly to our providers, as shown in figure 1a, was not an option: providers are reluctant to support this due to the cost of ports and the configuration complexity of setting up additional BGP sessions.

Figure 1: impact of scaling network topology

An obvious solution to this problem would be to install a network device, as shown in figure 1b, which neatly decouples the increase in number of devices from the increase in number of providers. This network device would typically be a router, which is a highly specialized device for forwarding traffic with a price tag to match. While this would be an acceptable compromise if the overall volume of devices were low, the nature of a content delivery network is to constantly expand both in geographic reach and volume of traffic. Today, our smallest POP has at least two network devices, as shown in figure 1c.

An overview of how such a network would work with a router is shown in figure 2. A router receives routes directly from providers over BGP, and inserts them into the Forwarding Information Base (FIB), the lookup table implemented in hardware used for route selection. Hosts then forward traffic to the router, which forwards packets to the appropriate next hop according to the resulting lookup in the device FIB.

Figure 2: network topology using a router

The larger the FIB, the more routes a device can hold. Unfortunately, the relationship between FIB size and cost is not linear. Border routers must be able to hold the full internet routing table, which now exceeds 600,000 entries [2]. The hardware required to support this space is the primary cost associated with routers.
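
Conceptually, the lookup a FIB performs is a longest-prefix match over the destination address. A toy version in Python makes the idea concrete; real FIBs implement this in TCAM or compressed tries rather than a linear scan, and the prefixes and nexthops below are illustrative:

import ipaddress

# Toy FIB: (prefix, nexthop) pairs. A border router holds the full
# internet table -- hundreds of thousands of these -- in hardware.
FIB = [
    (ipaddress.ip_network("0.0.0.0/0"), "192.0.2.1"),        # default route
    (ipaddress.ip_network("198.51.100.0/24"), "192.0.2.2"),
    (ipaddress.ip_network("198.51.100.128/25"), "192.0.2.3"),
]

def lookup(dst: str) -> str:
    """Return the nexthop for dst via longest-prefix match."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, nh) for net, nh in FIB if addr in net]
    _, nexthop = max(matches, key=lambda m: m[0].prefixlen)
    return nexthop

assert lookup("198.51.100.200") == "192.0.2.3"  # /25 wins over /24 and /0
assert lookup("203.0.113.7") == "192.0.2.1"     # falls through to default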

Routing without routers

In traditional cloud computing environments, the cost of border routers is quickly dwarfed by the sheer volume of servers and switches they are intended to serve. For CDNs, however, the cost is much more than a mere inconvenience. In order to place content closer to end users, CDNs must have a large number of vantage points from which they serve content. As a result, network devices can represent a significant amount of the total cost of infrastructure.

The idea of dropping several millions of dollars on overly expensive networking hardware wasn’t particularly appealing to us. As systems engineers we’d much rather invest the money in commodity server hardware, which directly impacts how efficiently we can deliver content.

Our first observation was that we didn’t need most of the features provided by routers, because we were not planning on becoming a telco any time soon. Switches seemed like a much more attractive proposition, but lacked the one feature that made routers useful to us in the first place: FIB space. At the time, switches could typically only hold tens of thousands of routes in the FIB, orders of magnitude fewer than we needed. By 2013, hardware vendors such as Arista had begun to provide a feature that could overcome this physical limitation: they would allow us to run our own software on their switches.

Freed from the shackles of having to obey the etiquette of sensible network design, our workaround took form relatively quickly. Instead of relying on FIB space in a network device, we could push routes out towards the hosts themselves. BGP sessions from our providers would still be terminated at the switch, but from there the routes would be reflected down to the hosts.

Figure 3: BGP route reflection

This approach is presented in figure 3. An external BGP (eBGP) session is terminated in a userspace BGP daemon, such as BIRD, which runs on our switch. The routes received are then pushed over internal BGP (iBGP) sessions down to a BIRD instance running on the hosts, which then injects routes directly into the host kernel.

This solves our immediate problem of bypassing the switch FIB entirely, but it still leaves the problem of how to send packets back towards the internet. A FIB entry is composed of a destination prefix (where a packet is going) and a nexthop address (where it’s going through). In order to forward a packet to a nexthop, a device must know the nexthop’s physical address on the network. This mapping is stored in the Address Resolution Protocol (ARP) table.

Figure 3 illustrates that the switch has the appropriate ARP information for our providers, since it is directly connected to them. The hosts, however, do not, and therefore cannot resolve any of the nexthops they have been handed over BGP.

Figure 4: ARP propagation using Silverton

Silverton: a distributed routing agent

This was the starting point for Silverton, our custom network controller which orchestrates route configuration within our POPs. We realized that we could simply run a daemon on the switch which subscribed to changes to the ARP table through the API provided on Arista devices. Upon detecting a change to a provider’s physical MAC address, Silverton could then disseminate this information throughout the network, and clients on the hosts would reconfigure our servers with information on how to directly reach our providers.

For a given provider IP and MAC address, the first step performed by the client-side agent of Silverton is to fool the host into believing the IP is reachable directly over an interface, i.e. that it is link-local. This can be achieved by configuring the provider IP as a peer on the interface, and is easily replicated on Linux using iproute2 (the host's own address goes first):

$ ip addr add <host address> peer 10.0.0.1 dev eth0

If the host believes the provider IP is link local, it will be forced to look up the MAC address for that IP in its ARP table. We can manipulate that too:

$ ip neigh replace 10.0.0.1 lladdr aa:aa:aa:aa:aa:aa nud permanent dev eth0

Now every time a route lookup for a destination returns the nexthop 10.0.0.1, the host will end up sending traffic to aa:aa:aa:aa:aa:aa directly. The switch receives data frames from the host addressed to a physical MAC address which is known to be directly connected, and it determines which interface to forward the frame out of by consulting its local MAC address table, which maintains a mapping between a destination MAC address and the outbound interface.
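
Stitched together, the client-side work is little more than applying those two commands whenever the controller announces a (provider IP, MAC) pair. The following Python sketch shows the general shape; the function name, arguments, and the way updates arrive are hypothetical, not Silverton's actual code:

import subprocess

def install_provider(host_addr: str, provider_ip: str, provider_mac: str,
                     dev: str = "eth0") -> None:
    """Make provider_ip look link-local and pin its MAC (illustrative)."""
    # Pretend the provider is directly reachable on this interface.
    subprocess.run(["ip", "addr", "add", host_addr,
                    "peer", provider_ip, "dev", dev], check=True)
    # Pin the ARP entry so the nexthop resolves without ever ARPing.
    subprocess.run(["ip", "neigh", "replace", provider_ip,
                    "lladdr", provider_mac, "nud", "permanent",
                    "dev", dev], check=True)

# Example update as it might arrive from the controller:
# install_provider("10.0.0.2", "10.0.0.1", "aa:aa:aa:aa:aa:aa")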

While this entire process may seem convoluted in order to merely forward packets out of a POP, our first iteration of Silverton contained less than 200 lines of code, and yet instantly saved us hundreds of thousands of dollars for every POP we deployed. Importantly, unlike hardware, software can also be incrementally refined. Over time, Silverton has grown to encompass all of our dynamic network configuration, from labelling description fields to manipulating routing announcements and draining BGP sessions.

More than saving money, however, Silverton provided us with a valuable abstraction. It maintained the illusion that every host is directly connected to every provider, which was our starting point (figure 1a). By maintaining multiple routing tables in the kernel and selecting which table to look up on a per-packet basis, we were able to build tools and applications on top of Silverton which can override route selection. An example of this is an internal utility called st-ping, which pings a destination over all connected providers:

Figure 5: ping over all transit providers in a POP. Delay is shown for a single destination IP, and therefore not representative of global provider performance.

Pushing path selection all the way to the application allowed us to develop far greater introspection into network behavior, which we then used to drive content delivery performance at the edge.

Going forward: what else can we get away with?

Silverton served as a reminder that nothing is stronger than an idea whose time has come.

Had we attempted to implement Silverton two years earlier, we would have hit a wall: no vendor on the market would have provided us with programmatic access to the core networking components we needed. Fortunately we found ourselves looking for switches at a time when Arista were starting to formalize access to the internal APIs on their devices.

Had we attempted to implement Silverton today, we would have long since bought into the collective delusion that you need routers to do routing. As it turns out, routers are about as expensive to get rid of as they are to buy, since the people you hire to configure them are just as specialized as the hardware they maintain. By avoiding routers entirely we were able to build a networking team with a different mindset on how you operate a network, and we’ve been reaping the benefits ever since.

Upon validating the first proof of concept for Silverton in early 2013, a natural question arose: what else can we get away with? The next post in this series will explore how we applied the same principles of deception in order to handle inbound traffic and perform seamless load balancing.

[1] RFC 849: Suggestions for improved host table distribution (https://tools.ietf.org/html/rfc849)

[2] Growth of the BGP Table (http://bgp.potaroo.net/)


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/tUoakZJweB0/building-and-scaling-fastly-network-part-1-fighting-fib

Original article

IDG Contributor Network: How-to: Configuring Linux usage limits with Docker and AWS ECS

Linux has become a dominant OS for application back ends and microservices in the cloud. Usage limits (aka ulimits) are a critical Linux application performance tuning tool. Docker is now the leading mechanism for application deployment and distribution, and AWS ECS is one of the top Docker container services. It’s more important than ever for developers to understand ulimits and how to use them in Linux, Docker, and a service like AWS ECS.

The purpose of ulimits is to limit a program’s resource utilization to prevent a runaway bug or security breach from bringing the whole system down. It is easy for modern applications to exceed the default open file limits very quickly.
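
As a concrete illustration, a process can inspect and (up to its hard limit) raise its own open-file limit through the standard library; this is the same knob exposed from the outside by docker run's --ulimit nofile=<soft>:<hard> flag and by the ulimits section of an ECS container definition. The target of 65536 below is just an example value:

import resource

# Read the current soft/hard limits on open file descriptors.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open files: soft={soft} hard={hard}")

# An unprivileged process may raise its soft limit up to the hard limit.
target = 65536
new_soft = target if hard == resource.RLIM_INFINITY else min(target, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))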



Original URL: http://www.computerworld.com/article/3067303/cloud-computing/how-to-configuring-linux-usage-limits-with-docker-and-aws-ecs.html#tk.rss_all

Original article
