React-Boilerplate v3: The “JS Fatigue Antivenin” Edition

React Boilerplate (RBP) v3.0.0 is out, and it’s a complete rewrite! :tada:

We’ve focused on becoming a rock-solid foundation to start your next project
with, no matter what its scale. You get to focus on writing your app because we
focus on making that as easy as pie.



  • Scaffolding: Thanks to @somus, you can now run $ npm run generate in your
    terminal and immediately create new components, containers, sagas, routes and
    selectors! No more context switching, no more “Create new file, copy and paste
    that boilerplate structure, bla bla”: just npm run generate and go.

    Oh… and starting a project got a whole lot easier too: npm run setup. Done.

  • Revamped architecture: Following the incredible discussion in #27 (thanks
    everybody for sharing your thoughts), we now have a weapons-grade, domain-driven
    application architecture.

    “Smart” containers are now isolated from stateless and/or generic components,
    tests are now co-located with the code that they validate.

  • New industry-standard JS utilities: We’re now making the most of…

    • ImmutableJS
    • reselect
    • react-router-redux
    • redux-saga
  • Huge CSS Improvements

    • CSS Modules: Finally, truly modular, reusable styles
    • Page-specific CSS: smart Webpack configuration means that only the CSS
      your components need is served
    • Standards rock: Nothing beats consistent styling, so we beefed up the
      quality checks with stylelint to help ensure that you and your team
      stay on point.
  • Performance

    • Code splitting: splitting/chunking by route means the leanest, meanest
      payload (because the fastest code is the code you don’t load!)
    • PageSpeed Metrics are built right in with npm run pagespeed
  • Testing setup: Thanks to @jbinto’s herculean efforts, testing is now a
    first-class citizen of this boilerplate (the example app has 99% test coverage!).
    Karma and enzyme take care of unit testing, while ngrok tunnels your local
    server for access from anywhere in the world – perfect for testing on different
    devices in different locations.

  • New server setup: Thanks to the mighty @grabbou, we now use express.js to
    give users a production-ready server right out of the box. Hot reloading is
    still as available as always, but adding a custom API or a non-React page to
    your application is now easier than ever :smile:

  • Cleaner layout: We’ve taken no prisoners with our approach to keeping your
    code the star of the show: wherever possible, the new file layout keeps the
    config in the background so that you can keep your focus where it needs to be.

  • Documentation: Thanks to @oliverturner, this boilerplate has some of the best
    documentation going. Not just clearly explained usage guides, but easy-to-follow
    removal guides for most features too. RBP is just a launchpad: don’t want to
    use a bundled feature? Get rid of it quickly and easily without having to dig
    through the code.

  • Countless small improvements: Everything, from linting pre-commit (thanks
    @okonet!) to code splitting to cross-OS compatibility is now tested and ready
    to go:

    • We finally added a CoC
    • Windows compatibility has improved massively
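
To give a feel for why reselect earns its place in the list above: it memoizes derived state so expensive computations rerun only when their inputs change. The sketch below is a minimal TypeScript illustration of that idea, not reselect’s actual implementation; all names here are illustrative.

```typescript
// A minimal sketch of the memoization idea behind reselect: a selector
// recomputes derived data only when one of its inputs changes.
type Selector<S, R> = (state: S) => R;

function createSelector<S, A, B, R>(
  inputA: Selector<S, A>,
  inputB: Selector<S, B>,
  combine: (a: A, b: B) => R
): Selector<S, R> {
  let last: { a: A; b: B; result: R } | null = null;
  return (state: S) => {
    const a = inputA(state);
    const b = inputB(state);
    // Recompute only when an input changed (reference equality, like reselect).
    if (last === null || a !== last.a || b !== last.b) {
      last = { a, b, result: combine(a, b) };
    }
    return last.result;
  };
}

// Usage: derive visible todos without recomputing on unrelated state changes.
interface State { todos: string[]; filter: string }
const getTodos = (s: State) => s.todos;
const getFilter = (s: State) => s.filter;
const getVisibleTodos = createSelector(getTodos, getFilter, (todos, filter) =>
  todos.filter((t) => t.includes(filter))
);
```

Because results are cached by reference, pairing selectors like this with ImmutableJS state (where every update yields a new reference) makes change detection both cheap and reliable.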

Original URL:

Original article

Improving Docker with Unikernels: Introducing HyperKit, VPNKit and DataKit

We’ve been working hard to build native Docker for Mac and Windows apps to ensure that your Docker experience  is as seamless as possible on the most popular developer operating systems. Docker for Mac and Windows include everything required to spin up a Linux Docker container that efficiently bridges storage and networking from the host into the Docker containers. They work transparently on both MacOS X and Windows, and require no other third party software.

Docker has always been built on open-source foundations: Solomon Hykes is presenting a keynote today at OSCON 2016 about the incremental revolution that the firehose of collaborative open source development has enabled throughout Docker’s history.  Today, we are adding to our existing open source contributions by open sourcing the core technology that powers the Docker for Mac and Windows desktop applications!

Building Docker for Mac and Windows has required integrating hardware virtualization, embedded operating systems and unikernel technology, all without exposing this magic to the end user. Let’s take a look under the hood of our applications to understand what some of this source code does, and give you a better idea of how to contribute to it or use it in your own projects.

When you run Docker for Mac, it spins up a lightweight hypervisor that exists solely to run a single, embedded Linux instance that includes the latest stable release of Docker Engine. Unlike most hypervisors, this requires no special admin privileges since it uses the included Hypervisor Framework (available since OSX 10.10). The Docker application also bundles libraries that supply the Docker VM with host networking and storage capabilities that map intelligently between Linux and OSX/Windows semantics.

Today, we are excited to announce the open-sourcing of these discrete components, the same source code we use in the release builds of Docker for Mac and Windows. The new components are:

  • HyperKit: A lightweight virtualization toolkit on OSX
  • DataKit: A modern pipeline framework for distributed components
  • VPNKit: A library toolkit for embedding virtual networking

Each of these kits can be used independently or together to form a complete product such as Docker for Mac or Windows.  This is just the beginning: we will open more components in the future as they mature (e.g. the filesystem framework).  They all have a set of curated Pioneer Projects for beginners to take on: HyperKit, DataKit, and VPNKit.



HyperKit is based around a lightweight approach to virtualization that is possible due to the Hypervisor framework being supplied with MacOS X 10.10 onwards. HyperKit applications can take advantage of hardware virtualization to run VMs, but without requiring elevated privileges or complex management tool stacks.

HyperKit is built on the xhyve and bhyve projects, with additional functionality to make it easier to interface with other components such as VPNKit or DataKit. Since HyperKit is broadly structured as a library, linking it against unikernel libraries is straightforward. For example, we added persistent block device support that uses the MirageOS QCow libraries written in OCaml.

How can you contribute?

There are three great areas for contribution:

  • Support for booting more guest operating systems. Linux is the only “first class” operating system supported at the moment. FreeBSD does boot, but requires running the installer and so isn’t as seamless. Patches exist to add more BIOS support to boot Windows, OpenBSD, or NetBSD, but require more testing.
  • Support for more high-level language bindings. Because HyperKit is structured as a library, it can be interfaced with high-level languages using their normal foreign function interfaces.
  • Hypervisor features. Several traditional hypervisor features such as suspend/resume, live relocation and support for hardware performance counters are not supported. These need to be added in the same library style as the rest of the codebase, in order to ensure that HyperKit remains lightweight and easy to embed.

We will ensure that any contributions are structured such that they can be submitted to their respective upstream projects.

How else can you use it?

Any applications that need to spin up specialised or short-lived virtual machines can benefit from linking against HyperKit. These could be conventional operating systems such as Linux, or some of the unikernel projects once they have been ported to HyperKit.


DataKit is a toolkit to coordinate processes with a git-compatible filesystem interface. It revisits the UNIX pipeline concept and the Plan9 9P protocol, but with a modern twist: streams of tree-structured data instead of raw text. DataKit lets you define complex workflows between loosely coupled processes using something as simple as shell scripts interacting with a version controlled file-system.

DataKit is a rethinking of application architecture around data flows, bringing back the wisdom of Plan 9’s “everything is a file”, in the git era where “everything is a versioned file”. Since we are making use of DataKit and 9P heavily in Docker for Mac and Windows, we are also open sourcing go-p9p, a modern, performant 9P library for Go.
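
The “everything is a versioned file” idea can be sketched in a few lines: every write produces a new immutable snapshot, so loosely coupled processes can coordinate by reading files at known commits. This toy TypeScript store only illustrates the concept; DataKit’s real interface is a 9P filesystem backed by git, and all names below are illustrative.

```typescript
// Toy sketch of "everything is a versioned file": each write appends a new
// immutable snapshot (a "commit"), so readers can observe consistent history.
type Snapshot = Map<string, string>;

class VersionedStore {
  private history: Snapshot[] = [new Map()];

  // Writing never mutates an old snapshot; it appends a new commit.
  write(path: string, contents: string): number {
    const next = new Map(this.head());
    next.set(path, contents);
    this.history.push(next);
    return this.history.length - 1; // commit id
  }

  // Read from HEAD, or from any earlier commit for a consistent view.
  read(path: string, commit?: number): string | undefined {
    const snap = commit === undefined ? this.head() : this.history[commit];
    return snap?.get(path);
  }

  private head(): Snapshot {
    return this.history[this.history.length - 1];
  }
}

// Usage: a CI process publishes status updates; consumers can pin a commit.
const store = new VersionedStore();
const c1 = store.write("/ci/status", "pending");
store.write("/ci/status", "success");
```

A shell script driving a real DataKit mount would do the same thing with plain file reads and writes, which is what makes the pipeline model so approachable.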

How else can you use it?

There is a sample project using DataKit to create a Continuous Integration system in 50 lines of shell scripts in this repository:

The README also covers DataKit integration with GitHub. DataKit can be used in any situation where you need to coordinate processes around data, and shines when it is around versioned data.

How can you contribute?

GitHub PR support in DataKit is still quite basic; this is an area that could use additional contributions. DataKit could be used for a very broad set of use cases: share how you use it in your projects.


VPNKit is a networking library that translates between raw Ethernet network traffic and the equivalent socket calls on MacOS X or Windows. It is based on the MirageOS TCP/IP unikernel stack, and is a library written in OCaml. VPNKit is useful when you need fine-grained control over networking protocols in user-space, with the additional convenience of being extensible in a high-level language.

How can you contribute?

VPNKit provides an interception point for all container traffic going through Docker for Mac or Windows. It could be extended with support for packet capture and inspection, protocol proxying to filter for particular traffic patterns, or even HTTP protocol visualisation for debugging web applications.

How else can you use it?

If VPNKit had support for more endpoint types, it could also be used to test network traffic without the overhead of actually generating and transmitting it.  It could also be used to build lightweight overlay networks between application components.

Next Steps

While VPNKit and DataKit started life as quite specialised components in Docker for Mac and Windows, we are excited by the possibilities enabled by open sourcing them. The ideas here are by no means exhaustive, and we are looking forward to hearing about your own projects. Please file issues in the respective bug trackers as ideas come to you, or if you wish to discuss a particular one.

And if you are at OSCON please come meet and collaborate with the maintainers of these projects in our OSCON Contribute session on Thursday 3 to 6 PM in Meeting Room 6. You can find more details about the internals of Docker for Mac and Windows in the slides for the talk I gave yesterday at OSCON.

If you haven’t already, please sign up for the Docker for Mac and Windows beta and send us feedback to make it better as we head towards general availability.  Finally, we would once again like to thank all of the open source efforts that made this release possible. The Docker for Mac and Windows acknowledgements list the hundreds of contributions that we use directly in our product, and we hope that you will also be able to check out and benefit from today’s releases in your own creations.



117 million LinkedIn emails and passwords from a 2012 hack just got posted online

A LinkedIn hack from back in 2012 is still causing problems for its users. The company announced this morning that another data set from the hack, which contains over 100 million LinkedIn members’ emails and passwords, has now been released. In response to this new data dump, LinkedIn says it’s working to validate the accounts and contact affected users so they can reset their…


Nvidia brings its Grid virtual desktop to the masses

Nvidia is introducing a new graphics card option for its Grid virtual desktop system, promising to cut the costs of streaming graphics-intensive applications to employees.

The new card, the Tesla M10, includes 4 GPUs and 32GB of memory, or enough compute power to stream desktop apps to 64 end users, according to Nvidia.

Customers buy the graphics hardware in Grid servers from partners such as Hewlett Packard Enterprise, Dell, Cisco Systems and Nutanix, along with virtualization software such as VMware Horizon, Citrix XenApp and Citrix XenDesktop.

Tesla M10 (Image: Nvidia)

Proponents say running apps centrally and streaming them to end users can reduce hardware and management costs. Users can get by with cheaper PCs that don’t have enough compute power to run graphics-heavy programs. It can also make workers more mobile, because the streamed apps can be accessed from anywhere and on almost any client, including a tablet.



AWS X1 instances – 1.9 TB of memory

Many AWS customers are running memory-intensive big data, caching, and analytics workloads and have been asking us for EC2 instances with ever-increasing amounts of memory.

Last fall, I first told you about our plans for the new X1 instance type. Today, we are announcing availability of this instance type with the launch of the x1.32xlarge instance size. This instance has the following specifications:

  • Processor: 4 x Intel Xeon E7-8880 v3 (Haswell) running at 2.3 GHz – 64 cores / 128 vCPUs.
  • Memory: 1,952 GiB with Single Device Data Correction (SDDC+1).
  • Instance Storage: 2 x 1,920 GB SSD.
  • Network Bandwidth: 10 Gbps.
  • Dedicated EBS Bandwidth: 10 Gbps (EBS Optimized by default at no additional cost).

The Xeon E7 processor supports Turbo Boost 2.0 (up to 3.1 GHz), AVX 2.0, AES-NI, and the very interesting (to me, anyway) TSX-NI instructions. AVX 2.0 (Advanced Vector Extensions) can improve performance on HPC, database, and video processing workloads; AES-NI improves the speed of applications that make use of AES encryption. The new TSX-NI instructions support something cool called transactional memory. The instructions allow highly concurrent, multithreaded applications to make very efficient use of shared memory by reducing the amount of low-level locking and unlocking that would otherwise be needed around each memory access.
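TSX itself is a hardware feature and can’t be shown in a portable snippet, but the optimistic pattern it accelerates can be sketched: do the work speculatively, then commit only if no conflicting writer got there first, retrying on conflict. The TypeScript below is purely illustrative (real TSX tracks conflicts at the cache-line level in hardware, and JavaScript is single-threaded, so nothing here can actually race).

```typescript
// Software analogue of the optimistic concurrency that TSX provides in
// hardware: speculate, then commit only if no conflict was observed.
interface VersionedCell<T> { version: number; value: T }

function atomicUpdate<T>(cell: VersionedCell<T>, f: (v: T) => T): T {
  for (;;) {
    const seenVersion = cell.version;   // "begin transaction"
    const result = f(cell.value);       // speculative work, no lock held
    if (cell.version === seenVersion) { // no concurrent commit: publish
      cell.value = result;
      cell.version++;
      return result;
    }
    // Conflict: another writer committed first; retry, as TSX does on abort.
  }
}

// Usage: increment a shared counter without a conventional lock.
const counter: VersionedCell<number> = { version: 0, value: 0 };
atomicUpdate(counter, (n) => n + 1);
```

The payoff in real TSX is that uncontended critical sections skip the lock entirely; only actual conflicts pay the retry cost.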

If you are ready to start using the X1 instances in the US East (Northern Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore), or Asia Pacific (Sydney) Regions, please request access and we’ll get you going as soon as possible. We have plans to make the X1 instances available in other Regions and in other sizes before too long.

3-year Partial Upfront Reserved Instance Pricing starts at $3.970 per hour in the US East (Northern Virginia) Region; see the EC2 Pricing page for more information. You can purchase Reserved Instances and Dedicated Host Reservations today; Spot bidding is on the near-term roadmap.

Here are some screen shots of an x1.32xlarge in action. lscpu shows that there are 128 vCPUs spread across 4 sockets:

On bootup, the kernel reports on the total accessible memory:

The top command shows a huge number of running processes and lots of memory:

Ready for Enterprise-Scale SAP Workloads
The X1 instances have been certified by SAP for production workloads. They meet the performance bar for SAP OLAP and OLTP workloads backed by SAP HANA.

You can migrate your on-premises deployments to AWS and you can also start fresh. Either way, you can run S/4HANA, SAP’s next-generation Business Suite, as well as earlier versions.

Many AWS customers are currently running HANA in scale-out fashion across multiple R3 instances. Many of these workloads can now be run on a single X1 instance. This configuration will be simpler to set up and less expensive to run. As I mention below, our updated SAP HANA Quick Start will provide you with more information on your configuration options.

Here’s what SAP HANA Studio looks like when run on an X1 instance:

You have several interesting options when it comes to disaster recovery (DR) and high availability (HA) when you run your SAP HANA workloads on an X1 instance. For example:

  • Auto Recovery – Depending on your RPO (Recovery Point Objective) and RTO (Recovery Time Objective), you may be able to use a single instance in concert with EC2 Auto Recovery.
  • Hot Standby – You can run X1 instances in 2 Availability Zones and use HANA System Replication to keep the spare instance in sync.
  • Warm Standby / Manual Failover – You can run a primary X1 instance and a smaller secondary instance configured to persist only to permanent storage.  In the event that a failover is necessary, you stop the secondary instance, modify the instance type to X1, and reboot. This unique, AWS-powered option will give you quick recovery while keeping costs low.

We have updated our HANA Quick Start as part of today’s launch. You can get SAP HANA running in a new or existing VPC within an hour using a well-tested configuration:

The Quick Start will help you to configure the instance and the associated storage, install the requisite operating system packages, and install SAP HANA.

We have also released a SAP HANA Migration Guide. It will help you to migrate your existing on-premises or AWS-based SAP HANA workloads to AWS.


