American Library Association (ALA) Reference and User Services Association (RUSA): Best Free Reference Websites List

The list includes:

Please see:

RUSA’s Emerging Technologies Section selects annual list of Best Free Reference Websites

For information about the American Library Association (ALA) and its Reference and User Services Association (RUSA), please see here and here.

Cross-posted at Law Library Blog.

Original URL:  

Original article

Build a WebKit browser

Linux User & Developer: In this tutorial, we will show you how to get WebKit set up on your system, along with some pointers on how to construct a browser from the source, so that you can then go ahead and begin creating a custom project.

Original URL:  

Original article

Drupal 8.0.3 and 7.42 released

Drupal 8.0.3 and Drupal 7.42, maintenance releases with numerous bug fixes (no security fixes), are now available for download.

See the Drupal 8.0.3 release notes and Drupal 7.42 release notes for full lists of included fixes.

Upgrading your existing Drupal 8 and 7 sites is recommended. There are no major or non-backwards-compatible features in these releases. For more information about the Drupal 8.x release series, consult the Drupal 8 overview. More information on the Drupal 7.x release series can be found in the Drupal 7.0 release announcement.

Security information

We have a security announcement mailing list and a history of all security advisories, as well as an RSS feed with the most recent security advisories. We strongly advise Drupal administrators to sign up for the list.

Drupal 8 includes the built-in Update Manager module, which informs you about important updates to your modules and themes.

There are no security fixes in these releases of Drupal core.

Bug reports

Drupal 8.0.x and 7.x are actively maintained, so more maintenance releases will be made available according to our monthly release cycle.

Change log

Drupal 8.0.3 contains bug fixes and documentation and testing improvements only. The full list of changes between the last 8.0.x patch release and the 8.0.3 release can be found by reading the 8.0.3 release notes. A complete list of all changes in the stable 8.0.x branch can be found in the git commit log.

Drupal 7.42 contains bug fixes and minor new features. The full list of changes between the last 7.x patch release and the 7.42 release can be found by reading the 7.42 release notes. A complete list of all changes in the stable 7.x branch can be found in the git commit log.

Update notes

See the 8.0.3 release notes and 7.42 release notes for details on important changes in these releases.

Known issues

See the 8.0.3 release notes and 7.42 release notes for known issues.


Original URL:  

Original article

Scientist: Measure Twice, Cut Over Once

Today we’re releasing Scientist 1.0 to help you rewrite critical code with confidence.

As codebases mature and requirements change, it is inevitable that you will need to replace or rewrite a part of your system. At GitHub, we’ve been lucky to have many systems that have scaled far beyond their original design, but eventually there comes a point when performance or extensibility break down and we have to rewrite or replace a large component of our application.


A few years ago when we were faced with the task of rewriting one of the most critical systems in our application — the permissions code that controls access and membership to repositories, teams, and organizations — we began looking for a way to make such a large change and have confidence in its correctness.

There is a fairly common architectural pattern for making large-scale changes known as Branch by Abstraction. It works by inserting an abstraction layer around the code you plan to change. The abstraction simply delegates to the existing code to begin with. Once you have the new code in place, you can flip a switch in the abstraction to begin substituting the new code for the old.
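The pattern can be sketched in a few lines of Ruby. The class and method names below are hypothetical stand-ins, not GitHub's actual permissions code: callers go through one abstraction, and a switch decides whether the legacy or new path runs.

```ruby
# Hypothetical sketch of Branch by Abstraction. Callers use a single
# abstraction; a switch decides whether the legacy or new path runs.
class PermissionsCheck
  def initialize(use_new_path: false)
    @use_new_path = use_new_path
  end

  # Callers only ever see this method, so flipping the switch later
  # requires no changes at the call sites.
  def allowed?(user)
    @use_new_path ? new_check(user) : legacy_check(user)
  end

  private

  # Stand-ins for the real legacy and rewritten implementations.
  def legacy_check(user)
    user != :banned
  end

  def new_check(user)
    user != :banned
  end
end

PermissionsCheck.new.allowed?(:alice)                      # legacy path
PermissionsCheck.new(use_new_path: true).allowed?(:alice)  # new path
```

Because every caller already goes through `allowed?`, the eventual cutover is a one-line change at the abstraction rather than a change at every call site.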

Using abstractions in this way is a great way to create a chokepoint for calls to a particular code path, making it easy to switch over to the new code when the time comes, but it doesn’t really ensure that the behavior of the new system will match the old system — just that the new system will be called in all places where the old system was called. For such a critical piece of our system architecture, this pattern only fulfilled half of the requirements. We needed to ensure not only that the new system would be used in all places that the old system was, but also that its behavior would be correct and match what the old system did.

Why tests aren’t enough

If you want to test correctness, you just write some tests for your new system, right? Well, not quite. Tests are a good place to start verifying the correctness of a new system as you write it, but they aren’t enough. For sufficiently complicated systems, it is unlikely you will be able to cover all possible cases in your test suite. If you do, it will be a large, slow test suite that slows down development considerably.

There’s also a more concerning reason not to rely solely on tests to verify correctness: Since software has bugs, given enough time and volume, your data will have bugs, too. Data quality is the measure of how buggy your data is. Data quality problems may cause your system to behave in unexpected ways that are not tested or explicitly part of the specifications. Your users will encounter this bad data, and whatever behavior they see will be what they come to rely on and consider correct. If you don’t know how your system works when it encounters this sort of bad data, it’s unlikely that you will design and test the new system to behave in the way that matches the legacy behavior. So, while test coverage of a rewritten system is hugely important, how the system behaves with production data as the input is the only true test of its correctness compared to the legacy system’s behavior.

Enter Scientist

We built Scientist to fill in that missing piece and help test the production data and behavior to ensure correctness. It works by creating a lightweight abstraction called an experiment around the code that is to be replaced. The original code — the control — is delegated to by the experiment abstraction, and its result is returned by the experiment. The rewritten code is added as a candidate to be tried by the experiment at execution time. When the experiment is called at runtime, both code paths are run (with the order randomized to avoid ordering issues). The results of both the control and candidate are compared and, if there are any differences in that comparison, those are recorded. The duration of execution for both code blocks is also recorded. Then the result of the control code is returned from the experiment.
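The runtime behavior described above can be mimicked with a small self-contained sketch. This is an illustration of the idea, not the Scientist gem's internals:

```ruby
# Toy version of an experiment run: execute control and candidate in a
# random order, time each one, compare their results, and always return
# the control's value to the caller.
def run_experiment(control:, candidate:)
  observations = {}
  [[:control, control], [:candidate, candidate]].shuffle.each do |name, block|
    started = Time.now
    observations[name] = { value: block.call, duration: Time.now - started }
  end
  mismatched = observations[:control][:value] != observations[:candidate][:value]
  # A real implementation would publish the observations and the mismatch
  # flag somewhere; here we simply hand them back.
  [observations[:control][:value], mismatched]
end

value, mismatched = run_experiment(
  control:   -> { [1, 2, 3].include?(2) },  # legacy code path
  candidate: -> { [1, 2, 3].member?(2) }    # rewritten code path
)
# value is the control's result; mismatched records any difference
```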

From the caller’s perspective, nothing has changed. But by running and comparing both systems and recording the behavior mismatches and performance differences between the legacy system and the new one, you can use that data as a feedback loop to modify the new system (or sometimes the old!) to fix the errors, measure, and repeat until there are no differences between the two systems. You can even start using Scientist before you’ve fully implemented the rewritten system by telling it to ignore experiments that mismatch due to a known difference in behavior.

The diagram below shows the happy path that experiments follow:

[Diagram: Scientist control flow]

Happy paths are only part of a system’s behavior, though, so Scientist can also handle exceptions. Any exceptions encountered in either the control or candidate blocks will be recorded in the experiment’s observations. An exception in the control will be re-raised at the end of the experiment, since this is the “return value” of that block; exceptions in candidate blocks will not be raised, since that would create an unexpected side effect of the experiment. If the candidate and control blocks raise the same exception, this is considered a match, since both systems are behaving the same way.
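These exception rules can be illustrated the same way; again, a toy sketch of the behavior described above, not the gem's implementation:

```ruby
# Observe a block as either a returned value or a raised exception class.
def observe(block)
  [:value, block.call]
rescue StandardError => e
  [:raised, e.class]
end

# Candidate exceptions are swallowed (only recorded); a control exception
# is re-raised; identical outcomes, including identical exception classes,
# count as a match.
def run_with_exceptions(control:, candidate:)
  control_obs   = observe(control)
  candidate_obs = observe(candidate)
  matched = (control_obs == candidate_obs)
  kind, payload = control_obs
  raise payload if kind == :raised  # surface the control's exception
  [payload, matched]
end

value, matched = run_with_exceptions(
  control:   -> { true },
  candidate: -> { raise ArgumentError }  # recorded as a mismatch, not raised
)
# value == true, matched == false
```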


Let’s say we have a method to determine whether a repository can be pulled by a particular user:

class Repository
  def pullable_by?(user)
    is_collaborator?(user)
  end
end

But the is_collaborator? method performs poorly, so you have written a new method to replace it:

class Repository
  def has_access?(user)
    # new, faster access check goes here
  end
end

To declare an experiment, wrap the original method body in a science block and name your experiment:

def pullable_by?(user)
  science "repository.pullable-by" do |experiment|
  end
end

Declare the original body of the method to be the control branch — the branch to be returned by the entire science block once it finishes running:

def pullable_by?(user)
  science "repository.pullable-by" do |experiment|
    experiment.use { is_collaborator?(user) }
  end
end

Then specify the candidate branch to be tried by the experiment:

def pullable_by?(user)
  science "repository.pullable-by" do |experiment|
    experiment.use { is_collaborator?(user) }
    experiment.try { has_access?(user) }
  end
end

You may also want to add some context to the experiment to help debug potential mismatches:

def pullable_by?(user)
  science "repository.pullable-by" do |experiment|
    experiment.context :repo => id, :user => user.id
    experiment.use { is_collaborator?(user) }
    experiment.try { has_access?(user) }
  end
end


By default, all experiments are enabled all of the time. Depending on where you are using Scientist and the performance characteristics of your application, this may not be safe. To change this default behavior and have more control over when experiments run, you’ll need to create your own experiment class and override the enabled? method. The code sample below shows how to override enabled? to enable each experiment a percentage of the time:

class MyExperiment
  include ActiveModel::Model
  include Scientist::Experiment

  attr_accessor :percentage

  def enabled?
    rand(100) < percentage
  end
end

You’ll also need to override the new method to tell Scientist to create new experiments with your class rather than the default experiment implementation:

module Scientist::Experiment
  def self.new(name)
    MyExperiment.new(name: name)
  end
end

Publishing results

Scientist is not opinionated about what you should do with the data it produces; it simply makes the metrics and results available and leaves it up to you to decide how and whether to store it. Implement the publish method in your experiment class to record metrics and store mismatches. Scientist passes an experiment’s result to this method. A Scientist::Result contains lots of useful information about the experiment such as:

  • whether an experiment matched, mismatched, or was ignored
  • the results of the control and candidate blocks if there was a difference
  • any additional context added to the experiment
  • the duration of the candidate and control blocks

At GitHub, we use Brubeck and Graphite to record metrics. Most experiments use Redis to store mismatch data and additional context. Below is an example of how we publish results:

class MyExperiment
  include Scientist::Experiment

  def publish(result)
    name = result.experiment_name

    $stats.increment "science.#{name}.total"
    $stats.timing "science.#{name}.control", result.control.duration
    $stats.timing "science.#{name}.candidate", result.candidates.first.duration

    if result.mismatched?
      $stats.increment "science.#{name}.mismatch"
      store_mismatch_data(result)
    end
  end

  def store_mismatch_data(result)
    payload = {
      :name            => name,
      :context         => context,
      :control         => observation_payload(result.control),
      :candidate       => observation_payload(result.candidates.first),
      :execution_order => result.observations.map(&:name)
    }

    Redis.lpush "science.#{name}.mismatch", payload
  end
end


By publishing this data, we get graphs that look like this:

[Graphs: Scientist mismatch rate and control/candidate performance]

And mismatch data like:

  context:
    repo: 3
    user: 1
  name: "repository.pullable-by"
  execution_order: ["candidate", "control"]
  candidate:
    duration: 0.0015689999999999999
    exception: nil
    value: true
  control:
    duration: 0.000735
    exception: nil
    value: false

Using the data to correct the system

Once you have some mismatch data, you can begin investigating individual mismatches to see why the control and candidate aren’t behaving the same way. Usually you’ll find that the new code has a bug or is missing a part of the behavior of the legacy code, but sometimes you’ll find that the bug is actually in the legacy code or in your data. After the source of the error has been corrected, you can start the experiment again and repeat this process until there are no more mismatches between the two code paths.

Finishing an experiment

Once you are able to conclude with reasonable confidence that the control and candidate are behaving the same way, it’s time to wrap up your experiment! Ending an experiment is as simple as disabling it, removing the science code and control implementation, and replacing it with the candidate implementation.

def pullable_by?(user)
  has_access?(user)
end


There are a few cases where Scientist is not an appropriate tool to use. The most important caveat is that Scientist is not meant to be used for any code that has side-effects. A candidate code path that writes to the same database as the control, invalidates a cache, or otherwise modifies data that affects the original, production behavior is dangerous and incorrect. For this reason, we only use Scientist on read operations.

You should also be mindful that using Scientist in production imposes a performance cost, since both code paths run. New experiments should be introduced slowly and carefully, and their impact on production performance should be closely monitored. They should run only as long as is necessary to gain confidence, rather than being left to run indefinitely, especially for expensive operations.


We make liberal use of Scientist for a multitude of problems at GitHub. This development pattern can be used for something as small as a single method or something as large as an external system. The Move Fast and Fix Things post is a great example of a short rewrite made easier with Scientist. Over the last few years we’ve also used Scientist for projects such as:

  • a large, multi-year-long rewrite and clean up of our permission code
  • switching to a new code search cluster
  • optimizing queries — this lets us ensure not only that the new query performs better, but also that it is still correct and doesn’t unintentionally return more, less, or different data
  • refactoring risky parts of the codebase — to ensure no unintentional changes have been introduced

If you’re about to make a risky change to your Ruby codebase, give the Scientist gem a try and see if it can help make your work easier. Even if Ruby isn’t your language of choice, we’d still encourage you to apply Scientist’s experiment pattern to your system. And of course we would love to hear about any open source libraries you build to accomplish this!

Original URL:  

Original article

Show HN: Lightweight Microservices Architecture for the Internet of Things

Lelylan is an IoT cloud platform based on a lightweight microservices architecture.

The Lelylan platform is both hardware-agnostic and platform-agnostic. This means you can connect any hardware, from the ESP8266 to the most professional embedded hardware solutions and everything in between – and it can run on any public cloud, in your own private datacenter, or even in a hybrid environment, whether virtualized or on bare metal.

[Image: Lelylan logo]

Why Lelylan

Research in the Internet of Things is global and growing fast, but it lacks standard tools, and many companies are building their own solutions. By sharing what we have learned over the years, we want to create a shared code base with a clear focus on developers. To see Lelylan in action, check out the tutorials in the dev center.



Lelylan is tested against the technologies below.

  • Ruby MRI ~1.9.3
  • Node ~0.8.8
  • MongoDB ~2.6
  • Redis ~2.6

Remember to run MongoDB and Redis.

$ mongod
$ redis-server



Lelylan is composed of different microservices.
Follow the installation guidelines for each of them to set up the platform in development.

If everything works correctly, you can access the Lelylan APIs and connect your hardware.


In production, every microservice needs the following environment variables set. Remember to change them to your own microservice, MongoDB, Redis, and cache values.

Environment Variable                    Description
RACK_ENV=production                     Production rack environment
RAILS_ENV=production                    Production rails environment
NODE_ENV=production                     Production node environment
                                        OAuth 2.0 microservice URL
                                        Devices API microservice URL
                                        Types API microservice URL
                                        Subscriptions API microservice URL
                                        Profiles API microservice URL
MONGOLAB_PEOPLE_URL=mongodb://:@:/      OAuth 2.0 MongoDB URL
MONGOLAB_DEVICES_URL=mongodb://:@:/     Devices API MongoDB URL
MONGOLAB_TYPES_URL=mongodb://:@:/       Types API MongoDB URL
MONGOLAB_JOBS_URL=mongodb://:@:/        Event Bus MongoDB URL
MEMCACHIER_USERNAME=                    Cache server username
MEMCACHIER_PASSWORD=                    Cache server password
REDIS_URL                               Background Job Redis URL
REDIS_RATE_LIMIT_URL=redis://:@:/       Rate Limit Redis URL

We are studying solutions like Docker, Mesos, and Ansible to simplify the installation process. If you are experimenting in the same area, get in touch with us.


The Roadmap describes the items the project has decided to prioritize. It should serve as a reference point for Lelylan contributors to understand where the project is going, and help determine whether a contribution might conflict with longer-term plans.

The fact that a feature isn’t listed here doesn’t mean a patch for it will automatically be refused (we also miss important things). We are always happy to receive patches for cool new features we haven’t thought about or didn’t judge a priority. However, understand that such patches might take longer for us to review.

Check out the roadmap to see our near-future goals.

Contributing to Lelylan

This contributing guide explains how to contribute to one or more Lelylan microservices. It contains information about reporting issues, as well as tips and guidelines useful to experienced open source contributors.

Check out the contributing guide to help us with Lelylan.


Use the available communication channels to share your ideas, problems, or suggestions.


Lelylan is licensed under the Apache License, Version 2.0.

Original URL:  

Original article

Dell Edge Gateway 5000 to support natively flashing UEFI firmware under Linux

(Published on behalf of Mario Limonciello, OS Architect of Dell Client Solutions Group’s Linux Engineering team.)

I’m happy to announce that starting with the Dell Edge Gateway 5000 we will be introducing support to natively flash UEFI firmware under Linux.  To achieve this we’re supporting the standards based UEFI capsule functionality from UEFI version 2.5.  Furthermore, the entire tool chain used to do this is open source.

Red Hat has developed the tools that enable this functionality: fwupd, fwupdate, & ESRT support in the Linux kernel.  For the past year we have been working closely with Red Hat, Intel, & Canonical to jointly fix hundreds of issues related to the architecture, tools, process, and metadata on real hardware. 

Dell will be publishing BIOS updates to the Red Hat created Linux Vendor Firmware Service (LVFS).  Red Hat provides LVFS as a central OS agnostic repository for OEMs to distribute firmware to all Linux customers.

Dell will be shipping the Dell Edge Gateway 5000 with Ubuntu Snappy and Intel Wind River IDP.  Both will include the tools natively in our preloaded factory image.  The best part of choosing a standards based solution however is that the tools will work on any modern Linux distribution.  If you choose not to use our preloaded OS, you will still be able to install these tools and take advantage of this functionality.  They’re already available in Fedora 23, Debian Unstable, and Ubuntu 15.04+.

This work is a continuation of Dell’s commitment over the past two years to the EFI tools in Debian. From this effort, the Debian EFI team was formed to ensure that the entire Debian/Ubuntu UEFI flashing toolchain is rock-solid and can support firmware updates out of the box on Dell hardware.

As I mentioned, the Dell Edge Gateway 5000 is just the first system we’ll be supporting this technology with. We are looking forward to expanding it to other Dell hardware. If you can fill out an anonymous survey, it will help us decide which platforms to focus on next.

If you would like to learn more about this technology, here are the relevant pieces of the toolchain and a high level overview of what they do:

Original URL:  

Original article

Clever New GitHub Tool Lets Coders Build Software Like Bridges


GitHub’s new tool lets coders rebuild old software from scratch without ever turning off the switch.

The post Clever New GitHub Tool Lets Coders Build Software Like Bridges appeared first on WIRED.

Original URL:  

Original article
