Google’s New Interactive E-Books Would Be Impossible to Print

Google and Visual Editions created a new kind of interactive mobile book.

The post Google’s New Interactive E-Books Would Be Impossible to Print appeared first on WIRED.


LL.M. in International Human Rights and Humanitarian Law Welcomes New Students to Hybrid Program

The Academy on Human Rights and Humanitarian Law has welcomed new students to its LL.M. programs this semester. The Academy began the English LL.M. in International Human Rights and Humanitarian Law in the Spring of 2015 and will be graduating its
first cohort of students this summer. The Spanish program, launched this year, contains the same structure and content as the English program.


Allowing Matlab to Talk to Rust

As part of my PhD I write a lot of Matlab.
I am not particularly fond of Matlab but because I need to collaborate and work
on older code bases I don’t have much choice in the matter.
All that aside, sometimes in Matlab you’ll find that things can get a little slow.
In my field of computational electrodynamics this is typically around the time
we need to calculate a matrix of inductance contributions with 100 million elements.
Because code like this is hard to vectorise we tend to write code in other languages
like C to speed up that section of code. In Matlab land this is typically referred
to as writing a Mex file. Mex files can be written in C, C++, Fortran and possibly others.

As a fan of Rust I wanted to see what I could do to get Rust and Matlab
talking to one another. It took some time to work out how it could all fit together
but I have ported one of the older C Mex files to Rust and successfully linked it
to Matlab. While I can’t share the specifics of that code here I will work through
a working example of sending data from Matlab to Rust and back again. Let’s get started.

My Setup

Just in case something falls apart with updates this is my current setup:

  • Matlab R2015b
  • Mac OSX 10.11.3
  • rustc 1.5.0
  • cargo 0.7.0

Step 1 – Writing a static library in Rust

This step is fairly well documented but for completeness I will start from the
beginning. I assume you have all of the components above installed and Matlab
knows what C compiler you will be using.

So let’s create an empty Rust project.

cargo new rustlab
cd rustlab

Open up your Cargo.toml file and add the following lines to the end.

[dependencies]
libc = "0.2"

[lib]
name = "rustlab"
crate-type = ["staticlib"]

We ask for libc so we can send data between Matlab and Rust; we also declare this project as a library and say we want to build a static library. As it stands we need to create a static library which will be linked to a C file that wraps the Matlab Mex calling interface; more on that later.

Now to write our functions. In this example we will be writing a function which takes two vectors and does an element wise multiplication on their elements. Something Matlab already does, and does quickly, but enough of an example to show the process of getting data sent both ways.

In our Rust code we will be writing two functions.
One, which we will call multiply_safe is where we will use only Rust variables and do the actual computation.
The other multiply will be a wrapper, and our interface to C which takes C types, populates Rust variables with them, calls our multiply_safe function and then passes the result back as a C type.
This way we keep our Rust code and the glue separate as much as possible,
allowing us to test our Rust code on its own without having to set up all the glue each time.

Step 2 – A simple function

So let’s get going with implementing multiply_safe and writing a simple test to ensure it is doing what we want.

First open up src/lib.rs and add the following function to the file:

fn multiply_safe(a : Vec<f64>, b : Vec<f64>) -> Vec<f64> {
    if a.len() != b.len() {
        panic!("The two vectors differ in length!");
    }

    let mut result : Vec<f64> = vec![0f64; a.len()];
    for i in 0..a.len() {
        result[i] = a[i]*b[i];
    }

    return result;
}

Nice and simple, there should be no surprises here. Because it is good practice we will also write a test to ensure the results are indeed what we expect. Edit the default it_works test to look like the following:

#[test]
fn it_works() {
    let a : Vec<f64> = vec![1f64, 2f64, 3f64];
    let b : Vec<f64> = vec![3f64, 2f64, 1f64];
    let c : Vec<f64> = multiply_safe(a, b);
    let expected : Vec<f64> = vec![3f64, 4f64, 3f64];

    assert!(c.len() == expected.len());
    for i in 0..c.len() {
        assert!(c[i] == expected[i]);
    }
}

Now if you run cargo test you should see our test passing. The other thing we now have is a file called librustlab.a in the target/debug folder. It won’t do anything right now because we haven’t written the multiply function but this is where our library will end up.
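As an aside (my own addition, not from the original post), the same element-wise product can be written without manual indexing by zipping iterators; the name multiply_safe_iter is hypothetical:

```rust
// A sketch of a more idiomatic variant of multiply_safe using iterators.
// `multiply_safe_iter` is a hypothetical name, not from the original post.
fn multiply_safe_iter(a: &[f64], b: &[f64]) -> Vec<f64> {
    assert_eq!(a.len(), b.len(), "The two vectors differ in length!");
    a.iter().zip(b.iter()).map(|(x, y)| x * y).collect()
}

fn main() {
    let c = multiply_safe_iter(&[1.0, 2.0, 3.0], &[3.0, 2.0, 1.0]);
    assert_eq!(c, vec![3.0, 4.0, 3.0]);
}
```

Taking slices rather than owned Vecs also means the function no longer consumes its inputs, which makes the vectors reusable afterwards.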

Step 3 – A C wrapper

Now let’s look at the function that makes the link between C and Rust. As I mentioned before we will be writing a C file which acts as a middleman between Matlab and Rust. This is clearly not ideal but currently it appears to be the most straightforward way to get things working.

Before we write this new function let’s add the libc requirements to the top of the file.

extern crate libc;
use libc::{c_double, c_long};

The C function will be passing us double pointers so we use Rust’s c_double
type, and c_long to pass the length of the two arrays. Next, to create our exported function, add the following.

#[no_mangle]
pub extern fn multiply(a_double : *mut c_double, 
                    b_double : *mut c_double, 
                    c_double : *mut c_double,
                    elements : c_long) {
    let size : usize = elements as usize;
    let mut a : Vec<f64> = vec![0f64; size];
    let mut b : Vec<f64> = vec![0f64; size];

    for i in 0..size {
        unsafe {
            a[i] = *(a_double.offset(i as isize)) as f64;
            b[i] = *(b_double.offset(i as isize)) as f64;
        }
    }

    let c : Vec<f64> = multiply_safe(a, b);

    for i in 0..size {
        unsafe {
            *c_double.offset(i as isize) = c[i];
        }
    }
}

Some things to note here. First up, #[no_mangle]: this tells Rust to keep the
function name the same so we can link to it; typically names get mangled to
reduce the risk of two things having the same name. Next, pub extern is needed
to make sure the function is publicly exported. The last thing worth noting is
the unsafe block. We use one of these to mark where potentially unsafe things
might happen (like dereferencing raw pointers). Hopefully the rest of the function
is clear enough. Build the code again and we will move on to writing the final
piece of the puzzle.
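As a side note (a sketch of my own, not how the original port works), the element-by-element copies in the wrapper can be avoided by viewing the C pointers as Rust slices with std::slice::from_raw_parts; the function name multiply_via_slices is hypothetical:

```rust
use std::slice;

// Hypothetical alternative wrapper body: build slices over the raw C
// pointers instead of copying element by element. Like the original
// wrapper, it assumes each pointer is valid for `size` elements.
fn multiply_via_slices(a_ptr: *const f64, b_ptr: *const f64, c_ptr: *mut f64, size: usize) {
    unsafe {
        let a = slice::from_raw_parts(a_ptr, size);
        let b = slice::from_raw_parts(b_ptr, size);
        let c = slice::from_raw_parts_mut(c_ptr, size);
        for i in 0..size {
            c[i] = a[i] * b[i];
        }
    }
}

fn main() {
    let a = vec![1.0, 2.0, 3.0];
    let b = vec![3.0, 2.0, 1.0];
    let mut c = vec![0.0f64; 3];
    multiply_via_slices(a.as_ptr(), b.as_ptr(), c.as_mut_ptr(), 3);
    assert_eq!(c, vec![3.0, 4.0, 3.0]);
}
```

This avoids allocating the intermediate Vecs entirely, at the cost of keeping the pointer arithmetic implicit in the slice construction.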

Step 4 – Some C code

Create a new file rustlab.c and add the following code. If you have used the
mex interface before it should be pretty straightforward.

#include "mex.h"

// Multiplies a and b element wise, and puts the result in c
extern void multiply(double* a, double* b, double* c, long elements);

void mexFunction(int nlhs, mxArray *plhs[], 
        int nrhs, const mxArray *prhs[]) {
    double* a;
    double* b;
    double* c;

    mwSize elements;

    if (nrhs != 2) {
        mexErrMsgTxt("Wrong number of input args");
    }

    if (nlhs != 1) {
        mexErrMsgTxt("Wrong number of output args");
    }

    a = mxGetPr(prhs[0]);
    b = mxGetPr(prhs[1]);
    elements = mxGetM(prhs[0]);

    plhs[0] = mxCreateDoubleMatrix(elements, 1, mxREAL);
    c = mxGetPr(plhs[0]);

    multiply(a, b, c, elements);
}

Now that both of these are done, open up Matlab and move to a folder that
contains both librustlab.a and rustlab.c. While in Matlab run the following.

mex rustlab.c librustlab.a
a = 1:10;
c = rustlab(a', a');

Woohoo! You did it. Hopefully this made things clear and helps in some way.
While I am probably not considered an expert in Rust I am happy to help out
if you get in touch with me via Twitter @smitec.

I plan over time to move more of my mex files to Rust. It offers a lot in terms
of safety and high level concepts that can be more difficult to achieve in C.

All of the code related to this post is available on GitHub as rustlab,
so if things are broken on your system feel free to post an issue or pull request.


Dragula: Drag and drop so simple it hurts




Drag and drop so simple it hurts

Browser support includes every sane browser and IE7+. (Granted, you polyfill the functional ES5 Array methods.)

Framework support includes vanilla JavaScript, Angular, and React.


Try out the demo!

Have you ever wanted a drag and drop library that just works? That doesn’t just depend on bloated frameworks, that has great support? That actually understands where to place the elements when they are dropped? That doesn’t need you to do a zillion things to get it to work? Well, so did I!

  • Super easy to set up
  • No bloated dependencies
  • Figures out sort order on its own
  • A shadow where the item would be dropped offers visual feedback
  • Touch events!
  • Seamlessly handles clicks without any configuration

You can get it on npm.

npm install dragula --save

Or bower, too.

bower install dragula --save

Or a CDN.

<script src='$VERSION/dragula.min.js'></script>

If you’re not using either package manager, you can use dragula by downloading the files in the dist folder. We strongly suggest using npm, though.

Including the JavaScript

There’s a caveat to dragula. You shouldn’t include it in the <head> of your web applications. It’s bad practice to place scripts in the <head>, and as such dragula makes no effort to support this use case.

Place dragula in the <body>, instead.

Including the CSS!

There are a few CSS styles you need to incorporate in order for dragula to work as expected.

You can add them by including dist/dragula.css or dist/dragula.min.css in your document. If you’re using Stylus, you can include the styles using the directive below.

@import 'node_modules/dragula/dragula'

Dragula provides the easiest possible API to make drag and drop a breeze in your applications.

dragula(containers?, options?)

By default, dragula will allow the user to drag an element in any of the containers and drop it in any other container in the list. If the element is dropped anywhere that’s not one of the containers, the event will be gracefully cancelled according to the revertOnSpill and removeOnSpill options.

Note that dragging is only triggered on left clicks, and only if no meta keys are pressed.

The example below allows the user to drag elements from left into right, and from right into left.

dragula([document.querySelector('#left'), document.querySelector('#right')]);

You can also provide an options object. Here’s an overview of the default values.

dragula(containers, {
  isContainer: function (el) {
    return false; // only elements in drake.containers will be taken into account
  },
  moves: function (el, source, handle, sibling) {
    return true; // elements are always draggable by default
  },
  accepts: function (el, target, source, sibling) {
    return true; // elements can be dropped in any of the `containers` by default
  },
  invalid: function (el, target) {
    return false; // don't prevent any drags from initiating by default
  },
  direction: 'vertical',             // Y axis is considered when determining where an element would be dropped
  copy: false,                       // elements are moved by default, not copied
  copySortSource: false,             // elements in copy-source containers can be reordered
  revertOnSpill: false,              // spilling will put the element back where it was dragged from, if this is true
  removeOnSpill: false,              // spilling will `.remove` the element, if this is true
  mirrorContainer: document.body,    // set the element that gets mirror elements appended
  ignoreInputTextSelection: true     // allows users to select input text, see details below
});

You can omit the containers argument and add containers dynamically later on.

var drake = dragula({
  copy: true
});

You can also set the containers from the options object.

var drake = dragula({ containers: containers });

You could also call dragula without any arguments, which defaults to a drake without containers and with the default options.

The options are detailed below.

options.containers


Setting this option is effectively the same as passing the containers in the first argument to dragula(containers, options).

options.isContainer


Besides the containers that you pass to dragula, or the containers you dynamically push or unshift from drake.containers, you can also use this method to specify any sort of logic that defines what is a container for this particular drake instance.

The example below dynamically treats all DOM elements with a CSS class of dragula-container as dragula containers for this drake.

var drake = dragula({
  isContainer: function (el) {
    return el.classList.contains('dragula-container');
  }
});

options.moves


You can define a moves method which will be invoked with (el, source, handle, sibling) whenever an element is clicked. If this method returns false, a drag event won’t begin, and the event won’t be prevented either. The handle element will be the original click target, which comes in handy to test if that element is an expected “drag handle”.

options.accepts


You can set accepts to a method with the following signature: (el, target, source, sibling). It’ll be called to make sure that an element el, that came from container source, can be dropped on container target before a sibling element. The sibling can be null, which would mean that the element would be placed as the last element in the container. Note that if options.copy is set to true, el will be set to the copy, instead of the originally dragged element.

Also note that the position where a drag starts is always going to be a valid place where to drop the element, even if accepts returned false for all cases.

options.copy


If copy is set to true (or a method that returns true), items will be copied rather than moved. This implies the following differences:

Event Move Copy
drag Element will be concealed from source Nothing happens
drop Element will be moved into target Element will be cloned into target
remove Element will be removed from DOM Nothing happens
cancel Element will stay in source Nothing happens

If a method is passed, it’ll be called whenever an element starts being dragged in order to decide whether it should follow copy behavior or not. Consider the following example.

copy: function (el, source) {
  return el.className === 'you-may-copy-us';
}

options.copySortSource


If copy is set to true (or a method that returns true) and copySortSource is true as well, users will be able to sort elements in copy-source containers.

copy: true,
copySortSource: true

options.revertOnSpill


By default, spilling an element outside of any containers will move the element back to the drop position previewed by the feedback shadow. Setting revertOnSpill to true will ensure elements dropped outside of any approved containers are moved back to the source element where the drag event began, rather than stay at the drop position previewed by the feedback shadow.

options.removeOnSpill


By default, spilling an element outside of any containers will move the element back to the drop position previewed by the feedback shadow. Setting removeOnSpill to true will ensure elements dropped outside of any approved containers are removed from the DOM. Note that remove events won’t fire if copy is set to true.

options.direction


When an element is dropped onto a container, it’ll be placed near the point where the mouse was released. If the direction is 'vertical', the default value, the Y axis will be considered. Otherwise, if the direction is 'horizontal', the X axis will be considered.

options.invalid


You can provide an invalid method with a (el, target) signature. This method should return true for elements that shouldn’t trigger a drag. Here’s the default implementation, which doesn’t prevent any drags.

function invalidTarget (el, target) {
  return false;
}

Note that invalid will be invoked on the DOM element that was clicked and every parent up to immediate children of a drake container.

As an example, you could set invalid to return true whenever the clicked element (or any of its parents) is an anchor tag.

invalid: function (el) {
  return el.tagName === 'A';
}

options.mirrorContainer


The DOM element where the mirror element displayed while dragging will be appended to. Defaults to document.body.

options.ignoreInputTextSelection


When this option is enabled, if the user clicks on an input element the drag won’t start until their mouse pointer exits the input. This translates into the user being able to select text in inputs contained inside draggable elements, and still drag the element by moving their mouse outside of the input — so you get the best of both worlds.

This option is enabled by default. Turn it off by setting it to false. If it’s disabled your users won’t be able to select text in inputs within dragula containers with their mouse.

API


The dragula method returns a tiny object with a concise API. We’ll refer to the API returned by dragula as drake.

drake.containers


This property contains the collection of containers that was passed to dragula when building this drake instance. You can push more containers and splice old containers at will.

drake.dragging


This property will be true whenever an element is being dragged.

drake.start(item)


Enter drag mode without a shadow. This method is most useful when providing complementary keyboard shortcuts to an existing drag and drop solution. Even though a shadow won’t be created at first, the user will get one as soon as they click on an item and start dragging it around. Note that if they click and drag something else, .end will be called before picking up the new item.

drake.end()


Gracefully end the drag event as if using the last position marked by the preview shadow as the drop target. The proper cancel or drop event will be fired, depending on whether the item was dropped back where it was originally lifted from (which is essentially a no-op that’s treated as a cancel event).

drake.cancel(revert)


If an element managed by drake is currently being dragged, this method will gracefully cancel the drag action. You can also pass in revert at the method invocation level, effectively producing the same result as if revertOnSpill was true.

Note that a “cancellation” will result in a cancel event only in the following scenarios.

  • revertOnSpill is true
  • Drop target (as previewed by the feedback shadow) is the source container and the item is dropped in the same position where it was originally dragged from

drake.remove()


If an element managed by drake is currently being dragged, this method will gracefully remove it from the DOM.

drake.on (Events)

The drake is an event emitter. The following events can be tracked using drake.on(type, listener):

Event Name Listener Arguments Event Description
drag el, source el was lifted from source
dragend el Dragging event for el ended with either cancel, remove, or drop
drop el, target, source, sibling el was dropped into target before a sibling element, and originally came from source
cancel el, container, source el was being dragged but it got nowhere and went back into container, its last stable parent; el originally came from source
remove el, container, source el was being dragged but it got nowhere and it was removed from the DOM. Its last stable parent was container, and originally came from source
shadow el, container, source el, the visual aid shadow, was moved into container. May trigger many times as the position of el changes, even within the same container; el originally came from source
over el, container, source el is over container, and originally came from source
out el, container, source el was dragged out of container or dropped, and originally came from source
cloned clone, original, type DOM element original was cloned as clone, of type ('mirror' or 'copy'). Fired for mirror images and when copy: true

drake.destroy()


Removes all drag and drop events used by dragula to manage drag and drop between the containers. If .destroy is called while an element is being dragged, the drag will be effectively cancelled.

CSS


Dragula uses only four CSS classes. Their purpose is quickly explained below, but you can check dist/dragula.css to see the corresponding CSS rules.

  • gu-unselectable is added to the mirrorContainer element when dragging. You can use it to style the mirrorContainer while something is being dragged.
  • gu-transit is added to the source element when its mirror image is dragged. It just adds opacity to it.
  • gu-mirror is added to the mirror image. It handles fixed positioning and z-index (and removes any prior margins on the element). Note that the mirror image is appended to the mirrorContainer, not to its initial container. Keep that in mind when styling your elements with nested rules, like .list .item { padding: 10px; }.
  • gu-hide is a helper class to apply display: none to an element.

See contributing.markdown for details.

There’s now a dedicated support channel in Slack. Visit this page to get an invite. Support requests won’t be handled through the repository anymore.



High Salaries Haunt Some Job Hunters

Feb. 4, 2016 8:07 p.m. ET

After more than 20 years as an electronics engineer, Pete Edwards reached the low six-figure pay level. Now, as he looks for a job following a layoff, he finds that salary success a burden.

Although his experience includes the sought-after field of 3-D printing, the 53-year-old hasn’t been able to land a permanent full-time job. Time and again, he says, employers seem to lose interest after he answers a question that they ask early on: “What was your last salary?”

That question comes up sooner than ever nowadays. Hiring managers used to broach salary history or requirements only in later stages, after applicants had a chance to make an impression and state their case.

Today, pay increasingly is mentioned early in the process, either as a required field in online applications—which are used more often—or during initial interviews, say recruiters, compensation consultants and job seekers.

The shift is vexing applicants, mostly those of a certain age and pay level, who are concerned that a salary they worked to attain now gets in the way of having a job at all. “I’m unemployable now as a result of getting to the top of the tree,” Mr. Edwards lamented.

Josh Rock, a recruiter at Fairview Health Services, a 20,000-employee health system in Minnesota, said that during the last recession, recruiters used compensation queries as a quick way to cull the large numbers of candidates for open jobs. The habit has stuck, he said. “Why not figure out what’s going on sooner in the process than doing a dance?”

Human-resources executives say asking about pay right off the bat helps contain compensation costs, ensures that candidates have reasonable expectations and spares recruiters from chasing prospects they can’t afford.

“Unfortunately, some clients use salary as a pre-screening question,” said Susan Vitale, chief marketing officer at iCIMS Inc., a provider of recruiting software in Matawan, N.J. “So if the role tops out at $55,000 and they say they want $60,000, it might knock the candidate out of consideration” even if the person would be open to salary negotiations.

Screening candidates this way may be a factor in wage stagnation, some analysts suggest. Average hourly earnings rose 2.5% in 2015, modest by historical standards. Wage growth has averaged only about 2% for the past five years.

Focusing on compensation history “holds down wages because now the jobs are being filled by people with lower salary expectations,” said Thomas Kochan, a professor of employment research at the Massachusetts Institute of Technology’s Sloan School of Management. “We have a whole generation of people who are permanently adversely affected.”

Though hiring tactics have received little attention in the economic debate about wage stagnation, Mr. Kochan said they could have profound effects: “The decisions of firms individually are…creating collectively this macro phenomenon of stagnation,” yet are hard to measure because they are shrouded in secrecy.

U.S. employers continue to hold the line on wages despite six years of economic recovery and an unemployment rate of 5%. Finance chiefs are “probably looking ahead and saying they want to keep the escalation of labor costs from going up in a way that will put pressure on earnings,” said Ajit Kambil, global research director of Deloitte’s CFO Program.

In Deloitte’s most recent quarterly survey, 47% of chief financial officers said they plan to work to lower or control labor costs this year, by taming compensation growth, reducing benefit costs or other means. Moreover, employers may feel they can lowball applicants because they believe there is still a surplus of qualified candidates.

“Workers are still a little discounted” in most fields, said Linda Barrington, executive director of the Institute for Compensation Studies at Cornell University’s ILR School. “Employers won’t pay what the last person in the job was paid because labor is now on sale.”

Steve Carpinelli recently applied for a public-relations position with a nonprofit organization in Washington, D.C. The role called for a minimum of five-to-seven years of experience. He has more than 14.

Mr. Carpinelli’s pay reached high five figures before the 45-year-old switched to the generally lower-paying field of nonprofits. While preparing for a phone interview with the Washington organization, he discovered that the last person in the job earned $101,000. So when asked early on about his salary expectations, he put his range squarely around what the last employee earned, seeking $85,000 to $110,000.

“After that, the conversation was very robotic, not a two-way conversation about what they’re truly looking for,” Mr. Carpinelli said. “I definitely got the impression that I’d priced myself out.”

In his experience, “there has been a definite shift or emphasis on beginning the conversation with: ‘What is your salary range?’” Mr. Carpinelli said. “I was always told you never talk about salary until you’re given an offer. But I’ve noticed the salary-range question comes up far earlier in the conversation.”

The organization ultimately hired a young woman with five years’ experience. Mr. Carpinelli is still looking for a permanent job.

Older job seekers sometimes see such outcomes as evidence of bias. But “employers can make financial decisions and it’s not necessarily age discrimination,” said Raymond Peeler, a senior attorney-advisor at the Equal Employment Opportunity Commission. “What an employee would have to prove…is that the employer is using the salary level as a proxy to disqualify all the older applicants.”

Businessolver Inc., a benefits-administration firm in West Des Moines, Iowa, recently hired more than 100 people for its Denver office. Human-resources staffers ask about compensation in the middle of a six-step hiring process and use the answers to gauge applicants’ “level of reality,” said Marcy Klipfel, senior vice president of HR. “A lot of times what you’re looking at is are we going to waste time and get to the end of the process, and it turns out the person is way out of our range?”

A majority of workers take a salary cut when they get a new job after a stretch of unemployment, but those over 45 usually take a bigger hit than workers under 35 years of age, according to research from Ms. Barrington and a Cornell colleague, Hassan Enayati.

A survey by AARP last year found that of job seekers between 45 and 70 years old who found work after a spell of unemployment, nearly half earned less than before.

Some employers hesitate to hire at far below a past salary, concerned that the employee would resent earning so much less. “If someone wants $100,000 and settles for $75,000, they’re not going to be happy,” said Steve Gross, a compensation specialist and senior partner at consulting firm Mercer.

Workers, however, say they would like the chance to decide for themselves.

“The presumption that I would walk into a job and get $150,000 is not there,” said Rosemary Lynch Kelleher, a baby boomer who has earned at that level during her 25-year career in international trade policy, and has been looking for a permanent job for several years.

“I realize very clearly that it’s not there. And I would take something for $100,000 or $75,000.”

In Austin, a woman who lost her six-figure position as a data architect in 2014 but recently landed a job, said she had been tempted to say she earned $60,000 to improve her chances of getting hired.

While she was searching, the 63-year-old said: “I hate putting down what I want” in salary. “If you put down too much, they think you’re expensive. If you don’t put down enough, they think you’re undervaluing yourself.”

Much of this ambiguity could be avoided if employers published a pay range for positions, but they don’t want to tip their hands. So experts suggest job seekers research market rates for particular positions and try to finesse salary questions.

“Say, ‘I’m open to a salary commensurate with the job,” recommended Blake Nations, a former recruiter who was laid off and then founded “And if they keep going, ask: ‘What do you expect to pay someone with my experience and education for this position?’ ”

Some applicants, faced with a salary-history question they fear would exclude them from the start, have toyed with putting a bogus number in a required field in an online form.

Mr. Edwards, the electronics engineer, says he tried that once. Not hearing back from the company, he contacted its HR department and was told he was too expensive. That baffled him because he had listed $1,000 as his previous pay. It turned out HR had changed that to $100,000, assuming it was a mistake.

Write to Lauren Weber at


How to boot a USB key in VirtualBox

usb boot

VirtualBox is an amazing virtualization tool, ideal for all kinds of software testing situations — unless they involve booting from USB, where there’s no direct support at all.

There’s a workaround which will sort-of solve the problem, no additional software required, but it’s awkward and inflexible. Virtual Machine USB Boot is an interesting alternative, an open-source portable tool which makes it much easier to boot USB keys in both VirtualBox and QEMU.

Setup is simple. We clicked “Add”, entered a name for our project, chose the VirtualBox VM to be launched (these were automatically detected and presented in a list), and the USB drive to boot from.

There’s no need to do anything else, although the program does offer one or two other tweaks — alternative load methods, run minimized/ full screen options, CPU priority choices — for anyone interested.

Every boot item you create is added to a list. Double-click one and it dismounts the USB key from your PC, adjusts the VM’s settings to include it, then launches the VM for you.

This worked perfectly for us, with our test VM correctly booting from the USB drive. This wasn’t then available to our host operating system – you can access it from one system, or the other, not both at the same time — but once we closed the VM, Virtual Machine USB Boot mounted the key and we were able to use it again.

You might not be so lucky, depending on how your VM is set up. Virtual Machine USB Boot tries to add your key to the first available port in the VirtualBox storage controller, for instance, but if there’s no port available this will fail. It also won’t work if there’s a prior port with an HDD, or your VM isn’t set up to boot from HDD.

Don’t let that put you off, though — the program won’t harm your existing setup, it should work just fine with most VMs, and if you do run into any problems then they’re likely to be easily resolved.

Virtual Machine USB Boot is an open source tool for Windows XP — 8.1.


What’s new on Drupal.org – January 2016

Look at our Roadmap highlighting how this work falls into our priorities, set by the Drupal Association staff with direction from the Board and collaboration with the community.

Drupal.org Updates

Following the Conversation

One of the most requested features from a wide swath of the community has been a better way to follow content on Drupal.org and receive email notifications. The issue queues have had this follow functionality for some time, but the implementation was quite specific to issues, and not easily extensible to the rest of the site.

Because of the volume of content on Drupal.org, we have to be careful that our implementation will scale well. We now use a notification system based on the Message stack, which functions much more generically and can therefore be applied to many content types on Drupal.org.

Follow functionality is now available for comments on Forum topics, Posts (like this one), Case Studies, and documentation Book Pages.

In the future we intend to extend this follow functionality to include notification of new revisions (for relevant content types, particularly documentation).

Community Elections for the Board

Nominations for the position of At-Large Director from the community are now open. There are two of these positions on the board, each elected on alternating years. For this year’s elections process we’ve made several small refinements:

  • Candidates are no longer required to display their real names on their candidate profile. We will now default to the username.
  • Candidates do not have to provide a photo; we will default to a generic avatar.
  • There is now an elections landing page with complete details about the elections process.

We encourage members of the community to nominate themselves!

Enhancements

A number of smaller enhancements made it into the January sprints as well. One of the key ones was the ability to configure an arbitrary one-off test in the issue queues against a custom branch. This is a small step towards ensuring that the DrupalCI testing framework will support the wider testing matrix required for feature branching, so that Drupal can always be shippable.

We also spent some time in January reviewing the results of the documentation survey that was placed on all existing documentation pages on the site. This information is helping to inform the next big item on the roadmap – an improved Documentation section on Drupal.org.

Finally, we’ve continued our battle against spam with the help of Technology Supporter, Distil Networks. We’ve seen some very promising results in initial trials to prevent spam account registrations from happening in the first place, and will continue to work on refining our integration.

Sustaining support and maintenance

DrupalCon New Orleans Full Site Launched!

In January we also launched the full site for DrupalCon New Orleans with registration and the call for papers. As part of this launch, Drupal.org now supports multiple, simultaneous event registrations with multiple currencies, payment processors, and invoice formats. This was a significant engineering lift, but has made Drupal.org even more robust.

DrupalCon New Orleans is happening from May 9-13th, and will be the first North American DrupalCon after the release of Drupal 8!

DrupalCon Dublin

The next European DrupalCon will also be here before you know it, and we’ve been working with the local community and our designer to update the DrupalCon Dublin splash page with a new logo that we will carry through into the design for the full-site once that is ready to launch.

Permissions for Elevated Users

In January we also focused on auditing the users with elevated privileges on Drupal.org, both to ensure that they had the permissions they needed, and to enforce our principle of least access. Users at various levels of elevated privileges were contacted to see if those privileges were still needed, and if not, the privileged roles were removed.

The following privileges were also fixed or updated: webmasters can now view a user’s public SSH keys; content moderators can administer comments and block spam users without user-profile editing privileges. We also fixed taxonomy vocabulary access, so both content moderators and webmasters can now edit tags in various vocabularies such as Issue tags, giving more community members access to clean those up and fight duplicates or unused tags.

Updates.Drupal.org traffic now redirects to HTTPS

SSL is now the default for update traffic from Drupal.org and for Updates.Drupal.org itself. This helps to enforce a best practice of using SSL wherever possible, and helps to address an oblique attack surface where a man-in-the-middle could potentially hijack an update for someone running their Drupal installation on an unprotected network (i.e. development environments on a personal laptop in a coffee shop).

Devwww2 Recovery

Drupal.org pre-production environments were affected by some instability in January, particularly the devwww2 server. A combination of a hard restart after losing a NIC on the machine and some file-system-level optimizations in the database containers led to corruption in the dev-site databases. Drupal.org infrastructure engineers restored the system and recovered the critical dev sites, and while some instability continues, the system has been recovering more cleanly as they work to resolve the issue permanently.


As always, we’d like to say thanks to all the volunteers who work with us, and to the Drupal Association Supporters, who made it possible for us to work on these projects.

Follow us on Twitter for regular updates: @drupal_org, @drupal_infra


Docker 1.10: New Compose file, improved security, networking and much more

We’re pleased to announce Docker 1.10, jam-packed with stuff you’ve been asking for.

It’s now much easier to define and run complex distributed apps with Docker Compose. The power that Compose brought to orchestrating containers is now available for setting up networks and volumes. On your development machine, you can set up your app with multiple network tiers and complex storage configurations, replicating how you might set it up in production. You can then take that same configuration from development, and use it to run your app on CI, on staging, and right through into production. Check out the blog post about the new Compose file to find out more.
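As a sketch of what that looks like, the version 2 Compose file format introduced with this release adds top-level networks and volumes keys alongside your services (the service names, images, and volume name below are hypothetical):

```yaml
version: '2'

services:
  web:
    build: .
    networks:
      - front
      - back
  db:
    image: postgres:9.4
    networks:
      - back
    volumes:
      - db-data:/var/lib/postgresql/data

# Named networks and volumes are declared at the top level,
# then referenced by the services that use them.
networks:
  front:
  back:

volumes:
  db-data:
```

Running docker-compose up with a file like this creates the networks and the named volume if they don’t already exist, and attaches each container only to the networks it lists.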

As usual, we’ve got a load of security updates in this release. All the big features you’ve been asking for are now available to use: user namespacing for isolating system users, seccomp profiles for filtering syscalls, and an authorization plugin system for restricting access to Engine features. Check out the blog post for all the details.

Another big security enhancement is that image IDs now represent the content that is inside an image, in a similar way to how Git commits represent the content inside commits. This means you can guarantee that the content you’re running is what you expect by just specifying that image’s ID. When upgrading to Engine 1.10, there is a migration process that could take a long time, so take a read of the documentation if you want to prevent downtime.
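The general technique can be sketched in a few lines of Python (an illustration of content addressing in the style described above, not Docker’s actual implementation): the ID is a cryptographic hash of the content itself, so identical content always produces the same ID, and any change produces a different one.

```python
import hashlib

def content_id(content: bytes) -> str:
    """Derive an identifier from the content itself via SHA-256."""
    return "sha256:" + hashlib.sha256(content).hexdigest()

layer = b"FROM debian\nRUN apt-get install -y curl\n"

id_a = content_id(layer)
id_b = content_id(layer)           # same bytes -> same ID
id_c = content_id(layer + b"\n")   # any change -> different ID

print(id_a == id_b)  # True
print(id_a == id_c)  # False
```

Because the ID is derived from the bytes rather than assigned at build time, anyone who computes the same ID is guaranteed (up to hash collisions) to be holding identical content.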

Networking gets even better

We added a new networking system in the previous version of Docker Engine. It allowed you to create virtual networks and attach containers to them so you could create the network topology that was best for your application. In addition to the support in Compose, we’ve added some other top requested features:

  • Use links in networks: Links work in the default bridge network as they have always done, but you couldn’t use them in networks that you created yourself. We’ve now added support for this so you can define the relationships between your containers and alias a hostname to a different name inside a specific container (e.g. --link db:production_postgres)
  • Network-wide container aliases: Links let you alias a hostname for a specific container, but you can now also make a container accessible by multiple hostnames across an entire network.
  • Internal networks: Pass the --internal flag to network create to restrict traffic in and out of the network.
  • Custom IP addresses: You can now give a container a custom IP address when running it or adding it to a network.
  • DNS server for name resolution: Hostname lookups are done with a DNS server rather than /etc/hosts, making it much more reliable and scalable. Read the feature proposal and discussion.
  • Multi-host networking on all supported Engine kernel versions: The multi-host overlay driver now works on older kernel versions (3.10 and greater).

Engine 1.10

Apart from the new security and networking features, we’ve got a whole load of new stuff in Engine:

  • Content addressable image IDs: Image IDs now represent the content that is inside an image, in a similar way to how Git commit hashes represent the content inside commits. This means you can guarantee that the content you’re running is what you expect by just specifying that image’s ID. This is an improvement upon the image digests in Engine 1.6. There is a migration process for your existing images which might take a long time, so take a read of the documentation if you want to prevent downtime.
  • Better event stream: The docker events command and events API endpoint has been improved and cleaned up. Events are now consistently structured with a resource type and the action being performed against that resource, and events have been added for actions against volumes and networks. Full details are in the docs.
  • Improved push/pull performance and reliability: Layers are now pushed in parallel, resulting in much faster pushes (as much as 3x faster). Pulls are a bit faster and more reliable too – with a streamlined protocol and better retry and fallback mechanisms.
  • Live update container resource constraints: When setting limits on what resources containers can use (e.g. memory usage), you had to restart the container to change them. You can now update these resource constraints on the fly with the new docker update command.
  • Daemon configuration file: It’s now possible to configure daemon options in a file and reload some of them without restarting the daemon so, for example, you can set new daemon labels and enable debug logging without restarting anything.
  • Temporary filesystems: It’s now really easy to create temporary filesystems by passing the --tmpfs flag to docker run. This is particularly useful for running a container with a read-only root filesystem when the piece of software inside the container expects to be able to write to certain locations on disk.
  • Constraints on disk I/O: Various options for setting constraints on disk I/O have been added to docker run: --device-read-bps, --device-write-bps, --device-read-iops, --device-write-iops, and --blkio-weight-device.
  • Splunk logging driver: Ship container logs straight to the Splunk logging service.
  • Start linked containers in correct order when restarting daemon: This is a little thing, but if you’ve run into it you’ll know what a headache it is. If you restarted a daemon with linked containers, they sometimes failed to start up if the linked containers weren’t running yet. Engine will now attempt to start up containers in the correct order.
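The daemon configuration file mentioned above is read, by default, from /etc/docker/daemon.json on Linux; a minimal sketch (the label values here are hypothetical) might look like:

```json
{
  "debug": true,
  "labels": ["environment=staging", "rack=a1"]
}
```

Per the note above, some of these options, such as labels and debug, can then be reloaded without restarting the daemon or the containers it is running.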

Check out the release notes for the full list. There are a few features being deprecated in this release, and we’re ending support for Fedora 21 and Ubuntu 15.04, so be sure to check the release notes in case you’re affected by this. If you have written a volume plugin, there’s also a change in the volume plugin API that you need to be aware of.

Big thanks to all of the people who made this release happen – in particular to Qiang Huang, Denis Gladkikh, Dima Stopel, and Liron Levin.

The easiest way to try out Docker in development is by installing Docker Toolbox. For other platforms, check out the installation instructions in the documentation.

Swarm 1.1

Docker Swarm is native clustering for Docker. It makes it really easy to manage and deploy to a cluster of Engines. Swarm is also the clustering and scheduling foundation for the Docker Universal Control Plane, an on-premises tool for deploying and managing Docker applications and clusters.

Back in November we announced the first production-ready version of Swarm, version 1.0. This release is an incremental improvement, especially adding a few things you’ve been asking us for:

  • Reschedule containers when a node fails: If a node fails, Swarm can now optionally attempt to reschedule its containers on a healthy node to keep them running. This is an experimental feature, so don’t expect it to work perfectly, but please do give it a try!
  • Better node management: If Swarm fails to connect to a node, it will now retry instead of just giving up. It will also display the status of this and any error messages in docker info, making it much easier to debug. Take a look at the feature proposal for full details.

Check out the release notes for the full list and the documentation for how to get started.

And save the date for Swarm Week starting Monday, Feb 29th!

If you are new to Swarm, or are familiar with it and want to know more, Swarm Week is the place to get all your Swarm information in a single place. We will feature a different Swarm-related topic each day.

Bookmark the Docker blog for Monday the 29th for the start of #SwarmWeek!

Machine 0.6

Machine is at the heart of Docker Toolbox, and a big focus of Machine 0.6 has been making it much more reliable when you’re using it with VirtualBox and running it on Windows. This should make the experience of using Toolbox much better.

There have also been a couple of new features for Machine power users:

  • No need to type “default”: Commands will now perform actions against the “default” VM if you don’t specify a name.
  • New provision command: This is useful for re-running the provisioning on hosts where it failed or the configuration has drifted.

For full details, check out the release notes. The easiest way to install Machine is by installing Docker Toolbox. Other installation methods are detailed in the documentation.

Registry 2.3

In Registry 2.3, we’ve got a bunch of improvements to performance and security. It supports the new manifest format, and makes it possible for layers to be shared between different images, improving push performance for layers that already exist on your Registry.

Check out the full release notes and see the documentation for how to install or upgrade.


