September: Netflix Will ‘Become Exclusive US Pay TV Home of Films From Disney, Marvel, Lucasfilm and Pixar’

An anonymous reader writes: The licensing deal between Netflix and Disney for the rights to all new films that hit movie theaters in 2016 is nothing new. What is new is when exactly the deal comes into effect. “From September onwards, Netflix will become the exclusive U.S. pay TV home of the latest films from Disney, Marvel, Lucasfilm and Pixar,” said Netflix content chief Ted Sarandos in a blog post. This applies only to new theatrical releases, because separate licensing deals are in place for other Disney content. The exclusive partnership with Disney also extends into original programming. Netflix’s partnership with Disney is part of a bigger plan to host more exclusive content that rival services do not offer.





Original URL: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/FHaEsdeHbfY/september-netflix-will-become-exclusive-us-pay-tv-home-of-films-from-disney-marvel-lucasfilm-and-pixar

Original article

New AWS Quick Start Reference Deployment – Standardized Architecture for PCI DSS

If you build an application that processes credit card data, you need to conform to PCI DSS (Payment Card Industry Data Security Standard). Adherence to the standard means that you need to meet control objectives for your network, protect cardholder data, implement strong access controls, and more.

To help AWS customers build systems that conform to PCI DSS, we are releasing a new Quick Start Reference Deployment. The new Standardized Architecture for PCI DSS on the AWS Cloud (PDF or HTML) includes an AWS CloudFormation template that deploys a standardized environment that falls in scope for PCI DSS compliance (version 3.1).

The template describes a stack that deploys a multi-tiered Linux-based web application in about 30 minutes. It makes use of child templates and can be customized as desired. It launches a pair of Virtual Private Clouds (Management and Production) and can accommodate a third VPC for development.

The template sets up the IAM items (policies, groups, roles, and instance profiles), S3 buckets (encrypted web content, logging, and backup), a Bastion host for troubleshooting and administration, an encrypted RDS database instance running in multiple Availability Zones, and a logging / monitoring / alerting package that makes use of AWS CloudTrail, Amazon CloudWatch, and AWS Config Rules. The architecture supports a wide variety of AWS best practices (all of which are detailed in the document) including use of multiple Availability Zones, isolation using public and private subnets, load balancing, auto scaling, and more.

You can use the template to set up an environment that you can use for learning, as a prototype, or as the basis for your own template.
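
As a rough sketch of what launching the Quick Start could look like programmatically (the stack name, template URL, and parameter below are placeholders rather than the actual Quick Start values – take those from the deployment guide), here is a stack creation call using the AWS SDK for JavaScript:

var AWS = require('aws-sdk');
var cloudformation = new AWS.CloudFormation({region: 'us-east-1'});

cloudformation.createStack({
    StackName: 'pci-dss-quickstart',                  // hypothetical stack name
    TemplateURL: 'https://example-bucket.s3.amazonaws.com/main.template', // placeholder URL
    Capabilities: ['CAPABILITY_IAM'],                 // the template creates IAM resources
    Parameters: [
        {ParameterKey: 'KeyPairName', ParameterValue: 'my-key-pair'} // hypothetical parameter
    ]
}, function(err, data) {
    if (err) console.error(err);
    else console.log('Stack creation started:', data.StackId);
});

The AWS Management Console and the AWS CLI work just as well; the point is that the entire environment comes up from a single stack creation call.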

The Quick Start also includes a Security Controls Reference. This document maps the security controls called out by PCI DSS to the relevant architecture decisions, features, and configurations.


Jeff;

PS – Check out our other AWS Enterprise Accelerator Quick Starts!

Original URL: http://feedproxy.google.com/~r/AmazonWebServicesBlog/~3/xBLXJSuTSs8/

Original article

Google’s ‘Science Journal’ App Turns Your Android Device Into A Laboratory

An anonymous reader writes about Google’s ‘Science Journal’ app, released at the end of Google I/O last week: Google has launched its ‘Science Journal’ app that can essentially turn your Android device into a tricorder of sorts. The app uses the sensors in your smartphone to gather, graph and visualize data. For example, you can use Google’s Science Journal app to measure sound in a particular area over a particular period of time, or the movement of the device’s internal accelerometers. The app is fairly basic to start, but Google is working to expand its functionality. It’s even partnering with San Francisco’s Exploratorium to develop external kits that can be used with the app — kits which include various microcontrollers and other sensors. As part of its Google Field Trip Days initiative, which allows students from underserved communities to attend a local museum at no cost and includes transportation and lunch, Google sent out 120,000 kits to local science museums. It also sent out 350,000 pairs of safety glasses to schools, makerspaces, and Maker Faires worldwide, to help young students work on even bigger projects. You can download the app from the Play Store and start experimenting.





Original URL: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/G_IQ3wI1jJk/googles-science-journal-app-turns-your-android-device-into-a-laboratory

Original article

Visual Basic Turns 25


That’s right! Today marks the 25th anniversary (the “Silver Anniversary”) of VB’s debut to the world. It seems like just yesterday that VB turned 20, when I’d been at Microsoft only a little over a year. Looking back at the progress of 5 years—a complete revamp of the IDE and debugger, a trove of new language features, and millions of lines of code—I’m humbled. And that’s just coming from the Roslyn team. But I’m even more humbled looking back at a history FIVE times longer than that, with hundreds of innovative productivity features spanning 14 releases and scores of team members. So much history could hardly be done justice in one blog post on a Friday afternoon.

So, in honor of this momentous occasion I’m announcing the…

(booming loudspeaker voice)

VISUAL BASIC SILVER ANNIVERSARY CELEBRATIATHON

(end booming loudspeaker voice)

That’s right. A Celebratiathon (a perfectly cromulent word). A celebration so extensive it’s also a marathon. What does it really mean, though?

It means that the party don’t stop here. In anticipation of this magnificent milestone we’ve been reaching out to members of the Visual Basic team stretching all the way back to the beginning to get a retrospective on the juggernaut that is VB across every era, from VB 1.0 to VB6 to the early days of VB.NET to Roslyn and I’m going to be your tour guide as we look behind the scenes at history from the perspective of the programming pioneers who lived it. Whatever your first or favorite version we’re going to be talking to the women and men who built it. The trials, the triumphs, the passion, THE FUN! Visual Basic has always been more than just “a language”. It’s a legacy of generation after generation of people pouring themselves into making an experience with the goal of empowering people and touching lives. So to really celebrate 25 years of VB is to celebrate over 25 years of individuals, within Microsoft and in the community, contributing with love to the VB story every step along the way.

But, don’t worry, it won’t all be backward looking. In the midst of all the “deleted scenes” and “director’s commentary” I’ll also be opening up about the really cool features we’re looking at for VB “15” and beyond. And of course I’ve got some surprises planned too. So you’ll want to keep checking back here on the blog starting next week to geek out on language design for features like tuples and pattern matching (*mischievous grin*), on video interviews with industry legends, and on other VB themed stuff (*shifty eyes*). It’s going to be great!

Now this is a party, so feel free to be interactive. If you’re a user and you’ve always been curious, leave a comment. And tell your friends! If you’re a team member, past or present, leave a comment. If you know a team member forward them this link and tell them to leave a comment. Or if you just have a favorite memory of VB or a personal pet project you’re writing in VB, leave a comment. And everyone, always feel free to shout at me on twitter @ThatVBGuy

Next week we’re talking to members from the original VB 1.0 team (and talking about tuples). Until then join me in wishing my all-time favorite programming language a VERY HAPPY 25th BIRTHDAY (AND MANY MORE)!

Jubilations,

-ADG


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/OlT5eXOzVgo/

Original article

GitLab Container Registry

Yesterday we released GitLab 8.8, super-powering GitLab’s built-in continuous integration. With it, you can build a pipeline in GitLab, visualizing your builds, tests, deploys and any other stage of the lifecycle of your software. Today (and already in GitLab 8.8), we’re releasing the next step: GitLab Container Registry.

GitLab Container Registry is a secure and private registry for Docker images. Built on open source software, GitLab Container Registry isn’t just a standalone registry; it’s completely integrated with GitLab.

GitLab is all about having a single, integrated experience and our registry is no exception. You can now easily use your images for GitLab CI, create images specific for tags or branches and much more.

Our container registry is actually the first Docker registry that is fully integrated with Git repository management, and it comes out of the box with GitLab 8.8. So if you’ve upgraded, you already have it – no additional installation required. It allows for easy upload and download of images from GitLab CI. And it’s free.

Docker Basics

The main component of a Docker-based workflow is an image, which contains everything needed to run an application. Images are often created automatically as part of continuous integration so they are updated whenever code changes. When images are built to be shared between developers and machines, they need to be stored somewhere, and that’s where a container registry comes in. The registry is the place to store and tag images for later use. Developers may want to maintain their own registry for private, company images, or for throw-away images used only in testing. Using GitLab Container Registry means you don’t need to set up and administer yet another service, or use a public registry.

Tight Integration

GitLab Container Registry is fully integrated with GitLab, making it easy for developers to code, test, and deploy Docker container images using GitLab CI and other Docker-compatible tooling.

  • User authentication is from GitLab itself, so all the user and group definitions are respected.
  • There’s no need to create repositories in the registry; the project is already defined in GitLab.
  • Projects have a new tab, Container Registry, which lists all images related to the project.
  • Every project can have an image repository, but this can be turned off per-project.
  • Developers can easily upload and download images from GitLab CI.
  • There’s no need to download or install additional software.

Simplify your workflow

GitLab Container Registry is seamless and secure. Here are some examples of how GitLab Container Registry can simplify your development and deployment workflows.

  • easily build Docker images with the help of GitLab CI and store them in the GitLab Container Registry,
  • easily create images per branch, tag, or any other scheme that suits your workflow, and store them in GitLab with little effort,
  • use your own build images, stored in your registry, to test your applications against them, simplifying your Docker-based workflow,
  • let the team easily contribute to the images using the same workflow they are already accustomed to; with the help of GitLab CI you can automatically rebuild images that inherit from yours, allowing you to easily deliver fixes and new features to a base image used by your teams,
  • build a full Continuous Delivery and Deployment workflow by pointing your CaaS at images straight from the GitLab Container Registry; you’ll be able to perform automated deployments of your applications to the cloud (Docker Cloud, Docker Swarm, Kubernetes and others) whenever you build and test your images.

Start using it

First, ask your system administrator to enable GitLab Container Registry following the administration documentation.

After that, you will be allowed to enable Container Registry for your project.

To start using your brand new Container Registry you first have to login:

docker login registry.example.com

Then you can simply build and push images to GitLab:

docker build -t registry.example.com/group/project .
docker push registry.example.com/group/project

GitLab also offers simple Container Registry management. Go to your project and click Container Registry. This view will show you all tags in your repository and will easily allow you to delete them.

Use with GitLab CI

You can use GitLab’s integrated CI solution to build, push, and deploy your Container Images.

Note: This feature requires GitLab Runner 1.2.

Here’s an example GitLab CI configuration file (.gitlab-ci.yml) which builds an image, runs tests, and if the tests are successful, tags the build and uploads the build to the container registry.

build_image:
  image: docker:git
  services:
  - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.example.com
    - docker build -t registry.example.com/my-group/my-project .
    - docker run registry.example.com/my-group/my-project /script/to/run/tests
    - docker push registry.example.com/my-group/my-project:latest
  only:
    - master

Here’s a more elaborate example that splits up the tasks into 4 stages, including two tests that run in parallel. The build is stored in the container registry and used by subsequent stages, downloading the image automatically when needed. Changes to master also get tagged as latest and deployed using an application-specific deploy script.

image: docker:git
services:
- docker:dind

stages:
- build
- test
- release
- deploy

variables:
  CONTAINER_TEST_IMAGE: registry.example.com/my-group/my-project:$CI_BUILD_REF_NAME
  CONTAINER_RELEASE_IMAGE: registry.example.com/my-group/my-project:latest

before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.example.com

build:
  stage: build
  script:
    - docker build -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE

test1:
  stage: test
  script:
    - docker run $CONTAINER_TEST_IMAGE /script/to/run/tests

test2:
  stage: test
  script:
    - docker run $CONTAINER_TEST_IMAGE /script/to/run/another/test

release-image:
  stage: release
  script:
    - docker tag $CONTAINER_TEST_IMAGE $CONTAINER_RELEASE_IMAGE
    - docker push $CONTAINER_RELEASE_IMAGE
  only:
    - master

deploy:
  stage: deploy
  script:
    - ./deploy.sh
  only:
    - master

Summary

GitLab Container Registry is the latest addition to GitLab’s integrated set of tools for the software development lifecycle and comes with GitLab 8.8 and up. With GitLab Container Registry, testing and deploying Docker containers has never been easier. GitLab Container Registry is available on-premises in GitLab CE and GitLab EE at no additional cost and installs in the same infrastructure as the rest of your GitLab instance.

We’re working on getting GitLab Container Registry set up on GitLab.com (for free, of course) and will update this post when it’s ready.



Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/0SjMPj4PLQE/

Original article

SeamlessDocs raises $7 million Series B to help governments go digital

SeamlessDocs, the startup that helps governments move all their forms online, has today announced the close of a $7 million Series B funding round led by Motorola Solutions. Other participants in the round include existing investor Govtech Fund, as well as New York State Innovation Fund and 1776. SeamlessDocs operates under the premise that government is beautiful, which is both daunting…


Original URL: http://feedproxy.google.com/~r/Techcrunch/~3/n9mBt_HRHSg/

Original article

(Intro To) Map, Reduce and Other Higher Order Functions

There are few things I have learned in my programming career that have paid off like higher order functions. Map, Reduce and Filter, along with their cousins and the concept of passing functions as data in general, make code easier to reason about, easier to write, and easier to test. I find myself evangelizing these concepts often, so I thought I would do my best to give an introduction to them, along with some real world examples of how they can improve your everyday programming life. These examples are in JavaScript, but the concepts are universal.

Terminology

Let’s go over just a bit of terminology before we begin to make sure everyone is on the same page. If you read this article and there are any other terms you don’t understand, or need a further/better explanation for, let me know in the comments and I will try to either expand on them here or link to better resources.

First Class Function

A language that has first class functions is one where you can assign a function to a variable and pass that function around like data. Without first class functions you cannot have higher order functions – usually if you have one you have the other as well.
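
A minimal sketch of what this looks like in JavaScript (the names here are purely illustrative):

var greet = function(name) {            // a function assigned to a variable...
    return "Hello, " + name + "!";
};
var sayIt = greet;                      // ...can be passed around like any other value
sayIt("world"); //"Hello, world!"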

Higher Order Functions

A higher order function is a function that either takes another function as an argument or returns a function. That’s pretty much it – it’s all about functions as data. Once you have a reference to a function in a variable and can pass it around or return it, many things become possible. Map and Reduce and the other higher order functions we discuss are certainly not the only ones – you can create your own higher order functions. The functions we will discuss are not particularly special either – you could write your own version of them without too much trouble. They are notable because they are solutions to common problems in everyday programming, no matter what the domain.
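
For example, here is a tiny home-made higher order function (purely illustrative) that takes a function and returns a new function that applies it twice:

var twice = function(fn) {              // takes a function...
    return function(x) {                // ...and returns a new function
        return fn(fn(x));
    };
};
var addTwo = function(n) { return n + 2; };
var addFour = twice(addTwo);
addFour(10); //14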

Pure Functions

A pure function is one that given the same inputs will always return the same output, with no side effects. An example might be (a, b) => a + b;. No matter what you provide as a and b, if you give them again you will get the same answer. The function doesn’t do anything else either – no side effects means no writing to a database, no logging, no mutating state; it just takes inputs and gives outputs, and what goes in deterministically determines what comes out.
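
A quick illustrative contrast (the running total here is made up for the example):

var total = 0;
var addToTotal = function(n) {          // impure: mutates state outside the function
    total += n;
    return total;
};
addToTotal(5); //5
addToTotal(5); //10 – same input, different output

var add = function(a, b) {              // pure: output depends only on the inputs
    return a + b;
};
add(5, 5); //10
add(5, 5); //10 – always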

There are many benefits to pure functions, most of which I will not go into, but some include cacheability, portability (reuse), self-documentation, and testability. Pure functions are a unit tester’s dream! You can’t make an (interesting / useful) system with only pure functions – it wouldn’t really do anything – but you should strive to use pure functions wherever possible and sequester your state manipulation in as few places as possible.

Higher Order Functions do not require you to use pure functions, but that is what you will use most of the time and you should always try to use pure functions when possible.

Map

Map is one of the most common and most useful higher order functions, so let’s start there. Let’s say I have a collection of data, and I need to perform some sort of transformation on it. Common examples might be taking complex data and formatting it for output, or plucking individual parts of data out of a more complicated structure. Let’s take a look at an example.

var data = [
    {id: 1, firstName: "Ryan", lastName: "Guill", email: "[email protected]"},
    {id: 2, firstName: "John", lastName: "Doe", email: "[email protected]"},
    {id: 3, firstName: "Mary", lastName: "Smith", email: "[email protected]"}
];

Let’s say we wanted to get a list of everyone to put in the “to” field for an email, something like firstName lastName <email>. Traditionally we might write a loop:

var output = [];
for (var i = 0; i < data.length; i++) {
    output.push(data[i].firstName + " " + data[i].lastName + " <" + data[i].email + ">");
}
//["Ryan Guill <[email protected]>", "John Doe <[email protected]>", "Mary Smith <[email protected]>"]

Pretty standard stuff, but let’s take a look at how we might write the same thing with map:

data.map(function(item) {
    return item.firstName + " " + item.lastName + " <" + item.email + ">"; 
});
//["Ryan Guill <[email protected]>", "John Doe <[email protected]>", "Mary Smith <[email protected]>"]

The first thing to notice is that we don’t have the loop machinery, no chance to get any of those details wrong, no need to keep track of another variable for the index. The code is much closer to just the code that matters.

But let’s change it a bit to illustrate another point.

var toFieldFormat = function(item) {
    return item.firstName + " " + item.lastName + " <" + item.email + ">"; 
};
data.map(toFieldFormat);
//["Ryan Guill <[email protected]>", "John Doe <[email protected]>", "Mary Smith <[email protected]>"]

This makes it clearer that Map takes a function, one that has the following argument signature: function (element [, index] [, collection]). We won’t go into detail on the other parameters; you generally don’t have to use them. Index can be useful in some cases, but generally speaking, if you are using the collection argument in a map call, you’re probably doing it wrong.
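
For completeness, a tiny sketch using the index argument (purely illustrative):

["a", "b", "c"].map(function(item, index) {
    return index + ": " + item;
}); //["0: a", "1: b", "2: c"]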

Back to the example, toFieldFormat is just a function (a pure one) that takes an input of an object and returns a string as output. That’s it. We could call it directly:

toFieldFormat({firstName: "Hindley", lastName: "Milner", email: "[email protected]"});
//Hindley Milner <[email protected]>

and it would work just fine. We could also write our original loop to use this function:

var output = [];
for (var i = 0; i < data.length; i++) {
    output.push(toFieldFormat(data[i]));       
}

But now you can see that we have abstracted out the real logic that matters and all that’s left is the plumbing. I don’t know about you, but I’m tired of writing loops for most things. The loop is not what is interesting about this code. When I come back to it in 6 months, I don’t want to read the loop to understand what is happening – I want to get to the meat of what the author is trying to achieve.

Also, toFieldFormat is easily testable! It is a pure function, and abstracted out we can easily pass in a variety of inputs and deterministically check their outputs.
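
A minimal sketch of that point, using plain console.assert (any test framework works the same way; the input object is made up):

var input = {firstName: "Ada", lastName: "Lovelace", email: "ada@example.com"};
console.assert(
    toFieldFormat(input) === "Ada Lovelace <ada@example.com>",
    "toFieldFormat should produce firstName lastName <email>"
);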

Let’s look at another example – let’s say we have our array of data, but we want to get just an array of id’s. That’s easy with map:

var pluckId = function(item) {
    return item.id; 
};
data.map(pluckId); //[1,2,3]

So let’s talk a bit more about the intrinsic properties of a mapping operation.

  • Map translates data from one type to another. Doesn’t necessarily mean that the data type changes – but you are changing from one form to another.
  • Map will always return a collection the same size and order as its input. If you map an array of 3 things, you will get an array of 3 elements back, and the first item in the input will always correspond with the first item of the output.
  • The result of map is always a new collection – the input collection is not modified.
  • Map operations are inherently chainable. data.map(...).map(...).map(...) is a common pattern (see the sketch after this list).
  • Mathematically, if using pure functions, data.map(a).map(b) is the same as data.map(b ⋅ a);. Don’t worry about this now, (and this is not valid JavaScript syntax, it’s mathematical) but what it means is that certain libraries can speed up your code by performing optimizations, knowing that they can transform your functions for performance and not change the meaning of your code.
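
A minimal sketch of chaining, and of the composition point above (the functions are purely illustrative):

var numbers = [1, 2, 3];
var double = function(n) { return n * 2; };
var increment = function(n) { return n + 1; };

numbers.map(double).map(increment);                        //[3, 5, 7] – two passes
numbers.map(function(n) { return increment(double(n)); }); //[3, 5, 7] – one pass over the composed function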

We are only scratching the surface of the power and usefulness of map. If you take away nothing else from this article, learn and embrace at least this one function. There are more complicated but elegant abilities built on the foundations of map that are outside the scope of this particular article, but that I hope to revisit soon.

Editorial Note: some languages use collect or transform instead of map.

Filter

After map, filter is probably one of the most commonly used higher order functions. You give it some data and a function to use as a “test” for your data – if an element passes, it is included in the resulting collection; if not, it is excluded. Let’s look at an example:

var data = [
    {userID: 1, name: "Ryan", groups: ["dev", "ops", "qa", "employee"]},
    {userID: 2, name: "Bob", groups: ["ops", "employee"]},
    {userID: 3, name: "John", groups: ["qa", "employee"]},
    {userID: 4, name: "Paul", groups: ["dev", "qa", "employee"]}
];

var isDev = function (user) {
    return user.groups.includes("dev");
};
data.filter(isDev);
//[{userID: 1, name: "Ryan", groups: ["dev", "ops", "qa", "employee"]}, {userID: 4, name: "Paul", groups: ["dev", "qa", "employee"]}]

isDev is our “test” – it returns true or false: does the user’s groups array contain “dev”? That’s all there is to it – your test function should always return a boolean (undefined and null are treated as false in JavaScript, but don’t rely on that). Your filter function could be much more complicated if you wanted, though – as long as it returns a boolean.

var meetsArbitraryRestriction = function(user) {
    return user.groups.includes("qa") && user.name.charAt(0).toLowerCase() === "p";  
};    
data.filter(meetsArbitraryRestriction); //[{userID: 4, name: "Paul", groups: ["dev", "qa", "employee"]}]

Again, the testing function is a pure function, which means easy to test, and the function is easy to use in other parts of your application’s logic.

So properties of filter:

  • Filter will always return the same type of collection as the input, with a number of elements less than or equal to the number in the input.
  • Filter is used to remove items from a collection that fail the testing function.
  • The result of filter is always a new collection – the input collection is not modified.
  • Filter operations are also inherently chainable – you can do data.filter(...).filter(...) as much as you want, or, more commonly, data.filter(...).map(...) (see the sketch after this list).
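
A quick sketch of that last pattern, reusing isDev from above:

data.filter(isDev).map(function(user) { return user.name; }); //["Ryan", "Paul"]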

Some and Every

A few languages use any instead of some. some and every are aggregate functions: they take a collection but return a single true or false. We aren’t going to spend much time on these, but let’s take a look:

var data = [
    {userID: 1, name: "Ryan", groups: ["dev", "ops", "qa", "employee"]},
    {userID: 2, name: "Bob", groups: ["ops", "employee"]},
    {userID: 3, name: "John", groups: ["qa", "employee"]},
    {userID: 4, name: "Paul", groups: ["dev", "qa", "employee"]}
];

var isDev = function (user) {
    return user.groups.includes("dev");
};

var isEmployee = function(user) {
    return user.groups.includes("employee");   
};

var isManager = function(user) {
    return user.groups.includes("manager");   
};

data.every(isDev); //false, not everyone is a dev
data.every(isEmployee); //true, everyone is an employee
data.some(isDev); //true, at least one user is a dev
data.some(isManager); //false, no user is a manager

Hopefully that is pretty straightforward. The only other thing I will mention is that these functions take advantage of their logic to return, in general, without having to look at every item in the collection – every can return as soon as it hits the first false result, and some can return as soon as it hits the first true result.
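
A tiny sketch of that short-circuiting (the test function is deliberately impure here, purely so we can count the calls):

var calls = 0;
var isPositive = function(n) { calls += 1; return n > 0; };

[-1, 2, 3, 4].every(isPositive); //false
calls; //1 – every stopped at the first failing element

calls = 0;
[-1, 2, 3, 4].some(isPositive); //true
calls; //2 – some stopped at the first passing element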

Reduce

So now we get to the big one, reduce. For some reason reduce seems more opaque to the average developer than the others, and I think that stems from the fact that it is a bit of a swiss army knife of higher order functions. So let’s start high level – what is a reduction? All it really means is that it will iterate over the collection, building up a result that will be returned at the end. That’s it in a nutshell. But as we will see, that generic definition lends itself to many uses.

If you have ever used an aggregate function in SQL, you used a reduction. If you have ever used join() you’ve used a reduction.

Let’s take a look at what is probably the standard example to get our feet wet:

var data = [1,2,3,4,5];
data.reduce(function(prev, item) {
    return prev + item;
}, 0);
//15

We just totaled up all the items in our array. You’ll notice that reduce takes an extra argument, and that the function given to reduce also takes an extra argument. Here is the definition:

reduce(function(previousValue, item, index, collection) {...}, startingValue);

Reduce will iterate over the collection element by element, taking the result of the previous iteration and passing it to the next iteration. The first iteration gets the startingValue as previousValue (if you don’t provide a starting value, the first element is used as the initial value and iteration starts at the second element). This doesn’t mean that we can’t provide it with useful, reusable pure functions though. Consider this version of the same example:

var data = [1,2,3,4,5];
var sum = function(a, b) {
    return a + b;  
};    
data.reduce(sum, 0);
//15

Our sum function is perfectly generic, testable, reusable – pure.

Remember I said that join is a type of reduction? Let’s take a look at what a generic join might look like using reduce:

var data = [1,2,3,4,5];
var join = function(a, b) {
    if (typeof a === 'undefined') return b;
    return a + ',' + b;  
};    
data.reduce(join);
//"1,2,3,4,5"

In this case, we don’t pass a starting value – JavaScript then uses the first element as the initial accumulator and starts from the second element, and the undefined check simply keeps join safe if it is ever handed an explicit undefined starting value. Now, the native Array.prototype.join is much better at its job than this, but hopefully this was useful as an illustration.

Let’s look at another example, this time let’s see if we can find the smallest number out of a set.

var data = [100,250,50,300,450];
var min = function(a, b) {
    return Math.min(a, b);  
};    
data.reduce(min);
//50

So all of our examples so far have used reduce as an aggregation, getting down to a simpler value from an input collection – but reduce can return any kind of value – including a collection. In fact, it could even return a larger collection than the input.

Let’s say that we want to take our users array from earlier examples, and create an element for each user / group combination.

var data = [
    {userID: 1, name: "Ryan", groups: ["dev", "ops", "qa", "employee"]},
    {userID: 2, name: "Bob", groups: ["ops", "employee"]}
];

//just creating this for clarity later
function arrayMerge (arrayOne, arrayTwo) {
    Array.prototype.push.apply(arrayOne, arrayTwo);
    return arrayOne;
}

data.reduce(function(acc, user) {
    return arrayMerge(acc, user.groups.map(function(group) {
        return {name: user.name, group: group};
    }));
}, []);
/*
[{name:"Ryan",group:"dev"},{name:"Ryan",group:"ops"},{name:"Ryan",group:"qa"},{name:"Ryan",group:"employee"},{name:"Bob",group:"ops"},{name:"Bob",group:"employee"}]
*/

Hopefully this is still clear – as you can see, we use map to take our array of groups and return a new array of the same size, transforming each group into an object with the name of the user and the group. Then we just merge our mapped groups into the array we are building, user by user, until we end up with the complete result.

So to further bolster the swiss-army-knife analogy, let me show you how all the other higher order functions we have talked about are actually just specialized reductions. Map, Filter, Some, Every – all of these can be achieved with reduce.

Here is our first Map example implemented using Reduce:

var data = [
    {id: 1, firstName: "Ryan", lastName: "Guill", email: "[email protected]"},
    {id: 2, firstName: "John", lastName: "Doe", email: "[email protected]"},
    {id: 3, firstName: "Mary", lastName: "Smith", email: "[email protected]"}
];

var toFieldFormat = function(item) {
    return item.firstName + " " + item.lastName + " <" + item.email + ">"; 
};
data.reduce(function(arr, item) {
   arr.push(toFieldFormat(item));
   return arr;
}, []).join(", ");  

Here is our Filter example:

var data = [
    {userID: 1, name: "Ryan", groups: ["dev", "ops", "qa", "employee"]},
    {userID: 2, name: "Bob", groups: ["ops", "employee"]},
    {userID: 3, name: "John", groups: ["qa", "employee"]},
    {userID: 4, name: "Paul", groups: ["dev", "qa", "employee"]}
];

var isDev = function (user) {
    return user.groups.includes("dev");
};
data.reduce(function(arr, user){
    if (isDev(user)) {
        arr.push(user);
    }
    return arr;    
}, []);
//[{userID: 1, name: "Ryan", groups: ["dev", "ops", "qa", "employee"]}, {userID: 4, name: "Paul", groups: ["dev", "qa", "employee"]}]

And here is our Some and Every examples rewritten as reductions:

var data = [
    {userID: 1, name: "Ryan", groups: ["dev", "ops", "qa", "employee"]},
    {userID: 2, name: "Bob", groups: ["ops", "employee"]},
    {userID: 3, name: "John", groups: ["qa", "employee"]},
    {userID: 4, name: "Paul", groups: ["dev", "qa", "employee"]}
];

var isDev = function (user) {
    return user.groups.includes("dev");
};

var isEmployee = function(user) {
    return user.groups.includes("employee");   
};

var isManager = function(user) {
    return user.groups.includes("manager");   
};

//every isDev
data.reduce(function(result, user) {
   return result && isDev(user);
}, true);  
//false, not everyone is a dev

//every isEmployee 
data.reduce(function(result, user) {
   return result && isEmployee(user);
}, true);
//true, everyone is an employee

//some isDev  
data.reduce(function(result, user) {
   return result || isDev(user);
}, false);  
//true, at least one user is a dev

//some isManager
data.reduce(function(result, user) {
   return result || isManager(user);
}, false);  
//false, no user is a manager

Now, do I recommend that you just use reduce for everything and ignore the other built-in functions? Absolutely not – as you can see from the examples, the reduce versions are more complicated in every case. Not bad – but definitely not as streamlined. The API of the other functions helps readability too – when you see map or filter you know exactly what is going on. Plus, the specialized versions of these functions can take care of other optimizations behind the scenes.

Where reduce shines is when you either don’t have a more specialized function to use, or when you might need to combine multiple actions together, like in our user / group combination example.

You should definitely learn reduce and how it works, as well as how to read it. Get familiar with how the accumulation argument is passed from one iteration to the next – it’s not difficult, and it takes very little practice.
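
A minimal sketch of that flow, logging the accumulator at each step (purely illustrative):

[1, 2, 3, 4].reduce(function(prev, item, index) {
    console.log("iteration " + index + ": prev = " + prev + ", item = " + item);
    return prev + item;
}, 0);
//iteration 0: prev = 0, item = 1
//iteration 1: prev = 1, item = 2
//iteration 2: prev = 3, item = 3
//iteration 3: prev = 6, item = 4
//10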

Editorial Note: reduce can be called by many names in different languages. reduce is by far the most common, but other languages might use fold or aggregate. These are different words for the same concept. There are also right and left versions of reduce; in JavaScript, reduce is the left version and reduceRight is the right version. Left and right just refer to whether the reduction happens from left to right or right to left (ascending vs descending, forwards or backwards). A right reduce is the same thing as reversing the data and then doing a left reduce.
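
A quick sketch of the difference using string concatenation, which is order sensitive, so the direction shows:

var letters = ["a", "b", "c", "d"];
var concat = function(acc, x) { return acc + x; };

letters.reduce(concat, "");      //"abcd" – left to right
letters.reduceRight(concat, ""); //"dcba" – right to left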

Notes

In most languages, using these higher order functions instead of a loop is slower. It almost always will be – the overhead of the function calls, plus the ability to use things like break or continue in certain cases, means that you can optimize those loops more easily. But the speed comes at the cost of maintainability and readability. In any language that provides these functions I will always reach for them where I can, and only begrudgingly go back and rewrite performance critical code into the loop version once testing shows it necessary.

You might notice I didn’t talk about forEach – that is because while it is a higher order function, it doesn’t generally take pure functions. If you are passing a pure function to forEach you probably should be using map or something else. forEach (some languages just call it each) is for side effects – referencing outside variables (such as through closure), writing output to the screen, talking to a database, etc. There is nothing wrong with using it, but minimize it: consider first where you could use one of the other functions, and then use that output with your side-effecting code – it will give you much more testable and maintainable code.
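
A minimal sketch of that split (names purely illustrative) – keep the transformation in map and confine the side effect to forEach:

var names = ["Ryan", "Bob", "John"];
var shout = function(name) { return name.toUpperCase() + "!"; };

var shouted = names.map(shout);   //["RYAN!", "BOB!", "JOHN!"] – pure, easy to test
shouted.forEach(function(s) {
    console.log(s);               // side effect: writing to the console
});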

Conclusion

I hope these explanations and examples have whetted your appetite and shown how you can do your everyday programming by passing around functions, leading to code that is easier to read and test. But, even though these are common software patterns, the goal shouldn’t be to use them for their own sake. These are just tools that you should add to your repertoire, and you should learn when they are and aren’t appropriate.

That said, I believe that if you learn these patterns, not only will you find ways to utilize them in your code, but once you internalize the idea of first class functions and higher order functions you will start thinking about code very differently. Be careful though, higher order functions are gateway drugs. Soon you will be learning about currying and memoization, and before you know it you’re talking about monads, functors and tagging Hindley-Milner notation on train cars – I’ve seen it a thousand times. 😉

If you find any errors in this article, or have any other clarifications or insights to any of this, please leave a comment below, I would love to see them.

Update: Thanks to Dan L, Mingo Hagen, Paul Turner and Adam Cameron from the cfml-slack for proofreading 🙂


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/lUz54agC9Uo/higher-order-functions.html

Original article

Atlassian sold $320M worth of software with no sales staff

Brandon Cipes, vice president for information systems at OceanX, has spent enough time in senior IT positions to hate sales calls. “It’s like buying a car—a process that seemingly should be so simple, but every time I have to, it’s like a five- to six-hour ordeal,” he says. “Most of our effort is trying to get the salespeople to leave us alone.” Cipes didn’t always feel that way, though. Back in 2013, he was used to the routine. His conversion began when he e-mailed business-software maker Atlassian, asking the company to send him a sales rep, and it said no.

Atlassian, which makes popular project-management and chat apps such as Jira and HipChat, doesn’t run on sales quotas and end-of-quarter discounts. In fact, its sales team doesn’t pitch products to anyone, because Atlassian doesn’t have a sales team. Initially an anomaly in the world of business software, the Australian company has become a beacon for other businesses counting on word of mouth to build market share. “Customers don’t want to call a salesperson if they don’t have to,” says Scott Farquhar, Atlassian’s co-chief executive officer. “They’d much rather be able to find the answers on the website.”

The way technology companies sell software has changed dramatically in the past decade. The availability of open source alternatives has pushed traditional brands and rising challengers to offer more free trials, free basic versions of their software with paid upgrades, and online promotions.

Incumbents such as IBM, Oracle, and Hewlett Packard Enterprise, which employ thousands of commissioned salespeople, are acquiring open source or cloud companies that sell differently, says Laurie Wurster, an analyst at researcher Gartner. Slack, Dropbox, and GitHub are among the companies trying to attract corporate clients with small-bore efforts that rely largely on good reviews. The idea is to distribute products to individuals or small groups at potential customers big and small and hope interest spreads upstairs.

So far, though, Atlassian remains the most extreme example of this model. It’s a 14-year-old company, valued at $5 billion since going public in December, without a single salesperson on the payroll. More than 80 Fortune 100 companies use Atlassian’s software, and venture capitalists and peers often talk about trying to follow, at least partly, its sales strategy.

Luck had a lot to do with that strategy, says Farquhar. He and co-CEO Mike Cannon-Brookes founded the company while finishing their IT degrees at the University of New South Wales, and the pair initially relied on word of mouth because they didn’t know anything about selling business software.

Their first break came a few months later, in 2002, when the website let you download a free trial but wasn’t yet equipped for purchases, and all their orders arrived via fax. One day the fax machine transmitted an order from American Airlines, where someone in IT had downloaded and configured the software without Atlassian’s help. “That was a huge turning point for us,” Farquhar says, adding that it gave the founders confidence they could make their business model work without a dedicated sales staff.

American paid about $800 for that first order. This year analysts forecast Atlassian’s revenue will top $450 million. Last year, when sales reached $320 million, the company’s sales and marketing spending, mostly on ads and payments to partners, totaled one-fifth of that. By comparison, Salesforce.com spent about half its revenue on sales and marketing; at Box, which has spent big to build a sales staff in the past couple of years, the number was 80 percent.

Jay Simons, Atlassian’s president, says the savings on staff means lower prices and more investment in research and development to refine software, making it easier to try, understand, and purchase. Farquhar says he’s resisted calls for half-measures, like hiring salespeople to manage subscriber renewals, and that he’s happier with a steady, predictable growth rate. “Salespeople are like your Adderall right before the exam,” he says. “It’s that last-minute kick when you’re not going to do well otherwise.”

In Silicon Valley, them’s still fightin’ words. “When you add a sales organization, revenue accelerates far greater than the cost of that organization,” says Peter Levine, a partner at venture firm Andreessen Horowitz. At GitHub, where Andreessen has invested, Levine lobbied heavily for the startup to recruit sales staff. Ultimately it did. (Bloomberg LP, which owns Bloomberg Businessweek, is an investor in Andreessen Horowitz.)

Atlassian’s roots lie in Sydney’s barren tech scene. It was kept aloft early on not by venture capital, but by the founders’ credit cards, meaning it didn’t have impatient investors to answer to. “I don’t think their success is replicable,” says Tomasz Tunguz, a partner at Redpoint Ventures.

Startups including Dropbox and Slack are taking a hybrid approach, relying on grass-roots pitches to land initial users within a company, then setting up sales calls once those users grow to a critical mass. “For me it’s not either-or, but how do we combine the best of both?” says Kakul Srivastava, GitHub’s vice president for product management. Even HP Enterprise is experimenting with online sales and try-before-you-buy. Caroline Tsay, the executive in charge of that effort, says she’s hired some former Atlassian staffers.

Atlassian faces a crowded market, and it’s unclear whether the company will be able to keep expanding, says John DiFucci, an analyst at Jefferies. Still, Atlassian’s Simons says he’s not worried about an end to growth without salespeople. “I’ve been asked that question every year for the past eight years,” he says. “Whatever the mythical wall is, we would have hit it by now.”

The bottom line: Atlassian sold $320 million worth of business software last year without a sales staff. Everyone else in the industry noticed.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/EshQSHTiRoc/this-5-billion-software-company-has-no-sales-staff

Original article

Microsoft Urged to Open Source Classic Visual Basic

“On the 25th anniversary of classic Visual Basic, return it to its programmers…” reads the plea at UserVoice.com from Sue Gee — drawing 85 upvotes. “The new Microsoft claims to back open source, why not in this case? There is no need for Microsoft to do any more work on the code base – simply open source it and allow the community to keep it alive.”

In an essay at i-programmer.info, Gee shares a video of a young Bill Gates building an app with Visual Basic in 1991, and complains that in the 25 years since, Microsoft has open sourced .NET Core and the .NET Compiler Platform Roslyn, “but it has explicitly refused to open source VB6.” She notes that on Friday Visual Basic’s program manager announced a “Visual Basic Silver Anniversary Celebratiathon,” promising he’s reaching out to VB team members from the last 25 years for a behind-the-scenes retrospective, and adding “this is a party, so feel free to be interactive.”

“What the post glosses over is that this history was blighted by the fork in the road that was .NET and that many Visual Basic fans are highly unsatisfied that the programming environment they cherished is lost to them…” writes Gee. “Vote for the proposal not because you want to use VB6 or that you think it is worth having — Vote for it because a company like Microsoft should not take a language away from its users.”





Original URL: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/um-XqTEdkbc/microsoft-urged-to-open-source-classic-visual-basic

Original article
