Show HN: How to Set Up Node.js App Automated Deployment and CI with PM2 for MVPs

April 2016

tldr; This article shows you how to configure an insanely simple
automated continuous integration and deployment setup for a Node.js
app using GitHub, PM2, Digital Ocean, and
SemaphoreCI. I wrote it because nothing like this in its
entirety exists. It should take you 30 minutes to set up properly.




I’ve used or explored nearly every CI testing tool there is for Node.js
(maybe?). I have tried TravisCI, but grew tired of constant downtime
and slow, very slow build times (…OK, the builds ran fast, but they did not
kick off quickly!). I’ve also tried CircleCI, but their founders
removed my thoughts from their community because they wouldn’t
allow the YAML config file to be named .circle.yml instead of
circle.yml. I also ran into trouble trying to configure and set up
Jenkins (though that was while I was working with an inexperienced
team, who were the ones setting it up). I’ve also looked at
Shippable, but it really didn’t interest me, just like the rest
– I now enjoy working with SemaphoreCI, mainly
because the prodigy TJ Holowaychuk recommended it to me.

For anyone interested in getting into the automated CI deployment business,
it’s relatively straightforward to market yourself – just list yourself
in all the Wikipedia articles, on Quora (with some upvote magic), have a good
service that doesn’t shut down or lie about build times, and have clear docs.
If you do those four things, you’re on the way to at least some passive income!

With regards to server hosting, I chose Digital Ocean because they rock.
I have never had a problem with them in over five years. That’s something!
I also printed t-shirts for Digital Ocean before I sold Teelaunch, and really
liked working with them.

Not only all that, but their service has great uptime, and their boxes
(“droplets”) are really fast to set up and reliable. I’m not a huge fan of
using Amazon EC2 and AWS in general for building Rapid MVPs (of course
I would use load balancing or something similar for scaling an app that has
thousands of users across the world). If your first question about building
an app is “How can I scale it?” or “Will Digital Ocean let me scale?” –
take my advice, you’re doing it wrong. Stop it. Think Rapid MVP.

To put it simply, Amazon has an interface that resembles a wild jungle with
overgrown vines on every tree, and Digital Ocean’s interface is a beautiful
oasis in a vast VPS desert.

As a side note, I can almost guarantee you that sometime in the future,
everyone will want barebones boxes connected to ethernet plugs. Imagine
when everyone has fiber internet and anyone can host their e-commerce store
from a Raspberry Pi running on their kitchen table.

1. Create your Droplet

First, you need a Digital Ocean account. Be patient as their signup process
may require you to verify your email and enter your credit card.

Sign up with this link to get $10 of free credit (2 months of hosting):

When you create your Digital Ocean (“DO”) droplet, be sure to allow SSH-only
access and add your SSH key to Digital Ocean. You can do this from DO’s
dashboard, and you can find more about this in a Digital Ocean article.

Make sure you create a droplet using the latest stable Ubuntu release.

Digital Ocean Droplet

SSH into the droplet and install the dependencies for your stack with Node.
In my case, I needed to install Node, MongoDB, and Redis. Of course, MongoDB
and Redis are optional dependencies, but I use them because they allow me to
build Rapid MVPs (quick prototypes, in other words). Also, I really
like to use NVM to manage the various versions of Node installed, which
was created by another prodigy, Tim Caswell.

Make sure you replace all instances in this article of droplet-ip-address
with the IP address given to you by Digital Ocean for your droplet.

ssh root@droplet-ip-address

Install the basic requirements needed for the server:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install vim build-essential libssl-dev git unattended-upgrades authbind openssl fail2ban

Install NVM and set it up to use the latest stable version:

curl -o- | bash
nvm install stable
nvm alias default stable

Install PM2, which will handle deployments for us and manage our processes:

npm i -g pm2

Install MongoDB, which is optional:

sudo apt-key adv --keyserver hkp:// --recv 7F0CEB10
echo "deb "$(lsb_release -sc)"/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list
sudo apt-get update
sudo apt-get install -y mongodb-org
service mongod status

Install Redis, which is optional:

sudo add-apt-repository ppa:chris-lea/redis-server
sudo apt-get update
sudo apt-get install redis-server
redis-benchmark -q -n 1000 -c 10 -P 5
source ~/.profile

You might also want to look into configuring fail2ban, changing the default
SSH port, and removing password-based login access. You can find out how to
do this in my security article, or just Google it.
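As a rough sketch, those tweaks boil down to a few sshd settings (the port number below is just an example, not a recommendation; keep a second SSH session open while testing so you don’t lock yourself out):

```shell
# Edit /etc/ssh/sshd_config and set, for example:
#   Port 2222                      # a non-standard port of your choosing
#   PasswordAuthentication no      # key-based login only
#   PermitRootLogin prohibit-password
# Then restart the SSH daemon so the changes take effect:
sudo service ssh restart
```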

2. Write your Node App File

This article assumes you already have created a GitHub repository for your
project and that you already have some app.js file in the root of it. If you
haven’t done that yet, then this section is for you. This section also
describes how to configure that app.js file for zero-downtime and graceful
reloading upon deployment of code.

For the purpose of this article, I share a basic app example that will respond
with “hello world” when you visit your droplet later on (over port 3000).

Answer yes to all the prompts or just hit ENTER to breeze through it:

npm init

Now save the basic express dependency:

npm i --save express

Create a new file called app.js (or edit your existing one to include a SIGINT handler):

vim app.js
var express = require('express');
var app = express();

app.get('/', function(req, res) {
  res.send('hello world');
});

var server = app.listen(3000);

// Handle SIGINT so PM2 can gracefully reload the app with zero downtime
process.on('SIGINT', function() {
  // stop accepting new connections
  server.close();
  // give in-flight requests a moment to finish, then exit
  setTimeout(function() {
    process.exit(0);
  }, 300);
});


Let’s test this out locally before you continue any further.

node app.js

Visit this URL in your browser (it should say “hello world”):

By default, PM2 will allow 1.6 seconds for your app to gracefully exit,
and you can read more on how to configure your app for zero-downtime here:
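If 1.6 seconds isn’t enough for your app to drain its connections, PM2 exposes a kill_timeout option (in milliseconds) that you can add to your app’s entry in the ecosystem file. A minimal sketch, assuming a 3-second window:

```json
{
  "apps": [
    {
      "name": "App",
      "script": "app.js",
      "kill_timeout": 3000
    }
  ]
}
```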

3. Set up SSH for SemaphoreCI

First, sign up for a SemaphoreCI account.

Once you’ve logged in, create a project and connect with your GitHub account.

SemaphoreCI Loading

Make sure that your “Node version” shown under your SemaphoreCI project’s
build settings matches the output from your droplet when you run node -v.

For example, in this screenshot I have selected the v5.8.0 that I’m using.

SemaphoreCI Node Version

Now we need to add a user to the droplet to let SemaphoreCI deploy the app
after all tests have successfully passed.

Keep your SemaphoreCI browser tab open, because we will come back to that
in just a bit!

Copy the contents of your local public SSH key file (in ~/.ssh/) to your
clipboard. If you have not yet created one, see GitHub’s instructions.

I’m using pbcopy (while on Mac OS X) to make it easy and do it the CLI way:

cat ~/.ssh/ | pbcopy

Now SSH back into your droplet if you’re not still connected:

ssh root@droplet-ip-address

Add the user semaphoreci on the droplet, so you can then SSH in as them.
When you are prompted for a password, write it down or make it memorable.

sudo adduser semaphoreci

Switch user to semaphoreci and paste your clipboard contents into the file
called ~/.ssh/authorized_keys. This will let you test deployments from
your local computer as the semaphoreci user later on. In other words, you
can SSH into your droplet as the semaphoreci user easily. It’ll make sense
later, don’t worry.

su semaphoreci
mkdir ~/.ssh
chmod 700 ~/.ssh
vim ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

We need to create an SSH key for the actual semaphoreci user, so we can then
share the contents of the private key we create on the SemaphoreCI dashboard.

Change directories to your local box’s SSH folder and create a key:

cd ~/.ssh
ssh-keygen -t rsa -b 4096

When you’re prompted to enter a file in which to save the key, enter the
following (this matches the path used throughout the rest of this article):

~/.ssh/semaphoreci_id_rsa

Don’t enter a passphrase, for simplicity.

Again, copy the contents of this SSH key (its public half) to your clipboard
using pbcopy:

cat ~/.ssh/semaphoreci_id_rsa.pub | pbcopy

Now SSH back into your droplet, switch to the semaphoreci user (see above),
and add this as a new authorized key to that same file you created earlier
(and added your own SSH key into). You should add it as the next line in the
file on your droplet at /home/semaphoreci/.ssh/authorized_keys.
This will allow SemaphoreCI access to your droplet later on:

ssh root@droplet-ip-address
su semaphoreci
vim ~/.ssh/authorized_keys

Now go back to that browser tab you have open for SemaphoreCI, and click
on the link for “Set Up Deployment”. This link is found on the page that
looks like this:

Semaphore Settings

It will then present you with options to choose from. Scroll down and select
the option titled “Generic Deployment”, and then click “Automatic”. You should
now be on a screen that looks like this:

Semaphore Deploy Commands

Add the following deploy commands where it says “Enter your deploy commands”:

Make sure you replace droplet-ip-address with the IP address of your
Digital Ocean droplet. Also, if you changed to a non-standard SSH port, change
where it says 22 in -p 22 below.

npm i -g pm2

ssh-keyscan -p 22 -H droplet-ip-address >> ~/.ssh/known_hosts

pm2 deploy ecosystem.json production

After you enter these commands, it will prompt you to paste in the value
of the private key file for the semaphoreci user. You don’t have this on
your clipboard yet, so you need to use pbcopy again locally:

cat ~/.ssh/semaphoreci_id_rsa | pbcopy

Paste the contents of your clipboard in the box shown in this screenshot:

Semaphore Private Key

If you want to easily simulate SemaphoreCI logging in as the semaphoreci user,
then you can do so by running the following from your local box:

ssh -i ~/.ssh/semaphoreci_id_rsa semaphoreci@droplet-ip-address

You can also make this command much easier by creating a file on your
local box called ~/.ssh/config with these contents (replace your droplet IP):

Host semaphoreci-droplet
  Hostname droplet-ip-address
  User semaphoreci
  ForwardAgent yes
  Port 22
  IdentityFile ~/.ssh/semaphoreci_id_rsa

Then you can just run ssh semaphoreci-droplet and save a bit of typing.
Note that I left the line Port 22 in there in case you change your SSH port.
The line that says ForwardAgent yes means it forwards your SSH agent.

I’d highly recommend you test this out right now to make sure it’s set up OK.

4. Add new GitHub Deployment Key

Since we have a semaphoreci user on our droplet, we now need to add a
deployment key on GitHub for our project, so that we can test deployment
locally.

SemaphoreCI already has added a deployment key for your project (if you set
it up correctly), so don’t be alarmed if there’s already a key created when
you get to the GitHub Deployment Key settings page for your repo. You’ll be
creating another one for local testing purposes, don’t worry!

First, SSH into your droplet as the semaphoreci user:

ssh semaphoreci-droplet

Now create an SSH key pair:

cd ~/.ssh
ssh-keygen -t rsa -b 4096

When it asks you where to save the file, use the default and hit ENTER.

Again, don’t enter a passphrase, for simplicity.

Go to GitHub, click on your project, and then go to its Settings.

Under “Deploy keys”, add a new deployment key, allow it write access, and
paste in the content of the public key we just created. To easily get the
contents of this public key on your clipboard, run this command from your
local box:

ssh semaphoreci-droplet "cat ~/.ssh/id_rsa.pub" | pbcopy

Here’s the screen showing where you enter your key. Don’t be alarmed if you
already see a deploy key in here; it’s supposed to be there, as it was added
automatically by SemaphoreCI in a previous step (yes, you’re adding another!):

GitHub Deployment Key

If you get stuck on this step or need more instructions, see this article:

5. Share /var/www Access

We created the user semaphoreci in the previous section, and now we need
to give it recursive read and write access to the /var/www folder on the
server – so that the pm2 command can deploy to the server (from both
our local box if we want to deploy manually, and also from SemaphoreCI’s
environment for the automated continuous integration deployments).

We need to SSH into the droplet as the root user, so we can then add this
folder and then give permissions on it to the semaphoreci user.

ssh root@droplet-ip-address

Now create the folder using sudo:

sudo mkdir /var/www

To stay in compliance with standards used widely by infrastructure teams,
we’ll use the classic www-data group to manage permissions on this folder.

Add the semaphoreci user to this group:

sudo adduser semaphoreci www-data

Change ownership of the folder and its files recursively:

sudo chown -R www-data:www-data /var/www

Grant the group read and write permissions (say that phrase five times fast!):

sudo chmod -R g+wr /var/www

That’s all.

If you wanted to test it out, then SSH in as the semaphoreci
user, and try to run the command touch /var/www/test.txt. It should let
you create a blank text file in that folder as the semaphoreci user. If you
did not do this properly, then you will encounter the following read/write
error later on:

pm2 deploy ecosystem.json production setup
--> Deploying to production environment
--> on host droplet-ip-address
mkdir: cannot create directory ‘/var/www’: Permission denied
mkdir: cannot create directory ‘/var/www’: Permission denied
mkdir: cannot create directory ‘/var/www’: Permission denied

6. Configure PM2 for Deployment

We’re going to set up a configuration file to be read by PM2.

On your local box, make sure you have pm2 installed globally:

npm i -g pm2

Create a new file in the root of your GitHub project called ecosystem.json.

vim ecosystem.json

Note that you can automatically create this file (with defaults) from
PM2’s CLI using pm2 ecosystem, however for the purpose of this article
I’m providing you with the content here. You need to replace the following:

  • droplet-ip-address with your droplet’s IP
  • repo property value with the path to your GitHub repo
{
  "apps": [
    {
      "name": "App",
      "script": "app.js",
      "exec_mode": "cluster",
      "instances": "max",
      "env_production": {
        "NODE_ENV": "production"
      }
    }
  ],
  "deploy": {
    "production": {
      "user": "semaphoreci",
      "host": "droplet-ip-address",
      "ref": "origin/master",
      "repo": "",
      "path": "/var/www/production",
      "post-deploy": "npm i && pm2 startOrGracefulReload ecosystem.json --env production",
      "forward-agent": "yes"
    }
  }
}

If you need a reference for the options here, see the official docs here:

Note, if you use a custom SSH port, you’ll need to add that as a "port"
property in your ecosystem.json’s deploy nested object for each environment.
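For example, a deploy block using a hypothetical custom port of 2222 might look like this (only the "port" line is new relative to the file above):

```json
{
  "deploy": {
    "production": {
      "user": "semaphoreci",
      "host": "droplet-ip-address",
      "port": "2222",
      "ref": "origin/master",
      "path": "/var/www/production"
    }
  }
}
```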

Now run setup for deployment with PM2 using the CLI command, and make sure
you run this command from the root of your project’s folder locally:

pm2 deploy ecosystem.json production setup

You could (for fun) try running this command twice. If it worked the first
time, you will get an error on the second try; it will say the folder exists
already at the path /var/www/production!

Go ahead and deploy the production environment and start its processes:

pm2 deploy ecosystem.json production

You can test it out by visiting your droplet’s IP address on port 3000 in
your browser.

If all is OK, then make sure that PM2 is scheduled to
startup automatically if your server reboots or something happens.

Make sure you run this command as the semaphoreci user on the droplet:

ssh semaphoreci-droplet
pm2 startup ubuntu

It will give you output which you will then need to run as a user with root
access, which you can get by running:

ssh root@droplet-ip-address
sudo su -c "env PATH=$PATH:/usr/local/bin pm2 startup ubuntu -u semaphoreci --hp /home/semaphoreci"

Now save the current processes to automatically restore them if your
server reboots or something happens. To do this, first make sure we have PM2
processes running that we’ll be able to save:

ssh semaphoreci-droplet
pm2 status

If no processes appear, go back to the section with the PM2 deployment
commands and run the deploy again.

If your processes appear, then run this command as the semaphoreci user
on the droplet, so that these processes will get restored if something happens:

ssh semaphoreci-droplet
pm2 save

All done! Now try to commit some code and watch SemaphoreCI deploy it for you.

For example, you could make it say “thanks nifty” instead of “hello world”:

vim app.js
app.get('/', function(req, res) {
-  res.send('hello world')
+  res.send('thanks nifty')
git add .
git commit -m 'testing out semaphoreci automatically deploying my project'
git push origin master

Now just wait and watch the SemaphoreCI dashboard. It will run a build,
then it will deploy it to your Digital Ocean droplet for you using PM2.

If you want to see the pm2 save do its magic, then just run sudo reboot,
or reboot your droplet from Digital Ocean’s interface. When it powers back on,
SSH into it as semaphoreci, and run pm2 status to see your app is running.

7. PM2 Deployment Commands

This documentation is sourced directly from Keymetrics Blog and
also from the official PM2 deploy documentation.

pm2 deploy production update – deploy the latest commit from your branch and reload the app

pm2 deploy production revert 1 – roll back to the previous deployment

pm2 deploy production exec "pm2 restart all" – run a command on the remote server

This deploy command option is inspired from TJ’s deploy shell script at:



Show HN: gofeed, a robust RSS and Atom Parser for Go


The gofeed library is a robust feed parser that supports parsing both RSS and Atom feeds. The universal gofeed.Parser will parse and convert all feed types into a hybrid gofeed.Feed model. You also have the option of parsing them into their respective atom.Feed and rss.Feed models using the feed specific atom.Parser or rss.Parser.

Supported feed types:
  • RSS 0.90
  • Netscape RSS 0.91
  • Userland RSS 0.91
  • RSS 0.92
  • RSS 0.93
  • RSS 0.94
  • RSS 1.0
  • RSS 2.0
  • Atom 0.3
  • Atom 1.0

It also provides support for parsing several popular extension modules, including Dublin Core and Apple’s iTunes extensions. See the Extensions section for more details.


Universal Feed Parser

The universal gofeed.Parser works in 3 stages: detection, parsing and translation. It first detects the feed type that it is currently parsing. Then it uses a feed specific parser to parse the feed into its true representation, which will be either a rss.Feed or atom.Feed. These models cover every field possible for their respective feed types. Finally, they are translated into a gofeed.Feed model that is a hybrid of both feed types. Performing the universal feed parsing in these 3 stages allows for more flexibility and keeps the code base more maintainable by separating RSS and Atom parsing into separate packages.


The translation step is done by anything which adheres to the gofeed.Translator interface. The DefaultRSSTranslator and DefaultAtomTranslator are used behind the scenes when you use the gofeed.Parser with its default settings. You can see how they translate fields from atom.Feed or rss.Feed to the universal gofeed.Feed struct in the Default Mappings section. However, should you disagree with the way certain fields are translated you can easily supply your own gofeed.Translator and override this behavior. See the Advanced Usage section for an example how to do this.

Feed Specific Parsers

The gofeed library provides two feed specific parsers: atom.Parser and rss.Parser. If the hybrid gofeed.Feed model that the universal gofeed.Parser produces does not contain a field from the atom.Feed or rss.Feed model that you require, it might be beneficial to use the feed specific parsers. When using the atom.Parser or rss.Parser directly, you can access all of the fields found in the atom.Feed and rss.Feed models. It is also marginally faster because you are able to skip the translation step.

However, for the vast majority of users, the universal gofeed.Parser is the best way to parse feeds. This allows the user of gofeed library to not care about the differences between RSS or Atom feeds.

Basic Usage

Universal Feed Parser

The most common usage scenario will be to use gofeed.Parser to parse an arbitrary RSS or Atom feed into the hybrid gofeed.Feed model. This hybrid model allows you to treat RSS and Atom feeds the same.

Parse a feed from a URL:
fp := gofeed.NewParser()
feed, _ := fp.ParseURL("")
Parse a feed from a string:
feedData := `<rss version="2.0">
<channel>
<title>Sample Feed</title>
</channel>
</rss>`
fp := gofeed.NewParser()
feed, _ := fp.ParseString(feedData)
Parse a feed from an io.Reader:
file, _ := os.Open("/path/to/a/file.xml")
defer file.Close()
fp := gofeed.NewParser()
feed, _ := fp.Parse(file)

Feed Specific Parsers

You can easily use the rss.Parser and atom.Parser directly if you have a usage scenario that requires it:

Parse an RSS feed into an rss.Feed:
feedData := `<rss version="2.0">
<channel>
<webMaster>example@site.com (Example Name)</webMaster>
</channel>
</rss>`
fp := rss.Parser{}
rssFeed, _ := fp.Parse(strings.NewReader(feedData))
Parse an Atom feed into an atom.Feed:
feedData := `<feed xmlns="http://www.w3.org/2005/Atom">
<title>Example Atom</title>
</feed>`
fp := atom.Parser{}
atomFeed, _ := fp.Parse(strings.NewReader(feedData))

Advanced Usage

Parse a feed while using a custom translator

The mappings and precedence order that are outlined in the Default Mappings section are provided by the following two structs: DefaultRSSTranslator and DefaultAtomTranslator. If you have fields that you think should have a different precedence, or if you want to make a translator that is aware of an unsupported extension you can do this by specifying your own RSS or Atom translator when using the gofeed.Parser.

Here is a simple example of creating a custom Translator that makes the /rss/channel/itunes:author field have a higher precedence than the /rss/channel/managingEditor field in RSS feeds. We will wrap the existing DefaultRSSTranslator since we only want to change the behavior for a single field.

First we must define a custom translator:

type MyCustomTranslator struct {
    defaultTranslator *DefaultRSSTranslator
}

func NewMyCustomTranslator() *MyCustomTranslator {
    t := &MyCustomTranslator{}

    // We create a DefaultRSSTranslator internally so we can wrap its Translate
    // call since we only want to modify the precedence for a single field.
    t.defaultTranslator = &DefaultRSSTranslator{}
    return t
}

func (ct *MyCustomTranslator) Translate(feed interface{}) (*Feed, error) {
    rss, found := feed.(*rss.Feed)
    if !found {
        return nil, fmt.Errorf("Feed did not match expected type of *rss.Feed")
    }

    // Delegate to the wrapped default translator first.
    f, err := ct.defaultTranslator.Translate(rss)
    if err != nil {
        return nil, err
    }

    // Give itunes:author precedence over managingEditor.
    if rss.ITunesExt != nil && rss.ITunesExt.Author != "" {
        f.Author = rss.ITunesExt.Author
    } else {
        f.Author = rss.ManagingEditor
    }
    return f, nil
}

Next you must configure your gofeed.Parser to utilize the new gofeed.Translator:

feedData := `<rss version="2.0">
<channel>
<managingEditor>Ender Wiggin</managingEditor>
<itunes:author>Valentine Wiggin</itunes:author>
</channel>
</rss>`

fp := gofeed.NewParser()
fp.RSSTrans = NewMyCustomTranslator()
feed, _ := fp.ParseString(feedData)
fmt.Println(feed.Author) // Valentine Wiggin


Extensions

Every element which does not belong to the feed’s default namespace is considered an extension by gofeed. These are parsed and stored in a tree-like structure located at Feed.Extensions and Item.Extensions. These fields should allow you to access and read any custom extension elements.

In addition to the generic handling of extensions, gofeed also has built-in support for parsing certain popular extensions into their own structs for convenience. It currently supports the Dublin Core and Apple iTunes extensions, which you can access at Feed.ITunesExt, Feed.DublinCoreExt, Item.ITunesExt, and Item.DublinCoreExt.
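To illustrate the tree-like shape described above, here is a self-contained sketch using simplified, hypothetical types (not gofeed’s actual ext package): namespace prefix maps to element name, which maps to a list of parsed elements.

```go
package main

import "fmt"

// Extension is a simplified stand-in for a parsed extension element.
// This is an illustrative type, not gofeed's real Extension struct.
type Extension struct {
	Name  string
	Value string
}

// Extensions mirrors the nesting the README describes:
// namespace prefix -> element name -> parsed elements.
type Extensions map[string]map[string][]Extension

// lookup returns the first value for a prefix/name pair, if present.
func lookup(exts Extensions, prefix, name string) (string, bool) {
	elems := exts[prefix][name] // indexing a missing prefix yields a nil map, which is safe
	if len(elems) == 0 {
		return "", false
	}
	return elems[0].Value, true
}

func main() {
	exts := Extensions{
		"itunes": {
			"author": {{Name: "author", Value: "Jane Doe"}},
		},
	}
	if v, ok := lookup(exts, "itunes", "author"); ok {
		fmt.Println(v) // prints "Jane Doe"
	}
}
```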

Invalid Feeds

A best-effort attempt is made at parsing broken and invalid XML feeds. Currently, gofeed can successfully parse feeds with the following issues:

  • Unescaped/Naked Markup in feed elements
  • Undeclared namespace prefixes
  • Missing closing tags on certain elements
  • Illegal tags within feed elements without namespace prefixes
  • Missing “required” elements as specified by the respective feed specs.
  • Incorrect date formats

Default Mappings

The DefaultRSSTranslator and the DefaultAtomTranslator map the following rss.Feed and atom.Feed fields to their respective gofeed.Feed fields. They are listed in order of precedence (highest to lowest):

gofeed.Feed | RSS | Atom
Title | /rss/channel/title |
Description | /rss/channel/description |
Link | /rss/channel/link |
FeedLink | /rss/channel/atom:link[@rel="self"]/@href |
Updated | /rss/channel/lastBuildDate |
Published | /rss/channel/pubDate |
Author | /rss/channel/managingEditor |
Language | /rss/channel/language |
Image | /rss/channel/image |
Copyright | /rss/channel/copyright |
Generator | /rss/channel/generator | /feed/generator
Categories | /rss/channel/category |

gofeed.Item | RSS | Atom
Title | /rss/channel/item/title |
Description | /rss/channel/item/description |
Content | | /feed/entry/content
Link | /rss/channel/item/link |
Updated | /rss/channel/item/dc:date |
Published | /rss/channel/item/pubDate | /feed/entry/published
Author | /rss/channel/item/author |
Guid | /rss/channel/item/guid | /feed/entry/id
Image | /rss/channel/item/itunes:image |
Categories | /rss/channel/item/category |
Enclosures | /rss/channel/item/enclosure | /feed/entry/link[@rel="enclosure"]



This project is licensed under the MIT License.


  • Mark Pilgrim for his work on the excellent Universal Feed Parser Python library. This library was referenced several times during the development of gofeed, and many of its unit test cases were ported to the gofeed project.
  • Dan MacTough for his work on node-feedparser. It provided inspiration for the set of fields that should be covered in the hybrid gofeed.Feed model.
  • Matt Jibson for his date parsing function in the goread project.
  • Jim Teeuwen for his method of representing arbitrary feed extensions in the go-pkg-rss library.


Warcraft fans’ fury at Blizzard over server closure

[Image: World of Warcraft logo (Blizzard Entertainment). The original game was released in 2004]

A petition to allow gamers to run their own servers for the original World of Warcraft (WoW) game has attracted almost 100,000 signatures online.

Games studio Blizzard Entertainment no longer operates servers for the original WoW, which was first released in 2004, so some fans run their own.

On 10 April, a popular fan server known as Nostalrius, with 150,000 active members, was closed after the threat of legal action by Blizzard Entertainment.

Blizzard has yet to respond to the BBC.

World of Warcraft is an online multi-player game in which players explore a vast landscape, complete quests and interact with other gamers.

[Image (Blizzard Entertainment): Gamers can play as mythical characters in World of Warcraft]

The original game from 2004 has since been updated with new instalments that some players say have materially changed the experience of the game, so some fans have set up their own servers to play the original, “vanilla” WoW.

Blizzard has previously said it had no plans to reopen access to the “classic” game.

“We realise that some of you feel that World of Warcraft was more fun in the past than it is today, and we also know that some of you would like nothing more than to go back and play the game as it was back then,” wrote a Blizzard community manager in 2011.

“The developers however prefer to see the game continuously evolve and progress, and as such we have no plans to open classic realms or limited expansion content realms.”

The decision to close fan-run server Nostalrius, which had attracted 800,000 players during its year online, has prompted anger from parts of the online gaming community.

Many who commented on the closure acknowledged the fact that running servers such as Nostalrius was technically illegal but said Blizzard should support non-profit fan-driven projects to keep the game going.

[Image: YouTuber Jon Jafari, known as JonTron, has criticised Blizzard’s actions]

“World of Warcraft meant a lot, to a lot of people,” said YouTube gamer Jon Jafari in a widely-shared video.

“It might seem silly because it’s just a game… but this game was a big part of my life and a lot of people’s lives. All these people want to do is go back and play it.

“If the server is making a profit, I can see them taking down something like that – but a lot of these are just for the love of the old game.”

Commenting on its closure, the team behind Nostalrius posted: “We never saw our community as a threat for Blizzard. It sounds more like a transverse place where players can continue to enjoy old World of Warcraft’s games no longer available.”


Make a Raspberry Pi-Powered Remote Camera that Monitors Weather, Temperature, and More

We know you can turn a Raspberry Pi into a cheap surveillance system, and that displaying images on a small screen is no problem either. Adafruit combined all this together with some sensors to create a system where you can monitor the camera and data on one Raspberry Pi from another Raspberry Pi.



Visual Studio Code 1.0

April 14, 2016 by The VS Code Team, @code

header graphic

Today we’re very proud to release version 1.0 of Visual Studio Code. Since our initial launch one year ago, 2 million developers have installed VS Code. Today, we’re excited to report that more than 500,000 developers actively use VS Code each month.

What started as an experiment to build a production quality editor using modern web technologies has blossomed into a new kind of cross-platform development tool, one that focuses on core developer productivity by centering the product on rich code editing and debugging experiences. Visual Studio Code brings the industry-leading experiences of Visual Studio to a streamlined development workflow that can be a core part of the tool set of every developer, building any kind of application.

Getting to “1.0” over the last few months has been about more than features. We have worked with the community to further improve stability, fixing hundreds of bugs. And we’ve pushed hard on getting the best performance we can out of the editing experience.

VS Code was initially built for developers creating web apps using JavaScript and TypeScript. But in less than 6 months since we made the product extensible, the community has built over 1000 extensions that now provide support for almost any language or runtime in VS Code. Today, a broad range of developers from individuals and startups to Fortune 500 companies, including audiences completely new to Microsoft’s tools, are all more productive with a tool that fits comfortably into their current tool chain and workflow, and supports the technologies they use, from Go and Python to React Native and C++. With this great ecosystem in place, we’re now confident in declaring our API as stable, and guaranteeing compatibility going forward.

And we have strived to make VS Code 1.0 a great editor for every developer. VS Code is now fully localizable, and ships in 9 different languages, including French, German, Japanese, and Chinese. And, we have worked to make VS Code the most accessible of modern editors, with full keyboard navigation and support for screen reading and accessible navigation for visually impaired developers.

We could not have reached this important milestone without the help of all our contributors. Since committing to doing development in the open less than four months ago, we’ve consumed over 300 pull requests. Whether you created a PR, filed an issue, gave a thumbs up, tweeted, or simply used VS Code in your day-to-day, you’re a part of the team. Thank you!

[installs graphic]

The History of VS Code

Can we build a code editor fast enough that it doesn’t feel like you’re typing in a browser?

It was only a few short years ago that we kicked off what we then called the “Monaco” team. At the time, browsers were just beginning to introduce HTML5, and the race to build faster JavaScript runtimes was in full swing.

So we set out to answer the question, “Can we build a browser-based code editor that feels native?” Not just an experience for text editing, but source code editing. Suggestion lists, error and warning squiggles, Go to Definition, and more.

Today, we believe the answer was a resounding “Yes”. The editor we built can now be found on some of the most demanding global websites – OneDrive, Visual Studio Team Services, Bing Code Search, Azure – sites used by millions of people every day. It even ships to hundreds of millions of Windows desktops with the F12 tools in Internet Explorer. And that same editor is at the heart of VS Code.

Of course, to build the editor we needed a development tool. Developers know that one of the best ways to evolve your code quickly is to “dogfood” it: use it the same way your customers will. It therefore made sense that we would create a local Node.js based service to serve up files and the editor in a lightweight development tool. This tool eventually made its way to the cloud as a part of Azure Websites.

But we strove to go further. We wanted to build a native development tool that developers could install and use anywhere, for any source code. And, from our experience, we believed that it was important to not just have an editor, but one that could help developers accomplish their most common tasks: navigating code, debugging, and working with Git. And, so, Visual Studio Code was born.

Being built on web technologies made it easy to host the tool in a native cross-platform shell. We decided early on to use, and contribute to, a number of open source technologies – including GitHub’s great Electron shell, which combines web and native UI with a Node.js API. In just a few short months, we were able to release the first preview of Visual Studio Code at //build/ 2015.

The initial response to a code editor running on OS X, Windows, and Linux was overwhelmingly positive, even with two fundamental gaps in the offering – extensibility and open development.

Keeping our principle of using VS Code the way our customers do, we decided that the best way to deliver a rich and stable API was to build VS Code using the same API we would expose to extension developers. In fact, the core language services for JavaScript and TypeScript are actually extensions that just happen to be bundled with the distribution. Today, we use VS Code to build and debug VS Code, its extensions, and Node-based services. The same rich TypeScript editing, navigation, and debugging experiences we enjoy when building VS Code are available to everyone developing an extension for VS Code. Six months after our initial preview release, we declared VS Code to be Beta quality at Connect(); 2015, with a full extensibility model, and support in the new Visual Studio Marketplace.
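As a rough illustration of that extension model, a VS Code extension is described by a package.json manifest that declares its contribution points; the extension and command names below (hello-sample, sample.helloWorld) are made-up placeholders:

```json
{
  "name": "hello-sample",
  "version": "0.0.1",
  "engines": { "vscode": "^1.0.0" },
  "main": "./extension",
  "activationEvents": ["onCommand:sample.helloWorld"],
  "contributes": {
    "commands": [
      { "command": "sample.helloWorld", "title": "Hello World" }
    ]
  }
}
```

The module named by "main" exports an activate function in which the command is registered through the vscode API; the bundled TypeScript and JavaScript language services mentioned above follow this same shape.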

And at the same time, we open-sourced the VS Code repository and many of our own extensions, and moved to developing Visual Studio Code in the open.

[timeline graphic]

Being “1.0”

Today, Visual Studio Code delivers on many of the aspects that we imagined during incubation. VS Code has great editing and navigation experiences, streamlined debugging, and built-in Git support.

Developers today love VS Code for its powerful set of built-in features, intuitive editing and debugging experiences, performance and responsiveness, and great language and platform support. The VS Code download is under 40MB including support for 9 additional languages (Simplified Chinese, Traditional Chinese, French, German, Italian, Japanese, Korean, Russian and Spanish) and it installs in seconds. With the help of developers like @zersiax, VS Code is now accessible to visually impaired developers on Windows and soon on OS X and Linux.

More than anything else, what drives the success of Visual Studio Code is the feedback and interactions from the community. From the beginning, we’ve striven to be as open as possible in our roadmap and vision for VS Code, and in November, we took that a step further by open-sourcing VS Code and adding the ability for anyone to make it better through submitting issues and feedback, making pull requests, or creating extensions.

The community responded, with huge growth in the number of extensions and the ways they’re using VS Code. Today we have extensions for Node.js, Go, C++, PHP, and Python, as well as many more languages, linters, and tools. And VS Code is being used not only by teams of developers, but also in companies like Progressive Insurance, where VS Code is used not just by developers, but by analysts and data scientists as well.

Seeing the support and help the community has already poured into the product, the potential for VS Code has never been greater.

Looking Ahead

While we’re excited about releasing 1.0 today, we are even more excited about the future.

Of course, we will continue to focus on the fundamentals. Performance, stability, accessibility, and compatibility are of utmost importance to our users, and to us as well. We will continue to invest in improving developer productivity, guided by the great user feedback on UserVoice. We will continue to work with partners and the community to expand support for new languages, platforms, and experiences. And we will continue to work with you, our community, to build a great tool for you, and for every developer.

If you haven’t tried out Visual Studio Code yet, please download it and let us know what you think!

Thanks Again!

The VS Code Team, @code

Original article

Genius Is Hiring – Level up as a SEIT

Genius began as a website for decoding rap lyrics and has since evolved into the Internet’s best source for musical knowledge, with more than 40 million monthly unique visitors and tens of thousands of contributing users. Our mission is to be the layer on top of culture that helps you understand and appreciate it—in other words, to “annotate the world.”

Engineers at Genius have an unusual amount of leverage. Although we’re working on a huge project, and although we’ve raised more than $55 million, we’re still a small team. So every engineer is intimately involved in the product planning process. You’ll never work on a feature that you don’t think is a good idea to build.

Engineer in Test

Genius is looking for an Engineer in Test to ensure the quality of our products by leading both our automated and manual testing efforts. As Engineer in Test you’ll push the engineering team as a whole to build better and more robust software by identifying, diagnosing, and following up on issues, and by building automated testing tools that can keep up with our rapid deploy cycle. With strong performance, an Engineer in Test may advance to full-stack engineer or another technical or product role.


Responsibilities:

  • Develop detailed knowledge of Genius’ products and an intuition for how to break them and expose bugs before users do
  • Drive adoption of testing best practices, metrics to monitor application changes associated with releases, bug prevention strategies, and other quality measures
  • Maintain and expand existing automated test suites for the web application frontend (Angular.js) and server backend (Ruby, Rails)
  • Evaluate and incorporate new testing and infrastructure tools to increase automated test coverage
  • Work with mobile developers and product managers to verify new mobile application features and identify regressions before release
  • Follow up on incoming bug reports from staff, users, and exception reporting tools


Requirements:

  • Strong background in web design and development, including testing best practices
  • Proficiency with at least one general-purpose programming language
  • Experience working on and supporting moderately sized to large production applications
  • Exceptional candidates will have experience building testing harnesses for large, high-traffic web applications

Original article

Microsoft Releases CentOS-Based ‘Linux Data Science Virtual Machine’ For Azure

An anonymous reader writes: Microsoft has announced a CentOS-based VM image for Azure called ‘Linux Data Science Virtual Machine’. The VM has pre-installed tools such as Anaconda Python Distribution, Computational Network Toolkit, and Microsoft R Open. It focuses on machine learning and analytics, making it a great choice for data scientists. “Thanks to Azure’s worldwide cloud infrastructure, customers now have on-demand access to a Linux environment to perform a wide range of data science tasks. The VM saves customers the time and effort of having to discover, install, configure and manage these tools individually. Hosting the data science VM on Azure ensures high availability, elastic capacity and a consistent set of tools to foster collaboration across your team”, says Gopi Kumar, Senior Program Manager, Microsoft Data Group.

Read more of this story at Slashdot.

Original article
