PgAdmin 4

As you may know, many of us on the pgAdmin team have been hard at work on pgAdmin 4 for some time now. pgAdmin 4 is a complete rewrite of pgAdmin (the fourth incarnation, as you might guess), the previous version having reached the end of its maintainable life after 14 years of development.

Work on the project began slowly almost two years ago, but the team at EnterpriseDB has ramped up the development pace over the last few months. Right now we’re approaching alpha readiness, which we expect to reach within a few weeks.

Architecture

This new application is designed to run both on the desktop and on a web server. Written in Python using the Flask framework for the backend, and JavaScript/jQuery/Backbone for the frontend, it can easily be deployed as a WSGI application for multiple users in practically any network environment. A small runtime application allows it to be run as a desktop application – this is a Qt executable that incorporates a Python interpreter and web browser along with the main application in a single package that can be installed on a developer laptop, as with previous versions of pgAdmin.

Functionality

Whilst the core functionality of pgAdmin 4 remains similar to pgAdmin 3, there are a number of changes we’ve made:

  • Support for database server versions that have reached end-of-life has been dropped.
  • We haven’t re-implemented support for some object types that no one really used in the tool before – for example, operator classes and families.
  • We haven’t (yet) reimplemented some of the tools that didn’t work so well in pgAdmin 3, such as the graphical query builder or database designer (which was always disabled entirely by default).
  • The Query Tool and Edit Grid have been merged into a single tool. Over coming releases we’ll be improving the functionality further to allow in-grid updates to be made to results from arbitrary queries (where a query is determined to be updateable). For now though, updating is allowed when pgAdmin knows the data source is a single table with a primary key.
  • The user interface is more flexible than ever, allowing tabs to be docked and re-arranged in more ways than previously.
  • We’ve spent time redesigning some of the UI paradigms in pgAdmin 3. Gone are the list controls with Add/Remove buttons, replaced with what we call sub-node grid controls that will allow in-grid editing of key values, with more detail available when needed in expandable rows.
  • We also spent time thinking about how to make pgAdmin faster to use, by minimising the need to switch between dialogues, using searchable combo boxes, and more.
  • The UI is much more attractive, making use of control groupings and expandable regions to make things more readable.

Screenshots

So, enough of the babble, here are some pre-release, semi-polished screenshots:

The main user interface, showing the properties of a function. 

Setting the ACL on a function. 
Adding a member to a composite type using the sub-node grid control. 
The Query Tool and Data Editor.

The Procedural Language Debugger.

Team

As you can imagine, there has been a significant amount of work done to get to this stage, and I really need to express my gratitude to those who have contributed, as well as the executive management team at EnterpriseDB who have allowed me to commit so many people to this project:

Project leadership
  • Ashesh Vashi (engineering team manager, code guru)
  • Karen Blatchley (project manager)

Development team
  • Khushboo Vashi
  • Akshay Joshi
  • Arun Kollan
  • Harshal Dhumal
  • Murtuza Zabuawala
  • Neel Patel
  • Sanket Mehta
  • Surinder Kumar

Packaging
  • Muhammad Aqeel
  • Paresh More
  • Sandeep Thakkar
QA
  • Priyanka Shendge
  • Fahar Abbas
Of course, there are also community members who are starting to contribute fixes and other improvements, such as Thom Brown (on his own time, not EDB’s), Seçkin Alan, Ronan Dunklau and Prasad Somwanshi, all of whom (along with others I may have missed) deserve thanks.

Want to help or learn more?

If you want to help, you can check out the code and start playing with it. We’re not yet feature complete (for example, the Tables node in the treeview is still in development), but we’re pretty close. Feel free to try out the code, and report, or better yet fix, any bugs or issues you may find. If you wish to start working on new features, that is also welcome, but please do email the hackers list first to ensure your work is not something that’s already on our project plan!
If anyone would like to talk more about pgAdmin 4, I’ll be at PGConf.US next week – the organisers know me well and should be able to help you find me for a chat or demo. See you there!


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/V_jUvRLSNTw/pgadmin-4-elephant-nears-finish-line.html

Original article

Show HN: How to Setup Node.js App Automated Deployment and CI with PM2 for MVP’s

April 2016

tldr; This article shows you how to configure an insanely simple
automated continuous integration and deployment setup for a Node.js
app using GitHub, PM2, Digital Ocean, and
SemaphoreCI. I wrote it because nothing like this in its
entirety exists. It should take you 30 minutes to set up properly.

CI PM2 Node GitHub Server Setup


Preface

I’ve used or explored nearly every CI testing tool there is for Node.js
(maybe?). I have tried TravisCI, but grew tired of constant downtime
and slow, very slow build times (…OK, the builds ran fast, but they did not
kick off quickly!). Also I’ve tried CircleCI, but their founders
removed my thoughts from their community because they didn’t
agree to allow the file name for YAML config to be .circle.yml instead of
circle.yml. I also faced troubles while trying to configure and set up
Jenkins (though that was while I was working with an inexperienced
team, who were the ones setting it up). I’ve also looked at
Shippable, but it really didn’t interest me, just like the rest
– because these days I enjoy working with SemaphoreCI,
mainly since the prodigy TJ Holowaychuk recommended it to me.

For anyone interested in getting into the automated CI deployment business,
it’s relatively straightforward to market yourself – just list yourself
in all the Wikipedia articles, on Quora (with some upvote magic), have a good
service that doesn’t shut down or lie about build times, and have clear docs.
If you do those four things, you’re on the way to at least some passive income!

With regards to server hosting, I chose Digital Ocean because they rock.
I have never had a problem with them in over five years. That’s something!
I also printed t-shirts for Digital Ocean before I sold Teelaunch, and really
liked working with them.

Not only all that, but their service has great uptime, and their boxes
(“droplets”) are really fast to set up and reliable. I’m not a huge fan of
using Amazon EC2 and AWS in general for building Rapid MVPs (of course
I would definitely use load balancing or something for scaling an app that has
thousands of users across the world). If your first question about building
an app is “How can I scale it?” or “Will Digital Ocean let me scale?” –
take my advice, you’re doing it wrong. Stop it. Think Rapid MVP.

To put it simply, Amazon has an interface that resembles a wild jungle with
overgrown vines on every tree, and Digital Ocean’s interface is a beautiful
oasis in a vast VPS desert.

As a side note, I can almost guarantee you that sometime in the future,
everyone will want barebones boxes connected to ethernet plugs, because imagine
when everyone has fiber internet and anyone can host their e-commerce store
from a Raspberry Pi running on their kitchen table.

1. Create your Droplet

First, you need a Digital Ocean account. Be patient as their signup process
may require you to verify your email and enter your credit card.

Sign up with this link to get $10 of free credit (2 months of hosting):
https://m.do.co/c/a7fe489d1b27

When you create your Digital Ocean (“DO”) droplet, be sure to allow SSH-only
access and add your SSH key to Digital Ocean. You can do this from DO’s
dashboard, and you can find out more about it in a Digital Ocean article.

Make sure you create a droplet using the latest stable Ubuntu release.

Digital Ocean Droplet

SSH into the droplet and install the dependencies for your Node stack.
In my case, I needed to install Node, MongoDB, and Redis. Of course, MongoDB
and Redis are optional dependencies, but I use them because they allow me to
build Rapid MVPs (quick prototypes, in other words). Also, I really
like to use NVM, created by another prodigy, Tim Caswell, to manage the
various versions of Node installed.

Make sure you replace all instances in this article of droplet-ip-address
with the IP address given to you by Digital Ocean for your droplet.

ssh root@droplet-ip-address

Install the basic requirements needed for the server:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install vim build-essential libssl-dev git unattended-upgrades authbind openssl fail2ban

Install NVM and set it up to use the latest stable version:

curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | bash
nvm install stable
nvm alias default stable

Install PM2, which will handle deployments for us and manage our processes:

npm i -g pm2

Install MongoDB, which is optional:

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
echo "deb http://repo.mongodb.org/apt/ubuntu "$(lsb_release -sc)"/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list
sudo apt-get update
sudo apt-get install -y mongodb-org
service mongod status

Install Redis, which is optional:

sudo add-apt-repository ppa:chris-lea/redis-server
sudo apt-get update
sudo apt-get install redis-server
redis-benchmark -q -n 1000 -c 10 -P 5
source ~/.profile

You might also want to look into setting up fail2ban (installed earlier), changing the default
SSH port, and removing password-based login access. You can find out how to do this
in this section of my security article, or just Google it.
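
If you decide to harden SSH yourself, the relevant settings live in /etc/ssh/sshd_config on the droplet. A minimal sketch (the port number is purely illustrative; keep your current root session open while testing so you don’t lock yourself out):

Port 2222
PasswordAuthentication no

Then restart the SSH service so the changes take effect:

sudo service ssh restart

If you change the port, remember to also update the -p 22 flag in the SemaphoreCI deploy commands and the Port line in the ~/.ssh/config entry shown later in this article.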

2. Write your Node App File

This article assumes you already have created a GitHub repository for your
project and that you already have some app.js file in the root of it. If you
haven’t done that yet, then this section is for you. This section also
describes how to configure that app.js file for zero-downtime and graceful
reloading upon deployment of code.

For the purpose of this article, I share a basic app example that will respond
with “hello world” when you visit your droplet later on (over port 3000).

Answer yes to all the prompts or just hit ENTER to breeze through it:

npm init

Now save the basic express dependency:

npm i --save express

Create a new file called app.js (or edit your existing one to include a SIGINT handler):

vim app.js
var express = require('express');
var app = express();

app.get('/', function(req, res) {
  res.send('hello world');
});

app.listen(3000);

process.on('SIGINT', function() {

  // Do your graceful cleanup here: stop accepting new requests,
  // close database connections, flush anything pending, and so on.

  setTimeout(function() {
    // Give cleanup a moment to finish, then exit cleanly.
    process.exit(0);
  }, 300);

});

Let’s test this out locally before you bother to continue further.

node app.js

Visit this URL in your browser (it should say “hello world”):
http://localhost:3000

By default, PM2 will allow 1.6 seconds for your app to gracefully exit,
and you can read more on how to configure your app for zero-downtime here:
http://pm2.keymetrics.io/docs/usage/signals-clean-restart/
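
If your shutdown work needs a different window, PM2’s kill_timeout option (in milliseconds) can be set per app in the ecosystem.json file you’ll create in step 6. A minimal sketch, with a purely illustrative value:

{
  "apps": [
    {
      "name": "App",
      "script": "app.js",
      "kill_timeout": 5000
    }
  ]
}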

3. Set up SSH for SemaphoreCI

First, go to https://semaphoreci.com and sign up for an account.

Once you’ve logged in, create a project and connect with your GitHub account.

SemaphoreCI Loading

Make sure that your “Node version” shown under your SemaphoreCI project’s
build settings matches the output from your droplet when you run node -v.

For example, in this screenshot I have selected the v5.8.0 that I’m using.

SemaphoreCI Node Version

Now we need to add a user to the droplet to let SemaphoreCI deploy the app
after all tests have successfully passed.

Keep your SemaphoreCI browser tab open, because we will come back to that
in just a bit!

Copy to your clipboard the contents of your local ~/.ssh/id_rsa.pub file.
If you have not yet already created this file, see GitHub’s instructions.

I’m using pbcopy (while on Mac OS X) to make it easy and do it the CLI way:

cat ~/.ssh/id_rsa.pub | pbcopy

Now SSH back into your droplet if you’re not still connected:

ssh root@droplet-ip-address

Add the user semaphoreci on the droplet, so you can then SSH in as them.
When you are prompted for a password, write it down or make it memorable.

sudo adduser semaphoreci

Switch user to semaphoreci and paste your clipboard contents into the file
called ~/.ssh/authorized_keys. This will let you test deployments from
your local computer as the semaphoreci user later on. In other words, you
can SSH into your droplet as the semaphoreci user easily. It’ll make sense
later, don’t worry.

su semaphoreci
mkdir ~/.ssh
chmod 700 ~/.ssh
vim ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
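
To confirm the key you just added works, you can try this from your local box (it should log you in as semaphoreci without prompting for the password you set):

ssh semaphoreci@droplet-ip-address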

We need to create an SSH key for the actual semaphoreci user, so we can then
share the contents of the private key we create on the SemaphoreCI dashboard.

Change directories to your local box’s SSH folder and create a key:

cd ~/.ssh
ssh-keygen -t rsa -b 4096

When you’re prompted to enter a file to save the key, enter the following:

semaphoreci_id_rsa

Don’t enter a passphrase, for simplicity.

Again, copy the contents of this SSH key now to your clipboard using pbcopy:

cat ~/.ssh/semaphoreci_id_rsa.pub | pbcopy

Now SSH back into your droplet, switch to the semaphoreci user (see above),
and add this as a new authorized key to that same file you created earlier
(and added your own SSH key into). You should add it as the next line in the
file on your droplet at /home/semaphoreci/.ssh/authorized_keys.
This will allow SemaphoreCI access to your droplet later on:

ssh root@droplet-ip-address
su semaphoreci
vim ~/.ssh/authorized_keys

Now go back to that browser tab you have open for SemaphoreCI, and click
on the link for “Set Up Deployment”. This link is found on the page that
looks like this:

Semaphore Settings

It will then present you with options to choose from. Scroll down and select
the option titled “Generic Deployment”, and then click “Automatic”. You should
now be on a screen that looks like this:

Semaphore Deploy Commands

Add the following deploy commands where it says “Enter your deploy commands”:

Make sure you replace droplet-ip-address with the IP address of your
Digital Ocean droplet. Also, if you changed to a non-standard SSH port, change
where it says 22 in -p 22 below.


# install pm2 in the SemaphoreCI build environment
npm i -g pm2

# make sure the build machine trusts the droplet's host key
ssh-keyscan -p 22 -H droplet-ip-address >> ~/.ssh/known_hosts

# deploy the latest code to the droplet
pm2 deploy ecosystem.json production

After you enter these commands, Semaphore will prompt you to paste in the value
of the private key file for the semaphoreci user. You don’t have this on
your clipboard yet, so you need to use pbcopy again locally:

cat ~/.ssh/semaphoreci_id_rsa | pbcopy

Paste the contents of your clipboard in the box shown in this screenshot:

Semaphore Private Key

If you want to easily simulate SemaphoreCI logging in as the semaphoreci user,
you can do so by running the following from your local box:

ssh -i ~/.ssh/semaphoreci_id_rsa semaphoreci@droplet-ip-address

You can also make this command much easier by creating a file on your
local box called ~/.ssh/config with these contents (replace the droplet IP):

Host semaphoreci-droplet
  Hostname droplet-ip-address
  User semaphoreci
  ForwardAgent yes
  Port 22
  IdentityFile ~/.ssh/semaphoreci_id_rsa

Then you can just run ssh semaphoreci-droplet and save a bit of typing.
Note that I left the line Port 22 in there in case you change your SSH port.
The line that says ForwardAgent yes means it forwards your SSH agent.

I’d highly recommend you test this out right now to make sure it’s set up OK.

4. Add new GitHub Deployment Key

Since we now have a semaphoreci user on our droplet, we need to add a deployment
key on GitHub for our project, so that we can test deployment locally.

SemaphoreCI already has added a deployment key for your project (if you set
it up correctly), so don’t be alarmed if there’s already a key created when
you get to the GitHub Deployment Key settings page for your repo. You’ll be
creating another one for local testing purposes, don’t worry!

First SSH into your droplet as the semaphoreci user:

ssh semaphoreci-droplet

Now create an SSH key pair:

cd ~/.ssh
ssh-keygen -t rsa -b 4096

When it asks you where to save the file, use the default and hit ENTER.

Again, don’t enter a passphrase, for simplicity.

Go to https://github.com and click on your project, then go to its Settings.

Under “Deploy keys” add a new deployment key, allow it write access, and
paste the id_rsa.pub public key file’s content we just created. To easily
get the contents of this public key on your clipboard, from your local box
run this command:

ssh semaphoreci-droplet "cat ~/.ssh/id_rsa.pub" | pbcopy

Here’s the screen showing where you enter your key. Don’t be alarmed if you
already see a deploy key in here; it’s supposed to be there, as it was added
automatically by SemaphoreCI in a previous step (yes, you’re adding another one!):

GitHub Deployment Key

If you get stuck on this step or need more instructions, see this article:

https://developer.github.com/guides/managing-deploy-keys/#deploy-keys

5. Share /var/www Access

We created the user semaphoreci in the previous section, and now we need
to give it recursive read and write access to the /var/www folder on the
server – so that the pm2 command can deploy to the server (from both
our local box if we want to deploy manually, and also from SemaphoreCI’s
environment for the automated continuous integration deployments).

We need to SSH into the droplet as the root user, so we can then add this
folder and then give permissions on it to the semaphoreci user.

ssh root@droplet-ip-address

Now create the folder using sudo:

sudo mkdir /var/www

To stay in compliance with standards used widely by infrastructure teams,
we’ll use the classic www-data group to manage permissions on this folder.

Add the semaphoreci user to this group:

sudo adduser semaphoreci www-data

Change ownership of the folder and its files recursively:

sudo chown -R www-data:www-data /var/www

Grant the group read and write permissions (say that phrase five times fast!):

sudo chmod -R g+wr /var/www

That’s all.

If you wanted to test it out, then SSH in as the semaphoreci
user, and try to run the command touch /var/www/test.txt. It should let
you create a blank text file in that folder as the semaphoreci user. If you
did not do this properly, then you will encounter the following read/write
error later on:

pm2 deploy ecosystem.json production setup
--> Deploying to production environment
--> on host droplet-ip-address
mkdir: cannot create directory ‘/var/www’: Permission denied
mkdir: cannot create directory ‘/var/www’: Permission denied
mkdir: cannot create directory ‘/var/www’: Permission denied

6. Configure PM2 for Deployment

We’re going to set up a configuration file to be read by PM2.

On your local box, make sure you have pm2 installed globally:

npm i -g pm2

Create a new file in the root of your GitHub project called ecosystem.json.

vim ecosystem.json

Note that you can automatically create this file (with defaults) from
PM2’s CLI using pm2 ecosystem; however, for the purpose of this article
I’m providing you with the content here. You need to replace the following:

  • droplet-ip-address with your droplet’s IP
  • repo property value with the path to your GitHub repo
{
  "apps": [
    {
      "name": "App",
      "script": "app.js",
      "exec_mode": "cluster",
      "instances": "max",
      "env_production": {
        "NODE_ENV": "production"
      }
    }
  ],
  "deploy": {
    "production": {
      "user": "semaphoreci",
      "host": "droplet-ip-address",
      "ref": "origin/master",
      "repo": "git@github.com:username/reponame.git",
      "path": "/var/www/production",
      "post-deploy": "npm i && pm2 startOrGracefulReload ecosystem.json --env production",
      "forward-agent": "yes"
    }
  }
}

If you need a reference for the options here, see the official docs here:
http://pm2.keymetrics.io/docs/usage/deployment/

Note: if you have a custom SSH port, you’ll need to add it as a "port"
property in your ecosystem.json’s deploy object for each environment.
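
For example (a sketch only; 2222 stands in for whatever port you chose, and the rest mirrors the file above):

  "deploy": {
    "production": {
      "user": "semaphoreci",
      "host": "droplet-ip-address",
      "port": "2222",
      "ref": "origin/master",
      "repo": "git@github.com:username/reponame.git",
      "path": "/var/www/production",
      "post-deploy": "npm i && pm2 startOrGracefulReload ecosystem.json --env production"
    }
  }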

Now run setup for deployment with PM2 using the CLI command, and make sure
you run this command from the root of your project’s folder locally:

pm2 deploy ecosystem.json production setup

You could (for fun) try running this command twice. If it worked the first
time, you will get an error on the second try; it will say the folder exists
already at the path /var/www/production!

Go ahead and deploy the production environment and start its processes:

pm2 deploy ecosystem.json production

You can test it out at the following link (replace with your IP):
http://your-droplet-ip:3000
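
Or check it from the command line on your local box (droplet-ip-address is a placeholder for your actual IP; you should see “hello world” printed back):

curl http://droplet-ip-address:3000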

If all is OK, then make sure that PM2 is scheduled to
start up automatically if your server reboots or something happens.

Make sure you run this command as the semaphoreci user on the droplet:

ssh semaphoreci-droplet
pm2 startup ubuntu

It will give you output which you will then need to run as a user with root
access, which you can get by running:

ssh root@droplet-ip-address
sudo su -c "env PATH=$PATH:/usr/local/bin pm2 startup ubuntu -u semaphoreci --hp /home/semaphoreci"

Now save the current processes to automatically restore them if your
server reboots or something happens. To do this, first make sure we have PM2
processes running that we’ll be able to save:

ssh semaphoreci-droplet
pm2 status

If no processes appear, go back to the section with the PM2 deployment commands and run the deployment again.

If your processes appear, then run this command as the semaphoreci user
on the droplet, so that these processes will get restored if something happens:

ssh semaphoreci-droplet
pm2 save

All done! Now try to commit some code and watch SemaphoreCI deploy it for you.

For example, you could make it say “thanks nifty” instead of “hello world”:

vim app.js
app.get('/', function(req, res) {
-  res.send('hello world')
+  res.send('thanks nifty')
});
git add .
git commit -m 'testing out semaphoreci automatically deploy my project'
git push origin master

Now just wait and watch the SemaphoreCI dashboard. It will run a build,
then it will deploy it to your Digital Ocean droplet for you using PM2.

If you want to see the pm2 save do its magic, then just run sudo reboot,
or reboot your droplet from Digital Ocean’s interface. When it powers back on,
SSH into it as semaphoreci, and run pm2 status to see your app is running.

7. PM2 Deployment Commands

This documentation is sourced directly from Keymetrics Blog and
also from the official PM2 deploy documentation.


# update the remote host to the latest version from your repository
pm2 deploy production update

# revert to a previous deployment (here, one release back)
pm2 deploy production revert 1

# run an arbitrary command on the remote host
pm2 deploy production exec "pm2 restart all"

This deploy command option is inspired by TJ’s deploy shell script at:

https://github.com/visionmedia/deploy



Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/wANZLjjzS5c/

Original article

Show HN: gofeed, a robust RSS and Atom Parser for Go


The gofeed library is a robust feed parser that supports parsing both RSS and Atom feeds. The universal gofeed.Parser will parse and convert all feed types into a hybrid gofeed.Feed model. You also have the option of parsing them into their respective atom.Feed and rss.Feed models using the feed specific atom.Parser or rss.Parser.

Supported feed types:
  • RSS 0.90
  • Netscape RSS 0.91
  • Userland RSS 0.91
  • RSS 0.92
  • RSS 0.93
  • RSS 0.94
  • RSS 1.0
  • RSS 2.0
  • Atom 0.3
  • Atom 1.0

It also provides support for parsing several popular extension modules, including Dublin Core and Apple’s iTunes extensions. See the Extensions section for more details.


Overview

Universal Feed Parser

The universal gofeed.Parser works in 3 stages: detection, parsing and translation. It first detects the feed type that it is currently parsing. Then it uses a feed specific parser to parse the feed into its true representation, which will be either a rss.Feed or atom.Feed. These models cover every field possible for their respective feed types. Finally, they are translated into a gofeed.Feed model that is a hybrid of both feed types. Performing the universal feed parsing in these 3 stages allows for more flexibility and keeps the code base more maintainable by separating RSS and Atom parsing into separate packages.

Diagram

The translation step is done by anything that adheres to the gofeed.Translator interface. The DefaultRSSTranslator and DefaultAtomTranslator are used behind the scenes when you use the gofeed.Parser with its default settings. You can see how they translate fields from atom.Feed or rss.Feed to the universal gofeed.Feed struct in the Default Mappings section. However, should you disagree with the way certain fields are translated you can easily supply your own gofeed.Translator and override this behavior. See the Advanced Usage section for an example of how to do this.

Feed Specific Parsers

The gofeed library provides two feed specific parsers: atom.Parser and rss.Parser. If the hybrid gofeed.Feed model that the universal gofeed.Parser produces does not contain a field from the atom.Feed or rss.Feed model that you require, it might be beneficial to use the feed specific parsers. When using the atom.Parser or rss.Parser directly, you can access all of the fields found in the atom.Feed and rss.Feed models. It is also marginally faster because you are able to skip the translation step.

However, for the vast majority of users, the universal gofeed.Parser is the best way to parse feeds. It allows users of the gofeed library not to worry about the differences between RSS and Atom feeds.

Basic Usage

Universal Feed Parser

The most common usage scenario will be to use gofeed.Parser to parse an arbitrary RSS or Atom feed into the hybrid gofeed.Feed model. This hybrid model allows you to treat RSS and Atom feeds the same.

Parse a feed from a URL:
fp := gofeed.NewParser()
feed, _ := fp.ParseURL("http://feeds.twit.tv/twit.xml")
fmt.Println(feed.Title)
Parse a feed from a string:
feedData := `<rss version="2.0">
<channel>
<title>Sample Feed</title>
</channel>
</rss>`
fp := gofeed.NewParser()
feed, _ := fp.ParseString(feedData)
fmt.Println(feed.Title)
Parse a feed from an io.Reader:
file, _ := os.Open("/path/to/a/file.xml")
defer file.Close()
fp := gofeed.NewParser()
feed, _ := fp.Parse(file)
fmt.Println(feed.Title)

Feed Specific Parsers

You can easily use the rss.Parser and atom.Parser directly if you have a usage scenario that requires it:

Parse an RSS feed into a rss.Feed
feedData := `<rss version="2.0">
<channel>
<webMaster>example@site.com (Example Name)</webMaster>
</channel>
</rss>`
fp := rss.Parser{}
rssFeed, _ := fp.Parse(strings.NewReader(feedData))
fmt.Println(rssFeed.WebMaster)
Parse an Atom feed into an atom.Feed
feedData := `<feed xmlns="http://www.w3.org/2005/Atom">
<subtitle>Example Atom</subtitle>
</feed>`
fp := atom.Parser{}
atomFeed, _ := fp.Parse(strings.NewReader(feedData))
fmt.Println(atomFeed.Subtitle)

Advanced Usage

Parse a feed while using a custom translator

The mappings and precedence order that are outlined in the Default Mappings section are provided by the following two structs: DefaultRSSTranslator and DefaultAtomTranslator. If you have fields that you think should have a different precedence, or if you want to make a translator that is aware of an unsupported extension you can do this by specifying your own RSS or Atom translator when using the gofeed.Parser.

Here is a simple example of creating a custom Translator that makes the /rss/channel/itunes:author field have a higher precedence than the /rss/channel/managingEditor field in RSS feeds. We will wrap the existing DefaultRSSTranslator since we only want to change the behavior for a single field.

First we must define a custom translator:

type MyCustomTranslator struct {
    defaultTranslator *DefaultRSSTranslator
}

func NewMyCustomTranslator() *MyCustomTranslator {
  t := &MyCustomTranslator{}

  // We create a DefaultRSSTranslator internally so we can wrap its Translate
  // call since we only want to modify the precedence for a single field.
  t.defaultTranslator = &DefaultRSSTranslator{}
  return t
}

func (ct *MyCustomTranslator) Translate(feed interface{}) (*Feed, error) {
    rss, found := feed.(*rss.Feed)
    if !found {
        return nil, fmt.Errorf("Feed did not match expected type of *rss.Feed")
    }

    // Run the default translation first, then override the Author precedence.
    f, err := ct.defaultTranslator.Translate(rss)
    if err != nil {
        return nil, err
    }

    if rss.ITunesExt != nil && rss.ITunesExt.Author != "" {
        f.Author = rss.ITunesExt.Author
    } else {
        f.Author = rss.ManagingEditor
    }
    return f, nil
}

Next you must configure your gofeed.Parser to utilize the new gofeed.Translator:

feedData := `<rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd">
<channel>
<managingEditor>Ender Wiggin</managingEditor>
<itunes:author>Valentine Wiggin</itunes:author>
</channel>
</rss>`

fp := gofeed.NewParser()
fp.RSSTrans = NewMyCustomTranslator()
feed, _ := fp.ParseString(feedData)
fmt.Println(feed.Author) // Valentine Wiggin

Extensions

Every element which does not belong to the feed’s default namespace is considered an extension by gofeed. These are parsed and stored in a tree-like structure located at Feed.Extensions and Item.Extensions. These fields should allow you to access and read any custom extension elements.

In addition to the generic handling of extensions, gofeed also has built-in support for parsing certain popular extensions into their own structs for convenience. It currently supports the Dublin Core and Apple iTunes extensions, which you can access at Feed.ITunesExt, Feed.DublinCoreExt, Item.ITunesExt and Item.DublinCoreExt.
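
For example, a minimal sketch in the style of the snippets above (it assumes the feed actually carries iTunes tags; the generic tree is assumed to be keyed by namespace prefix and then element name, with each parsed element exposing its name and value):

feed, _ := gofeed.NewParser().ParseURL("http://feeds.twit.tv/twit.xml")

// Convenience struct for the iTunes extension (nil if the feed has none).
if feed.ITunesExt != nil {
    fmt.Println(feed.ITunesExt.Author)
}

// Generic access: namespace prefix -> element name -> parsed extension elements.
if itunes, ok := feed.Extensions["itunes"]; ok {
    for _, e := range itunes["author"] {
        fmt.Println(e.Name, e.Value)
    }
}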

Invalid Feeds

A best-effort attempt is made at parsing broken and invalid XML feeds. Currently, gofeed can successfully parse feeds with the following issues:

  • Unescaped/Naked Markup in feed elements
  • Undeclared namespace prefixes
  • Missing closing tags on certain elements
  • Illegal tags within feed elements without namespace prefixes
  • Missing “required” elements as specified by the respective feed specs.
  • Incorrect date formats

Default Mappings

The DefaultRSSTranslator and the DefaultAtomTranslator map the following rss.Feed and atom.Feed fields to their respective gofeed.Feed fields. They are listed in order of precedence (highest to lowest):

gofeed.Feed fields:

Title
  RSS:  /rss/channel/title, /rdf:RDF/channel/title, /rss/channel/dc:title, /rdf:RDF/channel/dc:title
  Atom: /feed/title

Description
  RSS:  /rss/channel/description, /rdf:RDF/channel/description, /rss/channel/itunes:subtitle
  Atom: /feed/subtitle, /feed/tagline

Link
  RSS:  /rss/channel/link, /rdf:RDF/channel/link
  Atom: /feed/link[@rel="alternate"]/@href, /feed/link[not(@rel)]/@href

FeedLink
  RSS:  /rss/channel/atom:link[@rel="self"]/@href, /rdf:RDF/channel/atom:link[@rel="self"]/@href
  Atom: /feed/link[@rel="self"]/@href

Updated
  RSS:  /rss/channel/lastBuildDate, /rss/channel/dc:date, /rdf:RDF/channel/dc:date
  Atom: /feed/updated, /feed/modified

Published
  RSS:  /rss/channel/pubDate
  Atom: (none)

Author
  RSS:  /rss/channel/managingEditor, /rss/channel/webMaster, /rss/channel/dc:author, /rdf:RDF/channel/dc:author, /rss/channel/dc:creator, /rdf:RDF/channel/dc:creator, /rss/channel/itunes:author
  Atom: /feed/author

Language
  RSS:  /rss/channel/language, /rss/channel/dc:language, /rdf:RDF/channel/dc:language
  Atom: /feed/@xml:lang

Image
  RSS:  /rss/channel/image, /rdf:RDF/image, /rss/channel/itunes:image
  Atom: /feed/logo

Copyright
  RSS:  /rss/channel/copyright, /rss/channel/dc:rights, /rdf:RDF/channel/dc:rights
  Atom: /feed/rights, /feed/copyright

Generator
  RSS:  /rss/channel/generator
  Atom: /feed/generator

Categories
  RSS:  /rss/channel/category, /rss/channel/itunes:category, /rss/channel/itunes:keywords, /rss/channel/dc:subject, /rdf:RDF/channel/dc:subject
  Atom: /feed/category

gofeed.Item fields:

Title
  RSS:  /rss/channel/item/title, /rdf:RDF/item/title, /rdf:RDF/item/dc:title, /rss/channel/item/dc:title
  Atom: /feed/entry/title

Description
  RSS:  /rss/channel/item/description, /rdf:RDF/item/description, /rss/channel/item/dc:description, /rdf:RDF/item/dc:description
  Atom: /feed/entry/summary

Content
  RSS:  (none)
  Atom: /feed/entry/content

Link
  RSS:  /rss/channel/item/link, /rdf:RDF/item/link
  Atom: /feed/entry/link[@rel="alternate"]/@href, /feed/entry/link[not(@rel)]/@href

Updated
  RSS:  /rss/channel/item/dc:date, /rdf:RDF/rdf:item/dc:date
  Atom: /feed/entry/modified, /feed/entry/updated

Published
  RSS:  /rss/channel/item/pubDate
  Atom: /feed/entry/published, /feed/entry/issued

Author
  RSS:  /rss/channel/item/author, /rss/channel/item/dc:author, /rdf:RDF/item/dc:author, /rss/channel/item/dc:creator, /rdf:RDF/item/dc:creator, /rss/channel/item/itunes:author
  Atom: /feed/entry/author

Guid
  RSS:  /rss/channel/item/guid
  Atom: /feed/entry/id

Image
  RSS:  /rss/channel/item/itunes:image, /rss/channel/item/media:image
  Atom: (none)

Categories
  RSS:  /rss/channel/item/category, /rss/channel/item/dc:subject, /rss/channel/item/itunes:keywords, /rdf:RDF/channel/item/dc:subject
  Atom: /feed/entry/category

Enclosures
  RSS:  /rss/channel/item/enclosure
  Atom: /feed/entry/link[@rel="enclosure"]

Dependencies

License

This project is licensed under the MIT License

Credits

  • Mark Pilgrim for his work on the excellent Universal Feed Parser Python library. This library was referenced several times during the development of gofeed, and many of its unit test cases were ported to the gofeed project as well.
  • Dan MacTough for his work on node-feedparser. It provided inspiration for the set of fields that should be covered in the hybrid gofeed.Feed model.
  • Matt Jibson for his date parsing function in the goread project.
  • Jim Teeuwen for his method of representing arbitrary feed extensions in the go-pkg-rss library.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/Wzg6vPeKAjo/gofeed

Original article

Warcraft fans’ fury at Blizzard over server closure

Image: World of Warcraft logo (copyright Blizzard Entertainment). Caption: The original game was released in 2004.

A petition to allow gamers to run their own servers for the original World of Warcraft (WoW) game has attracted almost 100,000 signatures online.

Games studio Blizzard Entertainment no longer operates servers for the original WoW, which was first released in 2004, so some fans run their own.

On 10 April, a popular fan server known as Nostalrius, with 150,000 active members, was closed after the threat of legal action by Blizzard Entertainment.

Blizzard has yet to respond to the BBC.

World of Warcraft is an online multi-player game in which players explore a vast landscape, complete quests and interact with other gamers.

Image (copyright Blizzard Entertainment). Caption: Gamers can play as mythical characters in World of Warcraft.

The original game from 2004 has since been updated with new instalments that some players say have materially changed the experience of the game, so some fans have set up their own servers to play the original, “vanilla” WoW.

Blizzard has previously said it had no plans to reopen access to the “classic” game.

“We realise that some of you feel that World of Warcraft was more fun in the past than it is today, and we also know that some of you would like nothing more than to go back and play the game as it was back then,” wrote a Blizzard community manager in 2011.

“The developers however prefer to see the game continuously evolve and progress, and as such we have no plans to open classic realms or limited expansion content realms.”

The decision to close fan-run server Nostalrius, which had attracted 800,000 players during its year online, has prompted anger from parts of the online gaming community.

Many who commented on the closure acknowledged the fact that running servers such as Nostalrius was technically illegal but said Blizzard should support non-profit fan-driven projects to keep the game going.

Image (copyright YouTube/JonTronShow). Caption: YouTuber Jon Jafari, known as JonTron, has criticised Blizzard’s actions.

“World of Warcraft meant a lot, to a lot of people,” said YouTube gamer Jon Jafari in a widely-shared video.

“It might seem silly because it’s just a game… but this game was a big part of my life and a lot of people’s lives. All these people want to do is go back and play it.

“If the server is making a profit, I can see them taking down something like that – but a lot of these are just for the love of the old game.”

Commenting on its closure, the team behind Nostalrius posted: “We never saw our community as a threat for Blizzard. It sounds more like a transverse place where players can continue to enjoy old World of Warcraft’s games no longer available.”


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/gQ08KIKcfrs/technology-36044000

Original article

Make a Raspberry Pi-Powered Remote Camera that Monitors Weather, Temperature, and More

We know you can turn a Raspberry Pi into a cheap surveillance system and that displaying images on a small screen is no problem either. Adafruit combined all this together with some sensors to create a system where you can monitor the camera and data on one Raspberry Pi from another Raspberry Pi.

Read more…



Original URL: http://feeds.gawker.com/~r/lifehacker/full/~3/BCCN94Y0lio/make-a-raspberry-pi-powered-remote-camera-that-monitors-1771019090

Original article

Visual Studio Code 1.0

April 14, 2016 by The VS Code Team, @code

header graphic

Today we’re very proud to release version 1.0 of Visual Studio Code. Since our initial launch one year ago, 2 million developers have installed VS Code. Today, we’re excited to report that more than 500,000 developers actively use VS Code each month.

What started as an experiment to build a production quality editor using modern web technologies has blossomed into a new kind of cross-platform development tool, one that focuses on core developer productivity by centering the product on rich code editing and debugging experiences. Visual Studio Code brings the industry-leading experiences of Visual Studio to a streamlined development workflow that can be a core part of the tool set of every developer, building any kind of application.

Getting to “1.0” over the last few months has been about more than features. We have worked with the community to further improve stability, fixing hundreds of bugs. And we’ve pushed hard on getting the best performance we can out of the editing experience.

VS Code was initially built for developers creating web apps using JavaScript and TypeScript. But in less than 6 months since we made the product extensible, the community has built over 1000 extensions that now provide support for almost any language or runtime in VS Code. Today, a broad range of developers from individuals and startups to Fortune 500 companies, including audiences completely new to Microsoft’s tools, are all more productive with a tool that fits comfortably into their current tool chain and workflow, and supports the technologies they use, from Go and Python to React Native and C++. With this great ecosystem in place, we’re now confident in declaring our API as stable, and guaranteeing compatibility going forward.

And we have strived to make VS Code 1.0 a great editor for every developer. VS Code is now fully localizable, and ships in 9 different languages, including French, German, Japanese, and Chinese. And, we have worked to make VS Code the most accessible of modern editors, with full keyboard navigation and support for screen reading and accessible navigation for visually impaired developers.

We could not have reached this important milestone without the help of all our contributors. Since committing to doing development in the open less than four months ago, we’ve consumed over 300 pull requests. Whether you created a PR, filed an issue, gave a thumbs up, tweeted, or simply used VS Code in your day-to-day, you’re a part of the team. Thank you!

installs graphic

The History of VS Code

Can we build a code editor fast enough that it doesn’t feel like you’re typing in a browser?

It was only a few short years ago that we kicked off what we then called the “Monaco” team. At the time, browsers were just beginning to introduce HTML5, and the race to build faster JavaScript runtimes was in full swing.

So we set out to answer the question, “Can we build a browser-based code editor that feels native?” Not just an experience for text editing, but source code editing. Suggestion lists, error and warning squiggles, Go to Definition, and more.

Today, we believe the answer was a resounding “Yes”. The editor we built can now be found on some of the most demanding global websites – OneDrive, Visual Studio Team Services, Bing Code Search, Azure – sites used by millions of people every day. It even ships to 100s of millions of Windows desktops with the F12 tools in Internet Explorer. And that same editor is at the heart of VS Code.

Of course, to build the editor we needed a development tool. Developers know that one of the best ways to evolve your code quickly is to “dogfood” it: use it the same way your customers will. It therefore made sense that we would create a local Node.js based service to serve up files and the editor in a lightweight development tool. This tool eventually made its way to the cloud as a part of Azure Websites.

But we strived to go further. We wanted to build a native development tool that developers could install and use anywhere, for any source code. And, from our experience, we believed that it was important to not just have an editor, but one that could help developers accomplish their most common tasks: navigating code, debugging, and working with Git. And, so, Visual Studio Code was born.

Being built on web technologies made it easy to host the tool in a native cross-platform shell. We decided early on to use, and contribute to, a number of open source technologies – including GitHub’s great Electron shell, which combines web and native UI with a Node.js API. In just a few short months, we were able to release the first preview of Visual Studio Code at //build/ 2015.

The initial response to a code editor running on OS X, Windows, and Linux was overwhelmingly positive, even with two fundamental gaps in the offering – extensibility and open development.

Keeping our principle of using VS Code the way our customers do, we decided that the best way to deliver a rich and stable API was to build VS Code using the same API we would expose to extension developers. In fact, the core language services for JavaScript and TypeScript are actually extensions that just happen to be bundled with the distribution. Today, we use VS Code to build and debug VS Code, its extensions, and Node-based services. The same rich TypeScript editing, navigation, and debugging experiences we enjoy when building VS Code are available to everyone developing an extension for VS Code. Six months after our initial preview release, we declared VS Code to be Beta quality at Connect(); 2015, with a full extensibility model, and support in the new Visual Studio Marketplace.

And at the same time, we open-sourced the VS Code repository and many of our own extensions, and moved to developing Visual Studio Code in the open.

timeline graphic

Being “1.0”

Today, Visual Studio Code delivers on many of the aspects that we imagined during incubation. VS Code has great editing and navigation experiences, streamlined debugging, and built-in Git support.

Developers today love VS Code for its powerful set of built-in features, intuitive editing and debugging experiences, performance and responsiveness, and great language and platform support. The VS Code download is under 40MB including support for 9 additional languages (Simplified Chinese, Traditional Chinese, French, German, Italian, Japanese, Korean, Russian and Spanish) and it installs in seconds. With the help of developers like @zersiax, VS Code is now accessible to visually impaired developers on Windows and soon on OS X and Linux.

More than anything else, what drives the success of Visual Studio Code is the feedback and interactions from the community. From the beginning, we’ve striven to be as open as possible in our roadmap and vision for VS Code, and in November, we took that a step further by open-sourcing VS Code and adding the ability for anyone to make it better through submitting issues and feedback, making pull requests, or creating extensions.

The community responded, with huge growth in the number of extensions and the ways they’re using VS Code. Today we have extensions for Node.js, Go, C++, PHP, and Python, as well as many more languages, linters, and tools. And VS Code is being used not only by teams of developers, but also in companies like Progressive Insurance, where it is used not just by developers but by analysts and data scientists as well.

Seeing the support and help the community has already poured into the product, the potential for VS Code has never been greater.

Looking Ahead

While we’re excited about releasing 1.0 today, we are even more excited about the future.

Of course, we will continue to focus on the fundamentals. Performance, stability, accessibility, and compatibility are of utmost importance to our users, and they are to us as well. We will continue to invest in improving developer productivity, guided by the great user feedback on UserVoice. We will continue to work with partners and the community to expand support for new languages, platforms, and experiences. And we will continue to work with you, our community, to build a great tool for you, and for every developer.

If you haven’t tried out Visual Studio Code yet, please download it and let us know what you think!

Thanks Again!

The VS Code Team, @code


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/XVuZwyCM_2c/vscode-1.0

Original article
