Ethereum Blockchain Project Launches First Production Release

Next-generation blockchain platform Ethereum has released ‘Homestead’, the first production release of its software, which took effect at block 1,150,000 today.

The introduction follows Frontier, Ethereum’s inaugural release, which launched for developers in July 2015. Further, it comes on the heels of escalating interest in the open, public platform, which is attracting the attention of major financial institutions, in part due to its support for self-executing smart contracts.

For example, a private version of the Ethereum network served as the platform for the first major test conducted by blockchain consortium startup R3CEV in January, with the trial uniting 11 major banks in a high-profile proof-of-concept.

Andrew Keys, co-founder of decentralized application development firm ConsenSys Enterprise, explained that whereas Frontier featured only command-line interfaces, Homestead will expand what users are able to build on the platform and the ease with which they’ll be able to create proof-of-concepts and minimum viable products.

Keys told CoinDesk:

“Homestead’s arrival will begin to demonstrate the next generation of blockchain technology, whereby anything we can dream of, can be accomplished in a decentralized manner using Ethereum.”

Elsewhere, early members of the platform sought to stress the recent benchmarks that they believe point to the success of the project at a time when financial incumbents are increasingly focused on alternatives to the bitcoin network.

“We’ve seen Microsoft and IBM doing projects on Ethereum. There’s a lot of coders. It’s exciting to see something you were in on in the early stages growing and bearing fruit,” Anthony Di Iorio, one of Ethereum’s founders and chief digital officer at the Toronto Stock Exchange, said in an interview.

In a Google Hangout hosted by Ethereum community members, there was celebration as the software was implemented at roughly 18:50 UTC.

“That was the most anti-climactic event I’ve ever hosted,” Alex Van de Sande, lead designer at the Ethereum Foundation, commented in the channel.

Growth of Ethereum

Announced in January 2014 by inventor and project leader Vitalik Buterin, Ethereum raised some $18m later that year in a crowdsale of the tokens that help power its platform.

The success of the crowdfunding effort, while controversial, earned the platform coverage in The Wall Street Journal and other major financial publications.

Since then, those close to the project’s development have come to believe its network health is comparable to that of the long-running bitcoin network that inspired it.

“You need to look at the growth of the Ethereum network via the growth of its nodes, sitting at 5,100 versus bitcoin’s roughly 6,000. That’s quite significant and shows the stability and global nature of the Ethereum network,” William Mougayar, a special advisor to the Ethereum Foundation, the non-profit that supports its development, told CoinDesk.

Mougayar indicated that Ethereum is now processing around 25,000 transactions a day, or about 10% of the number currently supported by the bitcoin network.

Since the beginning of February, Ethereum has also seen strong growth in the value of its token, ether, which users need to convert into ‘gas’ in order to run its applications. Ether’s valuation has increased to more than $1bn, nearly five-fold annual growth, though how much of this total is fueled by use, as opposed to speculation, is unclear.

While Ethereum has seen such a significant rise, there are not many easy ways for the average user to buy ETH. The most popular exchanges are Poloniex, Kraken and Shapeshift, and most recently, Bitfinex.

Still, those close to the project believe this will change as interest grows.

“We are seeing major interest from cryptocurrency exchanges. We can expect to see many US and foreign exchanges adding ether to their trading pairs,” Keys suggested.

Notably, Bitfinex said the high trading volumes of ether at other exchanges had become “hard to ignore”, citing demand as a key reason for its support.

Hard fork upgrade

The Homestead launch is also notable for the development effort behind the upgrade, as Ethereum underwent a hard fork, a fundamental change to the protocol that makes older versions incompatible.

The decision to implement the upgrade through a hard fork comes at a time when the bitcoin community has been ensnared in a months-long debate over how its network could be upgraded in a similar fashion.

As such, Ethereum stakeholders sought to highlight this choice in contrast to bitcoin, framing the network as perhaps more adaptable to development needs.

“Interestingly, in comparison to the governance problems in the bitcoin community, the Ethereum community has supported the changes and achieved consensus on the upgrade to Homestead,” Keys noted.

In a FAQ published on the platform’s growing Reddit community, Taylor Gerring, director of technology at the Ethereum Project, outlined potential issues users may experience in the transition process.

He noted that upgrading is not necessary and that a user’s tokens would be safe so long as they own their private keys, but that users would be unable to reconnect to the network until they upgrade their software.

In the bitcoin community, there is still continuous debate on how to achieve a hard fork, with two competing development teams pursuing alternative strategies.

The Bitcoin Classic team, for instance, has been advocating for 75% of bitcoin’s transaction validators to approve the measure, a decision that would be followed by a 28-day grace period. On the other side, the Bitcoin Core team, the network’s longest-serving team of developers, has argued that hard forks should be done very slowly, proposing that code be submitted this year that wouldn’t go into effect until July 2017.

In an interview with CoinDesk, Gerring went on to explain how the Ethereum developers are working differently than bitcoin developers, stating that the platform’s community is shouldering the burden of supporting user needs.

“[Our approach] stands in stark contrast to what we’re seeing in the bitcoin community where an intermediate block scaling solution is being shuttered for something the community isn’t necessarily clamouring for,” he said.

Technology improvements

These upgrades, Ethereum’s developers contend, will now allow the network to be competitive among private and public blockchain software. On a more specific level, however, Homestead will see Ethereum introduce three Ethereum Improvement Proposals (EIPs) as part of the upgrade: EIP-2, EIP-7, and EIP-8.

Stephan Tual, founder of Slock.it and the former CCO of Ethereum, explained that with these technical improvements, “the training wheels are off” for developers.

One of the big changes is what Tual referred to as the removal of canary contracts. In essence, if a fork had occurred due to a consensus bug, members of the Ethereum team could have stepped in with the Frontier software.

“Activating these contracts would have triggered a ‘stop’ call for mining in the official clients in order to regain control of the situation and hard fork properly,” Tual said to CoinDesk. “With these contracts gone, the Ethereum network is now fully launched and completely autonomous.”

The other major change he discussed was the introduction of new opcodes available through Solidity, the language used to compile for the Ethereum Virtual Machine, the part of the protocol that handles the internal state of the network.

“Being able to hard fork at anytime, and more importantly, getting the support from the miners (who will vote by updating the software or not) is a big, big differentiator from other blockchains, paving the way for major features upgrades in the future,” Tual said.

Along with the upgrade to the protocol, the developers introduced a wallet product called Mist, which will allow users to write and deploy smart contracts in addition to serving as a place to hold ether. Mougayar explained that, while this is taken for granted in the bitcoin world, it is “a big deal” for Ethereum given it has not had such an offering.

With the software forked and Homestead the new predominant protocol, there is an expectation that more decentralized applications will be introduced.

Keys explained that some of the applications he’s most looking forward to now that Homestead is out are a prediction market, a decentralized music platform, a tokenized gold custodian and asset tracking system, and an easy way for users to convert bitcoin to Ether.

Supporters like Keys believe the combination of these upgrades will see more large institutions accepting Ethereum alongside bitcoin as a blockchain technology that will be integral to future financial products.

Keys concluded:

“Fortune 500s are learning Ethereum’s Turing-complete virtual machine and smart contracting language are necessary to create enterprise applications.”

Images via Ethereum


Elasticsearch as a Framework

Or how Crate uses Elasticsearch.

Most people who know Elasticsearch think of it as a search engine, and they’re
probably correct. But we at Crate think about it a bit
differently and use it as a framework.

In this post I’ll try to explain how that works.

A short Elasticsearch intro

Elasticsearch is a clustered
search engine. One can put documents into it and then run queries to find these
documents and retrieve them. Being clustered means that it can run on one or
more machines and the documents that are stored in Elasticsearch will be
distributed among those machines.

There are two ways to communicate with it: either via HTTP, or via a Java
client which uses something called the transport protocol.

This transport protocol is also used for the communication between the
machines within a cluster.

The indexing and search capabilities are powered by
Lucene. A “high performance, full-featured
Information Retrieval library”.

In a nutshell:

  • Elasticsearch does clustering (including all the tricky stuff that sane
    people don’t want to worry about: Discovery, master election, replication,
    dealing with net splits and race conditions)

  • Lucene does search and indexing (which Elasticsearch distributes among
    multiple machines)

A short Crate intro

Crate is a distributed SQL database that leverages Elasticsearch and Lucene.
In its infancy it parsed SQL statements and translated them into
Elasticsearch queries. It was basically a layer on top of Elasticsearch.

If you wrote something like

select * from users

It would be translated into (roughly)

POST /users/default/_search -d '{"query": {"match_all": {}}}'

Those were the early days. Since then it has evolved a lot. It got its own
execution engine with its own DSL. Internally the same statement from before
is now turned into something like this:

"fetchPhase": {
    "executionNodes": ["TLMh2zg0SRiC79mYxZj5uw"],
    "phaseType": "FETCH",
    "fetchRefs": ["doc.users._doc['name']"],
    "id": 1
},
"planType": "QueryThenFetch",
"subPlan": {
    "planType": "CollectAndMerge",
    "collectPhase": {
        "projections": [...],
        "phaseType": "COLLECT",
        "toCollect": ["doc.users._docid"],
        "id": 0,
        "distribution": {"distributedByColumn": 0, "type": "BROADCAST"},
        "routing": {
            "TLMh2zg0SRiC79mYxZj5uw": {
                "users": [ 0, 1, 2, 3, 4 ]
            }
        }
    }
},
"localMerge": {
    "projections": [...],
    "executionNodes": [],
    "phaseType": "MERGE",
    "id": 2
}

(Don’t worry if that doesn’t make sense to you. You may worry if it does)

This was done to implement features in Crate that do not exist in Elasticsearch.

But this whole SQL execution engine also uses Elasticsearch in some way. I’ll
try to explain how.

Elasticsearch as a web framework

Since many developers are familiar with web frameworks I’ll try to go with
that. It actually fits quite well with how parts of Elasticsearch work.

In a web framework one can usually register a route to a handler. Like, if a
browser points to /foo/bar, then FooBarHandler.get should be called.

The get implementation of the FooBarHandler class receives some sort of
request object, then some business logic is executed and finally a response
is generated to be returned by the get function.

Meanwhile the web framework does all the magic to produce the request object,
call the appropriate function and deliver the response object in a sensible
form back over the wire to the client.
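
To make the analogy concrete, here is a minimal sketch of that pattern using
Go’s standard library (the route and handler are invented for illustration;
Elasticsearch itself is written in Java):

package main

import (
    "fmt"
    "net/http"
)

// FooBarHandler receives the request, runs its business logic and writes
// the response; the framework takes care of everything around it.
func FooBarHandler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "handled %s", r.URL.Path)
}

func main() {
    // Register the route to the handler, then serve.
    http.HandleFunc("/foo/bar", FooBarHandler)
    http.ListenAndServe(":8080", nil)
}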

Elasticsearch includes its own HTTP server and is in a sense also a web
framework. Or rather: it does what web frameworks do.

Back to the example from earlier:

POST /users/default/_search -d '{"query": {"match_all": {}}}'

An HTTP request like that causes a handler registered to
/{index}/{type}/_search to be fired (RestSearchAction to be specific, in
case you want to take a look at the source).

In Elasticsearch HTTP requests are always translated into their transport
request equivalents. And the purpose of this handler is to do that.

In this case it becomes a SearchRequest. The same kind of SearchRequest a
user of the Java API would create to execute the same query.

This SearchRequest is then sent to a TransportRequestHandler that is
registered under a name. (The name being similar to a route like /foo/bar in
the example before)

Those TransportRequestHandlers are mostly part of a class called
TransportAction, which contains the actual logic for processing a
request.

To sum up:

  • Elasticsearch has two network services: HTTP (9200) and Transport (9300).
  • Internally routes/URLs are mapped to handlers just like in web frameworks
    (E.g. /url/x/y maps to handleRequest in class XY).
  • HTTP requests are converted to transport requests.
  • Transport requests are sent to RequestHandler classes registered under a
    name.
  • TransportAction does the actual work. (Like making more requests to
    other nodes to gather data)

Transport protocol

The transport protocol is the binary protocol used to send objects between
nodes in an Elasticsearch (or Crate) cluster.

Most web APIs nowadays accept requests containing JSON payloads. A client can
send any JSON document as long as it is valid JSON. It doesn’t matter what keys
or values it contains. The server will be able to read the whole request and
parse the JSON. Whether it can then do anything useful with that payload is
another matter.

With the transport protocol things are a bit different. Requests and responses
are kind of static. The fields and their types have to be pre-defined. A
TransportAction can only ever receive one type of request.

This has the advantage that it is faster because the requests and responses
don’t have to include type and length information or field names.

(If you want to see some real code, take a look at the readFrom and writeTo
implementations of the
SQLRequest. Notice how both methods match in what they do)
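
The real implementations are in Java; as a rough sketch of the same idea in
Go (the Request type and both of its fields are invented for this example),
note how the write and read sides mirror each other field for field:

package transport

import (
    "encoding/binary"
    "io"
)

// Request is a made-up transport message. Fields and their order are fixed
// up front, so no field names or per-field type tags go over the wire --
// only a length prefix for the variable-size string.
type Request struct {
    ID   uint64
    Stmt string
}

// WriteTo serializes the fields in a fixed order.
func (r *Request) WriteTo(w io.Writer) error {
    if err := binary.Write(w, binary.BigEndian, r.ID); err != nil {
        return err
    }
    if err := binary.Write(w, binary.BigEndian, uint32(len(r.Stmt))); err != nil {
        return err
    }
    _, err := w.Write([]byte(r.Stmt))
    return err
}

// ReadFrom mirrors WriteTo exactly; both sides must agree on the layout.
func (r *Request) ReadFrom(rd io.Reader) error {
    if err := binary.Read(rd, binary.BigEndian, &r.ID); err != nil {
        return err
    }
    var n uint32
    if err := binary.Read(rd, binary.BigEndian, &n); err != nil {
        return err
    }
    buf := make([]byte, n)
    if _, err := io.ReadFull(rd, buf); err != nil {
        return err
    }
    r.Stmt = string(buf)
    return nil
}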

This transport infrastructure is one huge part of Elasticsearch that Crate uses
heavily.

But there is more. If it were just for the communication we could’ve rolled
with Netty and something like Google Protobuf. (The transport service is
based on Netty.)

A cluster: Routing, shard allocation, replication and recovery

Elasticsearch provides us with a cluster. And this isn’t just about discovering
other machines and then connecting them with each other. This includes much
more.

Shard allocation and Routing

In Elasticsearch documents are stored within an index, or rather within a shard
that is part of an index.

One Elasticsearch index consists of one or more shards. How many is defined
when an index is created.
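
For example, creating an index with five shards and one replica per shard
looks something like this (index name and numbers are illustrative):

PUT /users -d '{"settings": {"number_of_shards": 5, "number_of_replicas": 1}}'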

The terminology can be a bit confusing as there are two kinds of indices. It
might refer to an index that has multiple shards, or it might refer to a Lucene
index, a Lucene index being a shard. Yes, that’s right: an index consists of
(Lucene) indices.

A shard is the smallest unit that can be distributed among nodes. It cannot be
further split for distribution. If there are two nodes in a cluster and an
index has only 1 shard then one node will stay empty.

If you’re not confused enough already let’s make it a bit more complicated:

In Crate the terminology is table and shards. A shard is the same thing as in
Elasticsearch, but a table can be 1 or more Elasticsearch indices:

table > (ES) index > shard (lucene index)

Why bring this up? Because Crate uses Elasticsearch to do the shard allocation.
Elasticsearch services decide which shard should reside on which node.

Crate uses the information provided by Elasticsearch in order to make requests
to the correct nodes.

Remember the Crate execution DSL from before? Especially this part:

"routing": {
    "TLMh2zg0SRiC79mYxZj5uw": {
        "users": [ 0, 1, 2, 3, 4 ]
    }
}

It tells us that shards 0-4 of the users table are on a node with the cryptic
id TLMh2zg0SRiC79mYxZj5uw.

Replication and Recovery

Elasticsearch has replication and recovery built in, and so does Crate.

Replication means that if a document is inserted into a table/index that has a
replica configured, the document will be put into one shard and a copy will be
put into a replica of that shard.

This replica is by default on another machine and cannot be allocated on the
same machine as the primary shard. If one machine goes poof the data is still
available.

Crate hooks into the replication/recovery mechanism to also support replication
& recovery for its blob tables. (A special form of storage for binary objects
that isn’t based on Elasticsearch or Lucene.)



Show HN: Localkube – zero to Kubernetes 1.2 in one command


spread is a command line tool that builds and deploys a Docker project to a Kubernetes cluster in one command. The project’s goals are to:

  • Enable rapid iteration with Kubernetes
  • Be the fastest, simplest way to deploy Docker to production
  • Work well for a single developer or an entire team (no more broken bash scripts!)

See how we deployed Mattermost (and you can too!):

Spread is under open, active development. New features will be added regularly over the next few months – explore our roadmap to see what will be built next and send us pull requests for any features you’d like to see added.

See our philosophy for more on our mission and values.

Requirements

Installation

OS X

$ brew tap redspread/spread
$ brew install spread

Linux/Windows

Make sure Go and Git are installed.

go get rsprd.com/spread/cmd/spread

Quickstart

This assumes you have kubectl configured to a running Kubernetes cluster, whether local or remote.

  1. Install Spread
  2. Clone Mattermost, the open source Slack alternative: $ git clone http://github.com/redspread/kube-mattermost
  3. Deploy Mattermost to Kubernetes: $ spread build . (local cluster) or $ spread deploy . (remote cluster)
  4. Copy the IP and put it in your browser to see your self-hosted app!

For a more detailed walkthrough, see the full guide.

Localkube

Spread makes it easy to set up and iterate with localkube, a local Kubernetes cluster streamlined for rapid development.

Requirements:

  • Make sure Docker is set up correctly, including starting docker-machine to bring up a VM. [1]

Getting started:

  • Run spread cluster start to start localkube
  • Sanity check: kubectl cluster-info [2]

Suggested workflow:

  • docker build the image that you want to work with [4]
  • Create Kubernetes objects that use the image build above
  • Run spread build . to deploy to cluster [3]
  • Iterate on your application, updating image and objects running spread build . each time you want to deploy changes
  • To preview changes, grab the IP of your docker daemon and use kubectl describe services/'SERVICE-NAME' for the NodePort, then put the IP:NodePort in your browser
  • When finished, run spread cluster stop to stop localkube

[1] For now, we recommend everyone use a VM when working with localkube
[2] There will be a delay in returning info the first time you start localkube, as the Weave networking container needs to download. This pause will be fixed in future releases.
[3] spread will soon integrate building (#59)
[4] Since localkube shares a Docker daemon with your host, there is no need to push images 🙂

See more for our suggestions when developing code with localkube.

What’s been done so far

  • spread deploy [-s] PATH [kubectl context]: Deploys a Docker project to a Kubernetes cluster. It completes the following order of operations:
    • Reads context of directory and builds Kubernetes deployment hierarchy.
    • Updates all Kubernetes objects on a Kubernetes cluster.
    • Returns a public IP address, if type LoadBalancer is specified.
  • Established an implicit hierarchy of Kubernetes objects
  • Multi-container deployment
  • Support for Linux and Windows
  • localkube: easy-to-setup local Kubernetes cluster for rapid development

What’s being worked on now

  • Build functionality for spread deploy so it also builds any images indicated to be built and pushes those images to the indicated Docker registry.
  • spread deploy -p: Pushes all images to registry, even those not built by spread deploy.
  • Inner-app linking
  • spread logs: Returns logs for any deployment, automatically retrying until logs are accessible.
  • spread build: Builds Docker context and pushes to a local Kubernetes cluster.
  • spread rewind: Quickly rollback to a previous deployment.

See more of our roadmap here!

Future Goals

  • Peer-to-peer syncing between local and remote Kubernetes clusters
  • Develop workflow for application and deployment versioning
  • Introduce parameterization for container configuration

FAQ

How are clusters selected? Remote clusters are selected from the current kubectl context. Later, we will add functionality to explicitly state kubectl arguments.

How should I set up my directory? Spread requires a specific project directory structure, as it builds from a hierarchy of entities:

  • Dockerfile
  • *.ctr – optional container file, there can be any number
  • pod.yaml – pod file, there can be only one per directory
  • rc.yaml – replication controller file, there can be only one per directory
  • /.k2e – holds arbitrary Kubernetes objects, such as services and secrets

What is the *.ctr file? The .ctr file is the container struct usually found in the pod.yaml or rc.yaml. Containers can still be placed in pods or replication controllers, but we’re encouraging separate container files because it enables users to eventually reuse containers across an application.

Can I deploy a project with just a Dockerfile and *.ctr? Yes. Spread implicitly infers the rest of the app hierarchy.

Contributing

We’d love to see your contributions – please see the CONTRIBUTING file for guidelines on how to contribute.

Reporting bugs

If you haven’t already, it’s worth going through Elika Etemad’s guide for good bug reporting. In one sentence, good bug reports should be both reproducible and specific.

Contact

Founders: founders@redspread.com
Slack: slackin.redspread.com
Planning: Roadmap
Bugs: Issues

License

Spread is under the Apache 2.0 license. See the LICENSE file for details.



Show HN: Lion, a fast HTTP router for building modern scalable modular REST APIs

Lion is a fast HTTP router for Go with support for middlewares for building modern scalable modular REST APIs.

Features

  • Context-Aware: Lion uses the de-facto standard net/context package for storing route params and sharing variables between middlewares and HTTP handlers. It is expected to be integrated into the standard library in Go 1.7 in 2016.
  • Modular: You can define your own modules to easily build a scalable architecture
  • REST friendly: You can define modules to group HTTP resources together.
  • Zero allocations: Lion generates zero garbage.

Install/Update

$ go get -u github.com/celrenheit/lion

Hello World

package main

import (
    "fmt"
    "net/http"

    "github.com/celrenheit/lion"
    "golang.org/x/net/context"
)

func Home(c context.Context, w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Home")
}

func Hello(c context.Context, w http.ResponseWriter, r *http.Request) {
    name := lion.Param(c, "name")
    fmt.Fprintf(w, "Hello "+name)
}

func main() {
    l := lion.Classic()
    l.GetFunc("/", Home)
    l.GetFunc("/hello/:name", Hello)
    l.Run()
}

Try it yourself by running the following command from the current directory:

$ go run examples/hello/hello.go

Getting started with modules and resources

We are going to build a sample products listing REST api (without database handling to keep it simple):

func main() {
    l := lion.Classic()
    api := l.Group("/api")
    api.Module(Products{})
    l.Run()
}

// Products module is accessible at url: /api/products
// It handles getting a list of products or creating a new product
type Products struct{}

func (p Products) Base() string {
    return "/products"
}

func (p Products) Get(c context.Context, w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Fetching all products")
}

func (p Products) Post(c context.Context, w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Creating a new product")
}

func (p Products) Routes(r *lion.Router) {
    // Defining a resource for getting, editing and deleting a single product
    r.Resource("/:id", OneProduct{})
}

// OneProduct resource is accessible at url: /api/products/:id
// It handles getting, editing and deleting a single product
type OneProduct struct{}

func (p OneProduct) Get(c context.Context, w http.ResponseWriter, r *http.Request) {
    id := lion.Param(c, "id")
    fmt.Fprintf(w, "Getting product: %s", id)
}

func (p OneProduct) Put(c context.Context, w http.ResponseWriter, r *http.Request) {
    id := lion.Param(c, "id")
    fmt.Fprintf(w, "Updating product: %s", id)
}

func (p OneProduct) Delete(c context.Context, w http.ResponseWriter, r *http.Request) {
    id := lion.Param(c, "id")
    fmt.Fprintf(w, "Deleting product: %s", id)
}

Try it yourself. Run:

$ go run examples/modular-hello/modular-hello.go

Open your web browser to http://localhost:3000/api/products or http://localhost:3000/api/products/123. You should see “Fetching all products” or “Getting product: 123”.

Handlers

Handlers should implement the Handler interface:

type Handler interface {
    ServeHTTPC(context.Context, http.ResponseWriter, *http.Request)
}

Using Handlers

l.Get("/get", get)
l.Post("/post", post)
l.Put("/put", put)
l.Delete("/delete", delete)

Using HandlerFuncs

HandlerFuncs should have this function signature:

func handlerFunc(c context.Context, w http.ResponseWriter, r *http.Request)  {
  fmt.Fprintf(w, "Hi!")
}

l.GetFunc("/get", handlerFunc)
l.PostFunc("/post", handlerFunc)
l.PutFunc("/put", handlerFunc)
l.DeleteFunc("/delete", handlerFunc)

Using native http.Handler

type nativehandler struct {}

func (_ nativehandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {

}

l.GetH("/somepath", nativehandler{})
l.PostH("/somepath", nativehandler{})
l.PutH("/somepath", nativehandler{})
l.DeleteH("/somepath", nativehandler{})

Using native http.Handler using lion.Wrap()

Note: when using a native http.Handler you cannot access URL params.

func main() {
    l := lion.New()
    l.Get("/somepath", lion.Wrap(nativehandler{}))
}

Using native http.Handler using lion.WrapFunc()

func getHandlerFunc(w http.ResponseWriter, r *http.Request) {

}

func main() {
    l := lion.New()
    l.Get("/somepath", lion.WrapFunc(getHandlerFunc))
}

Middlewares

Middlewares should implement the Middleware interface:

type Middleware interface {
    ServeNext(Handler) Handler
}

The ServeNext function accepts a Handler and returns a Handler.

You can also use MiddlewareFuncs. For example:

func middlewareFunc(next Handler) Handler  {
    return next
}

You can also use Negroni middlewares by registering them using:

l := lion.New()
l.UseNegroni(negroni.NewRecovery())
l.Run()

Resources

You can define a resource to represent a REST, CRUD api resource.
You define global middlewares using the Uses() method. For defining custom middlewares for each http method, you have to create a function whose name is the http method suffixed by “Middlewares”. For example, if you want to define middlewares for the Get method you will have to create a method called GetMiddlewares().

A resource is defined by the following methods. Everything is optional:

// Global middlewares for the resource (Optional)
Uses() Middlewares

// Middlewares for the http methods (Optional)
GetMiddlewares() Middlewares
PostMiddlewares() Middlewares
PutMiddlewares() Middlewares
DeleteMiddlewares() Middlewares


// HandlerFuncs for each HTTP Methods (Optional)
Get(c context.Context, w http.ResponseWriter, r *http.Request)
Post(c context.Context, w http.ResponseWriter, r *http.Request)
Put(c context.Context, w http.ResponseWriter, r *http.Request)
Delete(c context.Context, w http.ResponseWriter, r *http.Request)

Example:

package main

type todolist struct{}

func (t todolist) Uses() lion.Middlewares {
    return lion.Middlewares{lion.NewLogger()}
}

func (t todolist) Get(c context.Context, w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "getting todos")
}

func main() {
    l := lion.New()
    l.Resource("/todos", todolist{})
    l.Run()
}

Modules

Modules are a way to modularize an api which can then define submodules, subresources and custom routes.
A module is defined by the following methods:

// Required: Base url pattern of the module
Base() string

// Routes accepts a Router instance. This method is used to define the routes of this module.
// Each route defined is relative to the Base() url pattern
Routes(*Router)

// Optional: Requires named middlewares. Refer to Named Middlewares section
Requires() []string
package main

type api struct{}

// Required: Base url
func (t api) Base() string { return "/api" }

// Required: Here you can declare sub-resources, submodules and custom routes.
func (t api) Routes(r *lion.Router) {
    r.Module(v1{})
    r.Get("/custom", t.CustomRoute)
}

// Optional: Attach Get method to this Module.
// ====> A Module is also a Resource.
func (t api) Get(c context.Context, w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "This is also a resource accessible at http://localhost:3000/api")
}

// Optional: Defining custom routes
func (t api) CustomRoute(c context.Context, w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "This is a custom route for this module http://localhost:3000/api/")
}

func main() {
    l := lion.New()
    // Registering the module
    l.Module(api{})
    l.Run()
}

Examples

Using GET, POST, PUT, DELETE http methods

l := lion.Classic()

// Using Handlers
l.Get("/get", get)
l.Post("/post", post)
l.Put("/put", put)
l.Delete("/delete", delete)

// Using functions
l.GetFunc("/get", getFunc)
l.PostFunc("/post", postFunc)
l.PutFunc("/put", putFunc)
l.DeleteFunc("/delete", deleteFunc)

l.Run()

Using middlewares

func main() {
    l := lion.Classic()

    // Using middleware
    l.Use(lion.NewLogger())

    // Using middleware functions
    l.UseFunc(someMiddlewareFunc)

    l.GetFunc("/hello/:name", Hello)

    l.Run()
}

Group routes by a base path

l := lion.Classic()
api := l.Group("/api")

v1 := api.Group("/v1")
v1.GetFunc("/somepath", gettingFromV1)

v2 := api.Group("/v2")
v2.GetFunc("/somepath", gettingFromV2)

l.Run()

Mounting a router into a base path

l := lion.Classic()

sub := lion.New()
sub.GetFunc("/somepath", getting)


l.Mount("/api", sub)

Default middlewares

lion.Classic() creates a router with default middlewares (Recovery, RealIP, Logger, Static).
If you wish to create a blank router without any middlewares you can use lion.New().

func main()  {
    // This has no middlewares registered
    l := lion.New()
    l.Use(lion.NewLogger())

    l.GetFunc("/hello/:name", Hello)

    l.Run()
}

Custom middlewares should implement the Middleware interface:

type Middleware interface {
    ServeNext(Handler) Handler
}

You can also make MiddlewareFuncs to use with the .UseFunc() method.
It has to accept a Handler and return a Handler:

func(next Handler) Handler

Custom Logger example

type logger struct{}

func (*logger) ServeNext(next lion.Handler) lion.Handler {
    return lion.HandlerFunc(func(c context.Context, w http.ResponseWriter, r *http.Request) {
        start := time.Now()

        next.ServeHTTPC(c, w, r)

        fmt.Printf("Served %s in %s\n", r.URL.Path, time.Since(start))
    })
}

Then in the main function you can use the middleware using:

l := lion.New()

l.Use(&logger{})
l.GetFunc("/hello/:name", Hello)
l.Run()

License: https://github.com/celrenheit/lion/blob/master/LICENSE



Toonz Goes Open Source

Breaking News!

TOONZ GOES OPEN SOURCE

Digital Video, the makers of TOONZ, and DWANGO, a Japanese publisher, announced today that they have signed an agreement for the acquisition by Dwango of Toonz, an animation software independently developed by Digital Video (Rome, Italy).

Digital Video and Dwango agreed to close the deal under the condition that Dwango will publish and develop an Open Source platform based on Toonz (OpenToonz). Effective Saturday March 26, the TOONZ Studio Ghibli Version will be made available to the animation community as a free download.
OpenToonz will include features developed by Studio Ghibli (*Toonz Ghibli Edition), which has been a long time Toonz user. Through OpenToonz, Dwango will create a platform that aims to have research labs and the animated film industry actively cooperating with each other.
With this agreement in place, Digital Video will move to the open source business model, offering the industry commissioning, installation & configuration, training, support and customization services, while allowing the animators’ community to use state of the art technology at no cost.
Digital Video will also continue to develop and market a Toonz Premium version at a very competitive price for those companies willing to invest in the customization of Toonz for their projects. A comprehensive list of the new services available can be found at www.toonzpremium.com.
Commenting on this exciting announcement, Mr. Atsushi Okui, Executive Imaging Director at Studio Ghibli said “During the production of ‘Princess Mononoke’ in 1995, we needed a software enabling us to create a certain section of the animation digitally. We checked for what was available at that time and chose ‘Toonz’. Our requirement was that in order to continue producing theatre-quality animation without additional stress, the software must have the ability to combine the hand-drawn animation with the digitally painted ones seamlessly. From then onwards we continued to use the software while going through major updates to make it easier for us to use. We are happy to hear that this open source version contains the Ghibli Edition. We hope that many people inside and outside of the animation industry will utilize this software for their work. We would like to extend our gratitude to the staff of Digital Video.”

 

Claudio Mattei, Managing Director at Digital Video, the makers of TOONZ, said:

“The contract with Dwango, which offers the Toonz open source platform to the animation community, has enabled Digital Video to realize one of its strategies, i.e. to make of Toonz a world standard for 2D animation. This deal will also be the starting point of a new exciting plan to endorse the open source business model, by supporting training and customizing Toonz for the old and new users. We are proud to share this path with Dwango and with Studio Ghibli, the renowned Toonz user since 1995.”
Mr Nobuo Kawakami, Chairman and CTO at Dwango, added:

“It is a great honour for us to be able to release OpenToonz as open source software. We’d like to express our deepest appreciation to Digital Video and Studio Ghibli for their help and support. We hope the high-quality software that meets the demands of animation professionals will contribute to revitalizing the animation industry. Dwango will also utilize OpenToonz in order to present its research and development results.”
The open source version of TOONZ will be officially presented in Tokyo at Anime Japan (March 26 and 27).
Digital Video is an R&D company with a widely proven talent in defining and developing innovative and visionary products and bringing them to market. The company has its main offices in Rome (Italy) and specializes in high-end computer graphics for professional applications as well as for broadcast and cinema. More information at www.digitalvideo.eu.com

 

Contact: company@toonz.com



LinearGo now supports Go 1.6


This is a Golang wrapper for LIBLINEAR (C.-J. Lin et al.) (GitHub).
Note that the interface of this package might be slightly different from the
liblinear C interface because of Go conventions. Yet, I’ll try to align the
function names and functionality to the liblinear C library.

GoDoc: Document.

Introduction to LIBLINEAR

LIBLINEAR is a linear classifier for data with millions of instances and features. It supports

  • L2-regularized classifiers
    • L2-loss linear SVM, L1-loss linear SVM, and logistic regression (LR)
  • L1-regularized classifiers (after version 1.4)
    • L2-loss linear SVM and logistic regression (LR)
  • L2-regularized support vector regression (after version 1.9)
    • L2-loss linear SVR and L1-loss linear SVR.

Install

This package depends on LIBLINEAR 2.1+ and Go 1.6+. Please install them first via Homebrew or
other package managers on your OS:

brew update
brew info liblinear # make sure your formula will install version higher than 2.1
brew install liblinear

brew info go # make sure version 1.6+
brew install go

After liblinear installation, just go get this package

go get github.com/lazywei/lineargo

Usage

The package is based on mat64.

import linear "github.com/lazywei/lineargo"

// ReadLibsvm(filepath string, oneBased bool) (X, y *mat64.Dense)
X, y := linear.ReadLibsvm("heart_scale", true)

// Train(X, y *mat64.Dense, bias float64, solverType int,
//  C_, p, eps float64,
//  classWeights map[int]float64) (*Model)
// Please checkout liblinear's doc for the explanation for these parameters.
model := linear.Train(X, y, -1, linear.L2R_LR, 1.0, 0.1, 0.01, map[int]float64{1: 1, -1: 1})
y_pred := linear.Predict(model, X)

fmt.Println(linear.Accuracy(y, y_pred))

Self-Promotion

This package is mainly built because of
mockingbird, which is a programming
language classifier in Go. Mockingbird is my Google Summer of Code 2015 Project
with GitHub and linguist. If you like it,
please feel free to follow linguist, mockingbird, and this library.



Curl, 17 years old today (2015)

Today we celebrate the fact that it is exactly 17 years since the first public release of curl. I have always been the lead developer and maintainer of the project.

When I released that first version in the spring of 1998, we had only a handful of users and a handful of contributors. curl was just a little tool and we were still a few years out before libcurl would become a thing of its own.

The tool we had been working on for a while was still called urlget in the beginning of 1998, but as we had just recently added FTP upload capabilities that name no longer fit, and I decided cURL would be more suitable. I picked ‘cURL’ because the word contains URL and already then the tool worked primarily with URLs, and I thought that it was fun to partly make it a real English word “curl” but also that you could pronounce it “see URL” as the tool would display the contents of a URL.

Much later, someone (I forget who) came up with the “backronym” Curl URL Request Library which of course is totally awesome.

17 years are 6209 days. During this time we’ve done more than 150 public releases containing more than 2600 bug fixes!

We started out GPL licensed, switched to MPL and then landed in MIT. We started out using RCS for version control, switched to CVS and then git. But it has stayed written in good old C the entire time.

The term “Open Source” was coined in 1998 when the Open Source Initiative was started, just the month before curl was born, preceded by only a few days by the announcement from Netscape that they would free their browser code and make an open browser.

We’ve hosted parts of our project on servers run by the various companies I’ve worked for and we’ve been on and off various free services. Things come and go. Virtually nothing stays the same so we better just move with the rest of the world. These days we’re on github a lot. Who knows how long that will last…

We have grown to support a ridiculous amount of protocols and curl can be built to run on virtually every modern operating system and CPU architecture.

The list of helpful souls who have contributed to make curl into what it is now has grown at a steady pace all through the years, and it now holds more than 1200 names.

Employments

In 1998, I was employed by a company named Frontec Tekniksystem. I would later leave that company and today there’s nothing left in Sweden using that name as it was sold and most employees later fled away to other places. After Frontec I joined Contactor for many years until I started working for my own company, Haxx (which we started on the side many years before that), during 2009. Today, I am employed by my fourth company during curl’s life time: Mozilla. All through this project’s lifetime, I’ve kept my work situation separate and I believe I haven’t allowed it to disturb our project too much. Mozilla is however the first one that actually allows me to spend a part of my time on curl and still get paid for it!

The Netscape effort announced two months before curl was born later became Mozilla and the Firefox browser. Where I work now…

Future

I’m not one of those who spend time gazing toward the horizon dreaming of future grandness and making up plans on how to get there. I work on stuff right now to work tomorrow. I have no idea what we’ll do and work on a year from now. I know a bunch of things I want to work on next, but I’m not sure I’ll ever get to them or whether they will actually ship or if they perhaps will be replaced by other things in that list before I get to them.

The world, the Internet and transfers are all constantly changing and we’re adapting. No long-term dreams other than sticking to the very simple and single plan: we do file-oriented internet transfers using application layer protocols.

Rough estimates say we may have a billion users already. Chances are, if things don’t change too drastically without us being able to keep up, that we will have even more in the future.

It has to feel good, right?

I will of course point out that I did not take curl to this point on my own, but that aside the ego-boost this level of success brings is beyond imagination. Thinking about that my code has ended up in so many places, and is driving so many little pieces of modern network technology is truly mind-boggling. When I specifically sit down or get a reason to think about it at least.

Most of the days however, I tear my hair when fixing bugs, or I try to rephrase my emails to not sound old and bitter (even though I can very well be that) when I once again try to explain things to users who can be extremely unfriendly and whining. I spend late evenings on curl when my wife and kids are asleep. I escape my family and rob them of my company to improve curl even on weekends and vacations. Alone in the dark (mostly) with my text editor and debugger.

There’s no glory and there’s no eternal bright light shining down on me. I have not climbed up onto a level where I have a special status. I’m still the same old me, hacking away on code for the project I like and that I want to be as good as possible. Obviously I love working on curl so much I’ve been doing it for over seventeen years already and I don’t plan on stopping.

Celebrations!

Yeps. I’ll get myself an extra drink tonight and I hope you’ll join me. But only one, we’ll get back to work again afterward. There are bugs to fix, tests to write and features to add. Join in the fun! My backlog is only growing…



Review: Raspberry Pi model 3 B, with Benchmarks vs. Pi 2

Raspberry Pi Model 3 B - Image from raspberrypi.org

On Pi Day (3/14/16), I finally acquired a Raspberry Pi model 3 B from my local Micro Center (I had ordered one from Pimoroni on launch day, but it must be stuck in customs). After arriving home with it, I decided to start putting it through its paces. Below is my review and extensive benchmarking of the Pi 3 (especially in comparison to the Pi 2).

Hardware changes

There are a few notable hardware changes on the Pi 3:

  1. The LEDs have moved to the opposite side of the ‘front’ of the board, having been displaced by a tiny wireless antenna where they used to be. This has the unfortunate side effect of making the LED openings on most older Pi cases (including the Raspberry Pi ‘official’ case) misaligned. I can still see a faint light through the front of my case, and everything fits fine, so it’s not a huge deal to me.
  2. The microSD card is not retained by a spring ‘push to eject’ mechanism anymore; it simply slides in and slides out (just like the Pi Zero). This helps cut down on accidental removal when holding the Pi to insert USB plugs, but it can also make the card slightly harder to remove in some cases.
  3. There’s a wireless antenna near the GPIO. If you need to use Bluetooth or WiFi, make sure you place the Pi 3 in a case that’s not a faraday cage, and also consider the orientation of your Pi when trying to pick up radio signals—orienting it away from your router behind a brick wall will lead to a poor connection.

Networking Benchmarks

Network performance is one of the most straightforward, but most caveat-laden, aspects to benchmark. You can measure raw throughput, network file copy, and file download/upload performance. However, some of the benchmarks (e.g. file copies) are also dependent on other parts of the Pi (e.g. disk I/O, USB bus, memory I/O, etc.), so I focus on raw network throughput. And in that regard, the Pi 3 offers a respectable gain over the Raspberry Pi model 2 B:

Raspberry Pi Model 3 B - Networking iperf throughput benchmark vs pi 2
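
(For reference, raw throughput numbers like these are typically gathered with iperf, run along these lines; the server address below is a placeholder:)

iperf -s                      # on a fast wired machine on the LAN
iperf -c 192.168.1.10 -t 60   # on the Pi, pointed at that machine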

For the Pi 2, I tested WiFi with an Edimax USB 2.0 802.11n adapter, and for the Pi 3, I tested with the built-in WiFi; in both cases the signal strength was excellent, and I connected to an AirPort Extreme (6th gen) WiFi router. WiFi performance was consistent on both Pis, and with both, I disabled WiFi power management to make the connection stable (see caveats at the end of this section).
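
(On Raspbian, disabling power management for the wireless interface can be done with something like the following; the interface name may differ on your system:)

sudo iwconfig wlan0 power off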

For the ‘GigE’ test, I plugged a TRENDnet USB 3.0 Gigabit ethernet adapter into one of the Pi’s USB 2.0 ports, configured the adapter as an eth1 interface, and disconnected/disabled the built in interface(s). See this article for more info: Gigabit Networking on a Raspberry Pi.
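
(A minimal sketch of bringing such an adapter up as eth1 on Raspbian, assuming DHCP, is an /etc/network/interfaces stanza like the following; your setup may differ:)

allow-hotplug eth1
iface eth1 inet dhcp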

The good news? The Pi 3 can hit sustained throughput of 321 Mbps using a Gigabit adapter. This means the Pi can sustain a theoretical 40 MB/s file copy, making the Pi 3 a marginally-useful file server (e.g. NFS) or dedicated proxy/router—much more so than any Pi before.

The bad news? Don’t expect those numbers to be sustainable if you shove lots of bits through the Pi; in my testing, even with a passive aluminum heatsink, the Pi 3 started throttling the transfer speeds when pushing it to the max after 3-5 minutes, and needed a 2-4 minute cool-down period before speeds would max out again.

The built-in LAN offered almost identical benchmark results as it did on the Pi 2.

When it comes to WiFi, there are some other caveats as well.

For the full dataset and rationale behind different benchmarks, check out: Raspberry Pi Networking Benchmarks.

microSD Card Benchmarks

In late 2015, I published a comprehensive benchmark of microSD card performance on the Raspberry Pi 2. The Pi 3, with its faster clock and new SoC architecture, has the potential to make microSD card operations even faster, so I re-ran benchmarks on all the Samsung and SanDisk cards I have on the Pi 3, and have added those results to my Pi Dramble microSD Card Benchmarks page. In summary:

Raspberry Pi Model 3 B vs Model 2 B - microSD card reader 4K random read benchmark

One caveat: With microSD performance timings, the numbers are always about +/- 5% different between runs, so the real-world difference between the Pi 2 and Pi 3 in their default state will be minimal at best—though the Pi 3 is measurably and reliably a little faster, in my testing.

Things get even more interesting if you overclock the microSD card reader in the Raspberry Pi 2/3. Overclocking adds another 20% speed boost to most file operations (at least for reads), and can double large file read/write performance! Unless you need the utmost performance and have very reliable power supplies and microSD cards, though, it’s safer to leave the Pi at its normal clock.
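
(On the Pi 2/3, the microSD overclock is applied through a device tree overlay in /boot/config.txt; a typical line, assuming a reasonably recent firmware, looks like the one below. Treat it as an at-your-own-risk setting.)

dtoverlay=sdhost,overclock_50=100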

Keeping it cool: the (hot) 64-bit Cortex-A53 Broadcom SoC

Raspberry Pi 3 - aluminum heat sink
For the Pi 3, a heat sink isn’t just for overclockers.

The Raspberry Pi 3’s new A53-series 64-bit System-on-a-Chip uses a bit more power than the older ARM processors used in previous Pis, and as a result it also gives off more heat; if the heat from the chip doesn’t have anywhere to go, then things can get hot enough to trigger some processor throttling.

Many overclockers would put heat sinks on their Pi 2s and older models, but in my testing, thermal throttling was never an issue—even when running with a moderate overclock and hammering the Pi Dramble with thousands of PHP requests/second continuously (e.g. 100% CPU for many minutes). Reddit user ghalfacree used a thermal imager to take thermal images of the various Raspberry Pi models, and you can see there’s a marked difference when it comes to heat dissipation from the CPU on the Pi 3.

For this reason, I purchased some passive aluminum heat sinks (with pre-applied thermal paste) and applied them to the SoC on my Raspberry Pi 3s. So far the heat sink seems to be working well; it dissipates enough heat to prevent CPU throttling under high load. I haven’t done much testing with overclocking, but I typically keep the defaults since I favor reliability over performance.

Running stress --cpu 4 to hammer the processor continuously, I didn’t notice any sustained throttling after installing the heat sink (there was a short (1-2 second) throttle every few minutes), and temperatures seemed to level off around 81.0°C (at least according to the internal CPU temperature reporting). Measuring the surface temp of the heat sink showed about 65°C—quite hot to the touch! But unless you’re going to be running the CPU at full throttle, overclocked, all the time, I think a passive heat sink and a case that allows some amount of convective airflow will be enough.

Also, a ProTip for monitoring CPU temperature and frequency while doing benchmarks; run the following in a separate terminal window to see updates continuously (thanks to GTR2fan on the Pi forums for that!):

while true; do vcgencmd measure_temp && vcgencmd measure_clock arm; sleep 2; done

Powering the Pi 3

The Raspberry Pi 3 is no Pi Zero when it comes to conserving power and handling almost any kind of flaky power supply.

Raspberry Pi 3 - low voltage onscreen rainbow icon
The undervolt rainbow made frequent appearances when building PHP from source.

I have many USB power supplies of varying quality. Pis < 3 would survive with almost any USB power source (as long as you don't need to power a bunch of USB devices off the Pi), but the Pi 3 needs a much better supply. Even using a reliable Apple 2A iPad charger, I noticed the red power LED flicker off every now and then, with a corresponding momentary voltage dip below 5.12V and an undervolt rainbow.

You need a good power adapter for the Pi 3, much more so than in the past, even if you’re running it headless. Under heavy load (e.g. PHP processes pegging all 4 CPU cores), the Pi 3 had momentary voltage drops. If you’re running the Pi 3 as a desktop replacement, attached to a monitor, using Raspbian’s GUI with a keyboard and mouse, this is even more crucial to a painless experience.

Luckily for my Pi Dramble, the 6 port USB charger I use provides a solid 2A per port, and things seem to run fine there, even under load.

Power benchmarks

Raspberry Pi 2 vs Raspberry Pi 3 model B power consumption

The Pi 2 was slightly more power-hungry than all the Pis before, and the Pi 3 continues the tradition; the higher-clocked CPU, extra built-in circuits, and newer quad core architecture add up to about a 40% increase in power draw when under heavy load. Luckily, idle and normal CPU load doesn’t incur too much of a penalty vs. the Pi 2—it’s only bursty CPU spikes that draw more power (and generate a lot more heat).

I have some tips for limiting power consumption on the Pi 3 (similar to Zero, A+, etc.) in this post: Conserve power when using Raspberry Pis.

PHP benchmarks – Drupal 8

Since I’m a Drupal developer in my day job, and since I like using my Raspberry Pi Dramble cluster to tinker with Ansible, Drupal, GPIO, and more, I like to use Drupal to benchmark the entire system—memory, CPU, disk I/O, and networking. I’ve been maintaining and re-running a large number of benchmarks to test Drupal on the Raspberry Pi in various configurations. Full details (which are continuously updated) are listed on the Pi Dramble Drupal Benchmarks page, but here are some relevant benchmarks comparing the Pi 3 to the Pi 2:

Drupal 8.0.5 and PHP 7.0.4 performance on Raspberry Pi 2 vs Raspberry Pi 3

I’ve beaten the PHP 5.6 vs PHP 7 vs HHVM benchmarks to death, so from this point forward I’m only going to be dealing with PHP 7.x when benchmarking Drupal 8. The Drupal Pi project allows you to build PHP from source in the project’s config.yml, and using that I installed PHP 7.0.4 with Drupal 8.0.5.

The Pi 3 offers a decent and highly consistent 30-33% performance improvement for Drupal 8 and the LEMP stack.

Summary

The Raspberry Pi 3 is, in some ways, just an evolutionary improvement over the Pi 2. In terms of major changes, the Pi 2 was a much larger change over the B+ and all other Pis before due mainly to its multi-core processor. The Pi 3’s incremental improvements, and convenient built-in WiFi and Bluetooth, make the Pi 3 the best Pi for almost any general computing need. I think the Pi 3 release is like an iPhone ‘S’ release—no major changes to the look and feel or earth-shattering new features… but everything feels better, and there are small improvements to make usage easier (e.g. no more $10 WiFi dongle to buy separately). It’s the first Pi on which Raspbian’s desktop UI and browser feels usable outside of testing purposes, and that’s pretty exciting!

For the first time, however, I recommend purchasing an official Raspberry Pi power supply (or making sure the one you use is very good quality with consistent 2+ amp output), and also purchasing at least a passive heat sink to adhere (with proper thermal compound) to the Broadcom SoC.

Further reading


