Coinbase Co-founder: Ethereum Is the Forefront of Digital Currency

We have sat here for the last 3 years seeing only infrastructure apps like wallets and exchanges emerge on top of Bitcoin. Why is that?

My theory has been that the scripting language in Bitcoin — the piece of every Bitcoin transaction that lets you run a little software program along with it — is too restrictive.

Enter Ethereum. Ethereum has taken what was a four-function calculator of a programming language in Bitcoin and turned it into a full-fledged computer. We now stand only 9 months out from the beginning of the Ethereum network and the level of app development is already faster than Bitcoin’s. We are finally getting rapid iteration at the app layer. In one early example, people have designed a decentralized organization (The DAO) — a company whose heart is code and whose peripheral operations are run by humans, rather than the other way around — that has raised $150m so far in the largest crowdfunding ever.

To be clear, I don’t think this needs to be a contest between Bitcoin vs. Ethereum and Coinbase plans to strongly support both. I think this is about advancing digital currency as much as we can. There is a significant amount of overlap between the two, however, so the comparison is valuable and the potential for competition is real.

How did we get here?

First, some history. When the Bitcoin white paper emerged in 2008 it was completely revolutionary. The number of concepts that had to come together in just the right way — computer science, cryptography, and economic incentives — was astonishing. When the actual Bitcoin network launched in 2009, no one knew about it, and many of those who did thought it would surely fail. Just to make sure the thing worked, the scripting language in Bitcoin was intentionally extremely restrictive. “Scripting language” is a fancy way of saying an easy-to-work-with programming language (in fact, Bitcoin doesn’t exactly have a scripting language, it uses a stack with script operators — more on that later). The scripting language in Bitcoin is important because it is what makes Bitcoin “programmable money”. Within each Bitcoin transaction is the ability to write a little program. For example, you can write a little program in a Bitcoin transaction that says “this transaction isn’t valid unless it’s June 15th, 2016 or later”. This is very powerful because you can move money automatically with computer code, and everyone can see the rules by which that money moves and know those rules will be followed.

It was, and still is, incredible that Bitcoin got off the ground and is alive after 7 years. It is the first network ever to allow anyone in the world to access a fundamentally open financial system through free software. It has ~$7bn in market cap and has never had a systemic issue which could not be fixed. To some this is already a great success.

However, we also stand here 7 years into Bitcoin with few apps and no “killer apps” beyond store of value and speculation. The scripting language in Bitcoin has barely expanded and remains very restrictive. While Bitcoin has become embroiled in debate over the block size — an important topic for the health of the network, but not something that should halt progress in a young and rapidly developing field — Ethereum is charting new territory, both intellectually and executionally.

Make no mistake — Ethereum would never have existed without Bitcoin as a forerunner. That said, I think Ethereum is ahead of Bitcoin in many ways and represents the bleeding edge of digital currency. I believe this for a few reasons:

Ethereum’s programming languages let you do much more than Bitcoin’s

As mentioned above, Bitcoin’s scripting language is intentionally restrictive. You might liken it to programming with an advanced graphing calculator — functionality is limited. As a result, you can only do basic things. It is also hard to understand and use. Unlike most modern programming languages, where the code reads almost like a sentence, it looks like unintelligible machine code. As a result, it took Mike Hearn, a talented ex-Google developer, a whopping 8 months to write a first version of a fairly simple crowdfunding application.

In contrast, Ethereum’s programming languages (Solidity for those who like JavaScript, Serpent for those who like Python) let you do pretty much anything an advanced programming language would let you do. This is why they are said to be “Turing complete”. Equally important, they are easy to use. Any developer can pick them up and quickly write a first app.

Here’s an example of a script in Bitcoin:

OP_DUP OP_HASH160 62e907b15cbf27d5425399ebf6f0fb50ebb88f18 OP_EQUALVERIFY OP_CHECKSIG

And one in Ethereum’s Solidity:

contract Simple {
    function() {
        var two = 1 + 1;
    }
}

Developers at Coinbase have written simple Ethereum apps in a day or two.

I cannot overstate how important this combination of full programming functionality and ease of use is. People are doing things in Ethereum that are not possible right now in Bitcoin. It has created a new generation of developers who never worked with Bitcoin but are interested in Ethereum.

Bitcoin could have this advanced functionality, but only through a series of additional layers on top of the Bitcoin protocol that have yet to be created, while Ethereum provides it out of the box.

Beyond the radical difference in scripting languages, developer tools are much better in Ethereum. Bitcoin has never had a set of developer tools that caught on much, and they are sorely needed given it is much harder to work with Bitcoin out of the box. Ethereum has made life as a developer much easier. It has a welcoming homepage for devs and its own development environment (Mix IDE) amongst others.

Ethereum has a more robust developer community

The developer community in Bitcoin feels fairly dormant. Bitcoin never really made it past the stage of simple wallets and exchanges. The most notable thing to be released recently is an implementation of the Lightning Network (a way of making transactions, especially microtransactions, more efficient) called Thunder. This is an additional protocol layer, not an application, however, and could be used by both Bitcoin and Ethereum.

In contrast, Ethereum’s developer community feels vibrant and growing. Most importantly, entirely new things are being tried on Ethereum. While most are experiments or toys at the moment, you can see a rapidly expanding list of apps built by developers from around the world.

Developer mindshare is the most important thing to have in digital currency. The only reason these networks (Bitcoin, Ethereum) and their tokens (bitcoin, ether) have value is because there is a future expectation that people will want to acquire those tokens to use the network. And developers create the applications which drive that demand. Without a reason to use the network, both the network and its currency are worth nothing.

Ethereum’s core development team is healthy while Bitcoin’s is dysfunctional

Vitalik Buterin, the creator of Ethereum, has shown early promise as the leader of an open source project. He seems comfortable as both a community and a technical leader. As an example, here’s what he sent us when we added Ethereum to GDAX, our exchange.

In contrast, Bitcoin has had a leadership vacuum since Gavin Andresen stepped aside after other core developers did not get on board with his (in my opinion rational and convincing) arguments to increase the block size. “Core developers” as they now stand are also relatively fragmented.

Beyond the leadership vacuum, what “leadership” Bitcoin does have is toxic. Greg Maxwell, technical leader of Blockstream, which employs a solid chunk of core developers, recently referred to other core developers who were working with miners on a block size compromise as “well meaning dips***s.” A second discussion board, /r/btc, needed to form on reddit because of censorship on the original /r/bitcoin. The content on the Bitcoin discussion boards feels like squabbling, while Ethereum’s discusses relevant issues and new ideas. In summary, Ethereum’s leadership (and as a result its community) is moving forward, while things need to get worse before they can get better in Bitcoin.

Ethereum has a growth mindset while Bitcoin has a false sense of accomplishment

The general mindset of the two communities feels different as well. Many in Bitcoin seem to have a false sense of “we’ve got this really valuable network we need to protect!”. In my opinion that view is wrong and dangerous. Bitcoin is still orders of magnitude smaller than the major financial networks of the world at ~$200m/day in transaction volume (Visa $18 billion/day, SWIFT wire $5 trillion/day) and ~10 million users (5 billion in banks). And while transactions per day on Bitcoin seem to be increasing at a healthy pace, the actual $ volume of transactions on Bitcoin is not growing much.

Bitcoin transaction volume in peak times compared to other networks — we’ve got a long way to go

Meanwhile, the core development team in Ethereum is focused. This is evident from the Ethereum blog. When I started reading it, it was everything I found myself thinking about for the present and future of Bitcoin but didn’t see being discussed much: scaling the network, the viability of proof of stake, how to create a stable digital currency, what a blockchain based company (DAO) would look like, amongst other topics. These are very ambitious ideas and some won’t work. But some probably will work, and they will be important — moving to proof of stake and eliminating physical mining being one of the most promising.

Ethereum is making faster and more consistent technical progress on the core protocol

In Bitcoin, we have mostly been stuck on the block size debate for the last year and a half. Some minor improvements have been made (CHECKLOCKTIMEVERIFY to enable the time locking functionality mentioned earlier), and others are in development but not yet live (Segregated Witness to make the network more efficient). None of these changes have sparked much in the way of application development yet.

Meanwhile, beyond the more robust programming language, Ethereum is making advancements that are core to even basic transactions. Its mining allows for much quicker blocks, and thus much quicker transaction confirmation times — about 14 seconds on Ethereum compared to 10 minutes on Bitcoin (not an apples to apples comparison, but the larger point holds). This is largely due to the concept of miners getting paid for the work they put in whether or not they are the first to solve the next block (a system called “uncle blocks”). While this system isn’t perfect yet, it’s meaningful forward progress towards quicker transaction confirmations.

Counterargument and caveats

Ethereum is young and it’s prudent to highlight the risks:

  • Ethereum has been able to take more risk with new features because it has had less to lose. Most of Ethereum’s history has occurred while it has held in the hundreds of millions of dollars, while Bitcoin is in the billions. As Ethereum continues to grow, it may not be able to “move fast and break things” in the same way. In practice I think this mostly comes down to the quality of the core development team — if they continue to make progress and build trust with the community, execution can still be rapid, as shown by Linus Torvalds with Linux as an open source project.
  • Ethereum hasn’t gone through a governance crisis. Vitalik acknowledged this at an Ethereum meetup we hosted at Coinbase. Like any project that has success, it’s inevitable to hit bumps as peoples’ vested interests get bigger.
  • Ethereum allows you to do more than you currently can in Bitcoin, and that brings increased regulatory risk. This is less of a systemic risk to Ethereum as a network, rather more of a risk to specific applications of Ethereum. A good example would be decentralized organizations (ex: the DAO) and regulation which would normally apply to a corporation.
  • There is a greater security risk with Ethereum. Having a more robust programming language creates a greater surface area for things to go wrong. Bitcoin has been battle tested for 7 years. Ethereum has been live for 9 months and now stores about $1bn. While there hasn’t been a major issue yet, it is possible there are issues people are not yet aware of. This probability goes down with each passing day. People will definitely create smart contracts with bugs in Ethereum. This won’t be because of a failure of the core Ethereum protocol though, much like the failure of Mt. Gox was not an error in the Bitcoin protocol.
  • Ethereum may attempt to move to proof of stake. This would be a huge breakthrough if it works as it would eliminate the need for proof of work and all of the hardware and electricity use that goes with it, but also presents a large risk. I believe this risk is manageable because there would be extensive testing beforehand.
  • Scaling the network is harder when it supports mini programs in addition to basic transaction processing. This was the biggest question I had when I started to read about the idea in 2014. While there is no silver bullet here, I think some combination of solutions will be developed over time as they are with any evolving technology. Some possibilities for Ethereum are sharding the network, computing power and networks naturally getting faster over time, and the economics of the Ethereum blockchain only running the most important things as a forcing function. There is a decent argument (best articulated by Gavin Andresen in his article Bit-thereum) that it’s better to keep the base transaction layer dumb for scaling reasons with advanced logic in higher layers. It’s possible we come full circle and end up back there, but this isn’t how interesting things are being created at the moment because it’s harder to 1) create and 2) get decent adoption of multiple layers in the stack than it is to have it all out of the box in Ethereum.

Wait — why is this a contest? Are Bitcoin and Ethereum competitors or complementary?

This remains to be seen. It’s possible Bitcoin remains the protocol that people are comfortable storing their value in because it is more stable and reliable. This would allow Ethereum to continue to take more risk by trying less tested advancements. In this scenario, Bitcoin is more of a settlement network while Ethereum is used to run decentralized applications (where most of the transaction volume occurs is up in the air). The two could be quite complementary.

What is very real, though, is the possibility that Ethereum blows past Bitcoin entirely. There is nothing that Bitcoin can do which Ethereum can’t. While Ethereum is less battle tested, it is moving faster, has better leadership, and has more developer mindshare. Developers → apps → users → network success. First mover advantage is challenging to overcome, but at current pace, it’s conceivable.

What does all this mean?

It’s all good news for digital currency. Ethereum is pushing the envelope and I am more excited than ever. Competition and new ideas create better outcomes for everyone. Even if Ethereum goes up in flames our collective knowledge in digital currency will have leveled up significantly. I have not given up on Bitcoin and it’s hard to argue with a network that has been so resilient. I, and Coinbase, plan on supporting both. We’ll probably support other things that haven’t been invented yet in the future. At the end of the day, I have no allegiance to any particular network; I just want whatever brings the most benefit to the world.

Taking a step back, it feels like the rate of change in digital currency is accelerating.

Digital currency is a unique field because of how ambitious the scope is: creating a better transaction network for the entire world (for currency, assets, our online identities, and many other things). Like the Internet itself, this is not one company selling its own proprietary product; it is a series of low-level protocols that will connect everyone someday. And, like the Internet, it will take (and already has taken) longer to develop, but the impact will be immense.

Fasten your seatbelts.

Pastejacking Attack Appends Malicious Terminal Commands To Your Clipboard

An anonymous reader writes: “It has been possible for a long time for developers to use CSS to append malicious content to the clipboard without a user noticing and thus fool them into executing unwanted terminal commands,” writes Softpedia. “This type of attack is known as clipboard hijacking, and in most scenarios is useless, except when the user copies something inside their terminal.” Security researcher Dylan Ayrey published a new version of this attack last week that uses only JavaScript as the attack medium, giving the attack more versatility and making it easier to carry out. The attack is called Pastejacking, and it uses JavaScript to rig an entire page so that commands run behind a user’s back when they paste anything inside the console. “The attack can be deadly if combined with tech support or phishing emails,” writes Softpedia. “Users might think they’re copying innocent text into their console, but in fact, they’re running the crook’s exploit for them.”

Read more of this story at Slashdot.

Amazon Elastic Transcoder Update – Support for MPEG-DASH

Amazon Elastic Transcoder converts media files (audio and video) from one format to another. The service is robust, scalable, cost-effective, and easy to use. You simply create a processing pipeline (pointing to a pair of S3 buckets for input and output in the process), and then create transcoding jobs. Each job reads a specific file from the input bucket, transcodes it to the desired format(s) as specified in the job, and then writes the output to the output bucket. You pay for only what you transcode, with price points for Standard Definition (SD) video, High Definition (HD) video, and audio.

We launched the service with support for an initial set of transcoding presets (combinations of output formats and relevant settings). Over time, in response to customer demand and changes in encoding technologies, we have added additional presets and formats. For example, we added support for the VP9 Codec earlier this year.

Support for MPEG-DASH
Today we are adding support for transcoding to the MPEG-DASH format. This International Standard format supports high-quality audio and video streaming from HTTP servers, and has the ability to adapt to changes in available network throughput using a technique known as adaptive streaming. It was designed to work well across multiple platforms and at multiple bitrates, simplifying the transcoding process and sidestepping the need to create output in multiple formats.

During the MPEG-DASH transcoding process, the content is transcoded into segmented outputs at the different bitrates, and a playlist is created that references these outputs. The client (most often a video player) downloads the playlist to initiate playback. Then it monitors the effective network bandwidth and latency, and requests video segments as needed. If network conditions change during the playback process, the player will take action, upshifting or downshifting as needed.
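The player behavior described above can be sketched in a few lines. This is a simplified, hypothetical selection loop — the bitrates and the pickRendition helper are made up for illustration, and real DASH clients are far more sophisticated:

```javascript
// Hypothetical renditions a transcoding job might produce (bitrates in kbit/s).
var renditions = [600, 1200, 2400, 4800];

// Pick the highest rendition that fits within the measured bandwidth,
// leaving some headroom; fall back to the lowest if nothing fits.
function pickRendition(measuredKbps) {
    var headroom = 0.8; // only budget 80% of measured bandwidth
    var budget = measuredKbps * headroom;
    var choice = renditions[0];
    for (var i = 0; i < renditions.length; i++) {
        if (renditions[i] <= budget) {
            choice = renditions[i];
        }
    }
    return choice;
}

console.log(pickRendition(1000)); // 600 — only 800 kbit/s of budget
console.log(pickRendition(7000)); // 4800 — plenty of bandwidth
```

A player would re-run a check like this between segment requests, which is what produces the upshifting and downshifting behavior.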

You can serve up the transcoded content directly from S3 or you can use Amazon CloudFront to get the content even closer to your users. Either way, you need to create a CORS policy that looks like this:


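As a sketch (assuming playback from any origin — tighten AllowedOrigin for production), a typical S3 CORS configuration for serving DASH content is:

```xml
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
```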
If you are using CloudFront, you also need to enable the OPTIONS method and allow it to be cached.

You also need to add three headers to the whitelist for the distribution.

Transcoding With MPEG-DASH
To make use of the adaptive bitrate feature of MPEG-DASH, you create a single transcoding job and specify multiple outputs, each with a different preset. There are five presets to choose from (4 for video and 1 for audio).

When you use this format, you also need to choose a suitable segment duration (in seconds). A shorter duration produces a larger number of smaller segments and allows the client to adapt to changes more quickly.

You can create a single playlist that contains all of the bitrates, or you can choose the bitrates that are most appropriate for your customers and your content. You can also create your own presets, using an existing one as a starting point.

Available Now
MPEG-DASH support is available now in all Regions where Amazon Elastic Transcoder is available. There is no extra charge for the use of this format (see Elastic Transcoder Pricing to learn more).



A better way to structure D3 code

Note: This blog post is aimed at beginner-to-intermediate users of the D3 JavaScript library. If you want to skip straight to the good stuff, check out the accompanying example.

Code written using D3 is difficult to manage. Even in a simple line chart, there will be almost a dozen important variables such as (deep breath): width, height, margins, SVG, x-scale, x-axis generator function, x-axis SVG group, y-scale, y-axis generator function, y-axis SVG group, line generator function, path element and (most important of all) a data array/object. And that’s just the bare minimum.

Because most of these need to be accessible at several points in a script, the temptation is to structure the entire thing in one giant function. Many examples are essentially unstructured, which makes the concepts nice and clear but in real-world code can lead to an unmanageable mess.

Credit where credit is due: I was introduced to this idea by my colleague Jason French. I’ve since adopted it and use it regularly. This is my attempt at formalising it.

A solution: object-oriented programming

Think of a D3 chart or visualisation as a ‘widget’ on the page. This provides a number of benefits:

  • All of the chart’s related properties and functions are kept in a single place, both in script files and during execution.
  • Multiple instances of the chart can exist on the same page without conflicting.
  • External controls (buttons, sliders, etc) can easily modify the chart without risking breaking anything.

Here’s what we’re aiming for: being able to create the chart as if it were a Highcharts/C3-style thing.

var chart = new Chart({
    element: document.querySelector('.chart-container'),
    data: [
        [new Date(2016,0,1), 10],
        [new Date(2016,1,1), 70],
        [new Date(2016,2,1), 30],
        [new Date(2016,3,1), 10],
        [new Date(2016,4,1), 40]
    ]
});
Which we could then modify like so:

// load in new data
chart.setData( newData );

// change line colour
chart.setColor( 'blue' );

// redraw chart, perhaps on window resize
chart.draw();

A quick introduction to constructor functions

Before we move onto the D3-specific stuff, it’s worth learning how to use constructor functions. This is a useful general-purpose pattern for JavaScript code used frequently in both JavaScript’s native functions and in third-party libraries.

You may already be familiar with:

var d = new Date(2016,0,1);

This creates a new object stored in d, which is based on (but does not replace) the original Date object. Date is a constructor, and d is an instance of Date.

We can make our own constructor functions like so:

var Cat = function() {
    // nothing here yet
};

Cat.prototype.cry = function() {
    return 'meoww';
};

The .prototype bit defines an instance method, which will be available to each instance of the Cat constructor. We would call it like so:

var bob = new Cat();
bob.cry(); // => 'meoww'

Inside of the constructor function, there is a special variable called this which refers to the current instance. We can use it to share variables between instance methods.

var Cat = function(crySound) {
    this.crySound = crySound;
};

Cat.prototype.cry = function() {
    return this.crySound;
};

In this case, we are customising the new cat’s crySound.

var bob = new Cat('meoww');
var noodle = new Cat('miaow');
bob.cry(); // => 'meoww'
noodle.cry(); // => 'miaow'

Because each instance is a new object, this style of coding is called object-oriented programming.

There’s a lot more to constructor functions, and if you want to learn more I recommend reading CSS Tricks’ Understanding JavaScript Constructors and Douglas Crockford’s more hardcore Classical Inheritance in JavaScript.

A chart as a constructor function

Instead of Cat – which is obviously a fairly useless constructor – we could instead make a constructor for a D3 chart:

var Chart = function(opts) {
    // stuff
};

Chart.prototype.setColor = function() {
    // more stuff
};

Chart.prototype.setData = function() {
    // even more stuff
};

Here’s a live example of a chart made using a Chart constructor. Try clicking the buttons below and resizing the window.

And here’s the corresponding JavaScript for the chart. To see how it’s being used, read the full code.

A few things worth emphasising here:

  • Each instance method fulfils a specific purpose. Only some of them are ‘public’ and are meant to be called externally.
  • Some public methods (in this case, setColor) don’t require redrawing the entire chart. Others, like setData, do.
  • Only variables used in other instance methods are added to this.
  • draw needs to be able to work both on initial load and on updates.
  • If you wanted to have instance methods which triggered animations (for example, transitioning axes), you would need to make draw more complex and not simply wipe the element clean each time.
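Stripped of the D3 specifics, the pattern these points describe can be sketched in plain JavaScript. This is a toy Chart that “renders” to a string rather than the DOM — the rendered property and the methods’ bodies are illustrative only, not the article’s actual chart code:

```javascript
var Chart = function(opts) {
    // only variables used by other instance methods go on `this`
    this.data = opts.data;
    this.color = opts.color || 'black';
    this.draw();
};

// public: a cheap update that doesn't require a full redraw
Chart.prototype.setColor = function(color) {
    this.color = color;
    this.rendered = this.rendered.replace(/stroke="[^"]*"/, 'stroke="' + color + '"');
};

// public: new data means a full redraw
Chart.prototype.setData = function(data) {
    this.data = data;
    this.draw();
};

// draw works on initial load and on updates: it wipes and rebuilds,
// and because state lives on `this`, redrawing changes nothing by itself
Chart.prototype.draw = function() {
    this.rendered = '<path stroke="' + this.color + '" d="' +
        this.data.join(' ') + '"/>';
};

var chart = new Chart({ data: [1, 2, 3], color: 'red' });
chart.setColor('blue'); // no redraw needed
chart.setData([4, 5]);  // full redraw; the blue colour is maintained
console.log(chart.rendered); // <path stroke="blue" d="4 5"/>
```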

Watch out for anonymous functions

The only catch with using constructor functions is that the value of this will change inside of anonymous functions – which, in D3, are everywhere.

What do I mean by that? Inside of Chart or a Chart.prototype method, this refers to the Chart instance, as expected.

var Chart = function(opts) {
    // here, `this` is the chart
};

Chart.prototype.setColor = function() {
    // here, `this` is still the chart
};

However, the value of this can change when inside an anonymous function:

Chart.prototype.example = function() {
    // here, `this` is the chart
    var line = d3.svg.line()
        .x(function(d) {
            // but in here, `this` is the SVG line element
        });
};

There’s a simple solution, which is to load this into a variable called _this:

Chart.prototype.example = function() {
    var _this = this;
    var line = d3.svg.line()
        .x(function(d) {
            // in here, `this` is the SVG line element
            // but `_this` (with an underscore) is the chart
        });
};
Hardly difficult to get around, then, but worth keeping in mind. Some people prefer to use that instead of _this, which is just as good.
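Another option — assuming you don’t also need the element as `this` inside the callback, which D3 code often does — is `Function.prototype.bind`, which forces the anonymous function’s `this` to be the instance. A sketch with a made-up Widget constructor:

```javascript
var Widget = function(label) {
    this.label = label;
};

Widget.prototype.tagAll = function(items) {
    return items.map(function(item) {
        // `this` here is the Widget instance, thanks to .bind below
        return this.label + ': ' + item;
    }.bind(this));
};

var w = new Widget('chart');
console.log(w.tagAll(['a', 'b'])); // [ 'chart: a', 'chart: b' ]
```

The trade-off is that you lose D3’s convention of binding the current element to `this`, so inside D3 callbacks the `_this` variable remains the safer habit.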

Rules to live by

To keep the Chart function’s responsibilities from spiralling out of control, I try to stick by these rules:

  • A chart’s appearance should not change if you call its draw() function without changing anything else. Or, to put it in programmer jargon: they must maintain state.
  • A chart does not load its own data. Data is passed to it. Ideally already formatted as nice friendly arrays.
  • A chart does not affect anything outside of its parent element. Pass in callback functions as arguments, if necessary.
  • A chart’s internal functions should each be kept short. A good length is “not taller than your screen”. (Alas, this rule is easily broken.)
  • Make a new constructor for a different type of chart (LineChart, BarChart, etc), rather than relying on “if” statements.

It works, honest

I’ve used this pattern several times now in published graphics, including ECB Meets, Euro Reacts and The World’s Safest Bonds Are Actually Wild Risks. In both cases, the constructor pattern made managing these dynamically-updating charts a breeze. The rest of the code, on the other hand…

Ps. If you enjoyed this, you might like my previous blog post on how my JavaScript coding style has changed since 2014.

Thanks to Amelia Bellamy-Royds for providing feedback on a draft of this post.

Cray’s latest supercomputer runs OpenStack and open source big data tools

Cray has always been associated with speed and power, and its latest computing beast, the Cray Urika-GX system, has been designed specifically for big data workloads. What’s more, it runs on OpenStack, the open source cloud platform, and supports open source big data processing tools like Hadoop and Spark. Cray recognizes that the computing world has evolved since Seymour Cray…

JQuery 3.0 Release Candidate

Welcome to the Release Candidate for jQuery 3.0! This is the same code we expect to release as the final version of jQuery 3.0 (pending any major bugs or regressions). When released, jQuery 3.0 will become the only version of jQuery. The 1.12 and 2.2 branches will continue to receive critical support patches for a while, but will not get any new features or major revisions. Note that jQuery 3.0 will not support IE6-8. If you need IE6-8 support, you can continue to use the latest 1.12 release.

Despite the 3.0 version number, we anticipate that these releases shouldn’t be too much trouble when it comes to upgrading existing code. Yes, there are a few “breaking changes” that justified the major version bump, but we’re hopeful the breakage doesn’t actually affect that many people.

To assist with upgrading, we have a brand new 3.0 Upgrade Guide. And the jQuery Migrate 3.0-rc plugin will help you to identify compatibility issues in your code. Your feedback on the changes will help us greatly, so please try it out on your existing code and plugins!

You can get the files from the jQuery CDN, or link to them directly:

You can also get the release candidate from npm:

npm install jquery@3.0.0-rc1

In addition, we’ve got the release candidate for jQuery Migrate 3.0. We highly recommend using this to address any issues with breaking changes in jQuery 3.0. You can get those files here:

npm install jquery-migrate@3.0.0-rc1

For more information about upgrading your jQuery 1.x and 2.x pages to jQuery 3.0 with the help of jQuery Migrate, see yesterday’s jQuery Migrate blog post.

Major changes

Below are just the highlights of the major new features, improvements, and bug fixes in these releases. You can dig into more detail on the 3.0 Upgrade Guide. A complete list of issues fixed is available on our GitHub bug tracker.

jQuery.Deferred is now Promises/A+ compatible

jQuery.Deferred objects have been updated for compatibility with Promises/A+ and ES2015 Promises, verified with the Promises/A+ Compliance Test Suite. This meant we needed some major changes to the .then() method:

  • An exception thrown in a .then() callback now becomes a rejection value. Previously, exceptions bubbled all the way up, aborting callback execution and irreversibly locking both the parent and child Deferred objects.
  • The resolution state of a Deferred created by .then() is now controlled by its callbacks—exceptions become rejection values and non-thenable returns become fulfillment values. Previously, returns from rejection handlers became rejection values.
  • Callbacks are always invoked asynchronously. Previously, they would be called immediately upon binding or resolution, whichever came last.
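The third change follows a Promises/A+ rule that native ES2015 promises already obey, so a plain-Promise sketch makes it visible without any jQuery at all:

```javascript
var log = [];

Promise.resolve("value").then(function(v) {
    // this callback runs asynchronously, after the current script finishes
    log.push("callback: " + v);
});

log.push("synchronous code");

// By the time the callback fires, the synchronous line above has run:
setTimeout(function() {
    console.log(log); // [ 'synchronous code', 'callback: value' ]
}, 0);
```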

Consider the following, in which a parent Deferred is rejected and a child callback generates an exception:

var parent = jQuery.Deferred();
var child = parent.then( null, function() {
  return "bar";
} );
var callback = function( state ) {
  return function( value ) {
    console.log( state, value );
    throw new Error( "baz" );
  };
};
var grandchildren = [
  child.then( callback( "fulfilled" ), callback( "rejected" ) ),
  child.then( callback( "fulfilled" ), callback( "rejected" ) )
];
parent.reject( "foo" );
console.log( "parent resolved" );

As of jQuery 3.0, this will log “parent resolved” before invoking any callback, each child callback will then log “fulfilled bar”, and the grandchildren will be rejected with Error “baz”. In previous versions, this would log “rejected bar” (the child Deferred having been rejected instead of fulfilled) once and then immediately terminate with uncaught Error “baz” (“parent resolved” not being logged and the grandchildren remaining unresolved).

While caught exceptions had advantages for in-browser debugging, it is far more declarative (i.e. explicit) to handle them with rejection callbacks. Keep in mind that this places the responsibility on you to always add at least one rejection callback when working with promises. Otherwise, any errors will go unnoticed.
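Since jQuery.Deferred now follows Promises/A+, the same semantics can be sketched with native promises (no jQuery required): an exception thrown in a fulfillment callback becomes a rejection value, and only a rejection callback will ever see it.

```javascript
// An exception thrown in a fulfillment callback becomes a rejection value;
// without the rejection handler below, the error would go unnoticed.
Promise.resolve( "foo" )
  .then( function() { throw new Error( "baz" ); } )
  .then( null, function( err ) {
    console.log( "handled:", err.message ); // handled: baz
  } );
```
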

Legacy behavior can be recovered by replacing use of .then() with the now-deprecated .pipe() method (which has an identical signature).

We’ve also built a plugin to help in debugging Promises/A+ compatible Deferreds. If you are not seeing enough information about an error on the console to determine its source, check out the jQuery Deferred Reporter Plugin.

jQuery.when has also been updated to accept any thenable object, which includes native Promise objects.
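A "thenable" here means the same duck-typed notion native promises use: any object with a .then method. A minimal sketch using native Promise.resolve (the object below is purely illustrative) shows how such an object's state gets adopted:

```javascript
// A plain object counts as a "thenable" as long as it has a .then method.
var thenable = {
  then: function( resolve ) { resolve( 42 ); }
};

// Native promises (and now jQuery.when) adopt the thenable's eventual value.
Promise.resolve( thenable ).then( function( value ) {
  console.log( value ); // 42
} );
```
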

Added .catch() to Deferreds

The catch() method was added to promise objects as an alias for .then(null, fn).
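Shown here with native promises, which behave the same way, the two forms handle a rejection identically:

```javascript
// .catch( fn ) is shorthand for .then( null, fn ):
// both chains below log the same rejection.
Promise.reject( new Error( "boom" ) )
  .catch( function( err ) { console.log( "A:", err.message ); } );

Promise.reject( new Error( "boom" ) )
  .then( null, function( err ) { console.log( "B:", err.message ); } );
```
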

Error cases don’t silently fail

Perhaps in a profound moment you’ve wondered, “What is the offset of a window?” Then you probably realized that is a crazy question – how can a window even have an offset?

In the past, jQuery has sometimes tried to make cases like this return something rather than having them throw errors. In this particular case of asking for the offset of a window, the answer up to now has been { top: 0, left: 0 }. With jQuery 3.0, such cases will throw errors so that crazy requests aren’t silently ignored. Please try out this release and see if there is any code out there depending on jQuery to mask problems with invalid inputs.

Removed deprecated event aliases

.load, .unload, and .error, deprecated since jQuery 1.8, are no more. Use .on() to register listeners.

Animations now use requestAnimationFrame

On platforms that support the requestAnimationFrame API, which is pretty much everywhere but IE9 and Android<4.4, jQuery will now use that API when performing animations. This should result in animations that are smoother and use less CPU time – and save battery as well on mobile devices.

jQuery tried using requestAnimationFrame a few years back but there were serious compatibility issues with existing code so we had to back it out. We think we’ve beaten most of those issues by suspending animations while a browser tab is out of view. Still, any code that depends on animations to always run in nearly real-time is making an unrealistic assumption.
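jQuery’s internal scheduling isn’t reproduced here, but the general feature-detection pattern looks like this sketch (the name `schedule` is illustrative; the 13ms fallback mirrors jQuery’s classic timer interval):

```javascript
// Use requestAnimationFrame where the platform provides it,
// otherwise fall back to a plain timer.
var schedule = ( typeof requestAnimationFrame === "function" )
  ? requestAnimationFrame
  : function( fn ) { return setTimeout( fn, 13 ); };

schedule( function() {
  console.log( "runs on the next frame (or ~13ms later)" );
} );
```
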

Massive speedups for some jQuery custom selectors

Thanks to some detective work by Paul Irish at Google, we identified some cases where we could skip a bunch of extra work when custom selectors like :visible are used many times in the same document. That particular case is up to 17 times faster now!

Keep in mind that even with this improvement, selectors like :visible and :hidden can be expensive because they depend on the browser to determine whether elements are actually displaying on the page. That may require, in the worst case, a complete recalculation of CSS styles and page layout! While we don’t discourage their use in most cases, we recommend testing your pages to determine if these selectors are causing performance issues.

This change actually made it into 1.12/2.2, but we wanted to reiterate it for jQuery 3.0.

As mentioned above, the Upgrade Guide is now available for anyone ready to try out this release. Aside from being helpful in upgrading, it also lists more of the notable changes.


Ping An becomes first Chinese member of R3 blockchain consortium

LONDON (Reuters) – China’s second-biggest insurance company, Ping An Group, has become the first Chinese member of a global consortium led by fintech firm R3 that is working on ways blockchain technology can be used in financial markets, the companies said on Tuesday.
