IBM offers advice on how to secure blockchain in the cloud

Cloud providers hosting blockchain secure-transaction technology should take additional steps to protect their records, IBM says.

IBM’s new framework for securely operating blockchain networks, released Friday, recommends that network operators make it easy to audit their operating environments and use optimized accelerators for hashing — the generation of numbers from strings of text — and the creation of digital signatures to pump up CPU performance. 

Along with the security guidelines, IBM announced new cloud-based blockchain services designed to meet existing regulatory and security requirements. The company has worked with security experts to create cloud services for “tamper-resistant” blockchain networks, it said.



Microsoft Flow — An IFTTT Alternative — Aims To Connect Your Online Apps

An anonymous user writes: Microsoft has unveiled a new product called Microsoft Flow, which is designed to better connect diverse services so that you could, if you were so inclined, put all your tweets into a spreadsheet or get an SMS alert when you receive an email. That example may be a solution in search of a problem, but there are other more useful possibilities. Flow could be set up so that any email from your boss triggers an SMS notification to your phone, for example. Or you could make sure any updated work documents get deposited in your team’s SharePoint. To be sure, Microsoft is not first to this app-integration party. Many people already use If This Then That (IFTTT) or Zapier, which claims more than 500 app integrations, to knit their services together. Some IFTTT users must be breathing a sigh of relief.


Read more of this story at Slashdot.


Android Studio 2.0 Is Google’s New, Improved Development Suite

Posted by Dave Man in Mobile

In 2013, Google released the first version of its Integrated Development Environment (IDE), Android Studio. It aimed to provide an all-in-one development experience that was faster and smoother than the de facto standard for Android coders at the time, Eclipse.
After three years, Android developers got the first major update to the official IDE earlier this month with the release of Android Studio 2.0, followed quickly by a 2.1 update. Built-in tools include the obvious code editor, code analysis functionality and a fully configurable emulator.

Android Studio 2.0 offers an improved device emulator

Google says Android Studio 2.1 is the fastest way to build apps with higher quality and better performance for all Android devices: phones and tablets, Android Auto, Android Wear, and Android TV.  The update comes with new features such as Instant Run and Cloud Test Lab integration.
Instant Run allows developers to code and run their apps continuously to see the changes they make during programming. The Instant Run button analyzes the changes and determines how it can deploy the new code in the fastest way, updating the running app in the emulator.
The Android Emulator in Android Studio 2.1 is about three times faster than the old version, and Android Debug Bridge (ADB) communicates with the emulator 10 times faster than with an attached physical device. The new emulator also comes with Google Play Services built in and new management features for calls, battery, network, and more.
Cloud Test Lab Integration allows developers to code and test-run an app across a wide range of Android devices for compatibility purposes. The Cloud Test Lab itself isn’t free, but it alleviates the need to run many different emulators locally.
App Indexing Code Generation helps improve the visibility of app content in Google Search by adding auto-generated URL links into the app code. Developers can then test and validate the app’s indexing code in Android Studio.
GPU Debugger Preview helps diagnose and debug graphics-rendering problems. Developers can step through an OpenGL ES game or app frame by frame to pin down issues.
Android Studio 2.1 is available on Windows, Mac and Linux. Developers already using Android Studio can get the newest update in the program menu or download from the Android Studio site.

Multi-window view in Android N Preview allows two apps to share the screen

What’s coming next?

Google’s development team released Android Studio 2.1 on April 26. This first update focuses on fixing bugs and improving Instant Run with new tweaks to increase performance.
In addition, Android Studio 2.1 supports Android N Developer Preview, a pre-release edition of the next Android version. Android N is still under development but available to beta testers as the Android N Preview until the final release, projected for Q3 2016.
Once released to the public, Android N will introduce new features such as multi-window support, notification enhancements, a mobile data saver, Android TV recording, and network security.
Have questions about app development for Android or other mobile platforms? Email us or call 603.881.9200.


DeepMind moves to TensorFlow

Posted by Koray Kavukcuoglu, Research Scientist, Google DeepMind

At DeepMind, we conduct state-of-the-art research on a wide range of algorithms, from deep learning and reinforcement learning to systems neuroscience, towards the goal of building Artificial General Intelligence. A key factor in facilitating rapid progress is the software environment used for research. For nearly four years, the open source Torch7 machine learning library has served as our primary research platform, combining excellent flexibility with very fast runtime execution, enabling rapid prototyping. Our team has been proud to contribute to the open source project in capacities ranging from occasional bug fixes to being core maintainers of several crucial components.

With Google’s recent open source release of TensorFlow, we initiated a project to test its suitability for our research environment. Over the last six months, we have re-implemented more than a dozen different projects in TensorFlow to develop a deeper understanding of its potential use cases and the tradeoffs for research. Today we are excited to announce that DeepMind will start using TensorFlow for all our future research. We believe that TensorFlow will enable us to execute our ambitious research goals at much larger scale and an even faster pace, providing us with a unique opportunity to further accelerate our research programme.

As one of the core contributors of Torch7, I have had the pleasure of working closely with an excellent community of developers and researchers, and it has been amazing to see all the great work that has been built on top of the platform and the impact this has had on the field. Torch7 is currently being used by Facebook, Twitter, and many start-ups and academic labs as well as DeepMind, and I’m proud of the significant contribution it has made to a large community in both research and industry. Our transition to TensorFlow represents a new chapter, and I feel very excited about the prospect of DeepMind contributing heavily to another great open source machine learning platform that everyone can use to advance the state-of-the-art.


Google rolls out “If This Then That” support for its $200 OnHub router

Google’s OnHub router just got a major new feature: IFTTT support. The demoed features let you do things like lock your doors when your device disconnects from the router or send an e-mail when someone connects to your wireless network. There are a few example recipes on this IFTTT page, or you can make your own using any of the channels supported on IFTTT.

IFTTT (If This Then That) is a service that lets you connect apps to other apps or connect apps to smart home devices. Developers for apps and services can build “If” triggers and “Do” actions that plug into the site. Users can make a “recipe” by combining these triggers and actions into a useful program, using the format “If [something happens], do [this action].”

Say you want to automatically tweet out a link every time an article on a website is posted. You can grab the RSS trigger function, so now you have “if a new item on this RSS feed appears, then [do this action].” Then you can combine it with the Twitter action and make “if a new item on this RSS feed appears, then tweet it out.” Each trigger and action has its own configuration options, so you can do necessary plumbing like giving the RSS trigger the exact RSS feed it needs and giving the Twitter bot your login credentials so it can post from your account.

All of the triggers and actions run on the IFTTT service, basically making it an app store for actions and triggers. For the OnHub, Google built one trigger for a device connecting, one trigger for a device disconnecting, and an action to “prioritize” a certain device, which just gives it a stronger Wi-Fi connection. You can combine these with the over 300 other supported apps and devices on IFTTT, so there’s probably something cool you can teach the router to do.

The OnHub was released as a $200 Wi-Fi router that didn’t have the performance or capabilities to match other $200 routers. Its only real differentiators were the funky design, easy setup, and the promise of future updates. It’s also packed with smart home antennas that Google still doesn’t really talk about. There’s support for Bluetooth 4.1, Google’s “Thread” network protocol (based on IEEE 802.15.4), and the “Weave” communication standard. With the extra antennas and the label on the bottom of the router declaring “Built for Google On,” many believed this was the start of Google’s smart home ecosystem.

The speed of these updates has been slower than anyone expected. There have been a few tweaks to how the device works and some security patches, but most of the big stuff is still missing. Eight months later, the OnHub still doesn’t support IPv6, and the USB port still can’t be used for network storage. Bluetooth, Thread, and Weave support are all still dormant.

Now with the IFTTT update, the OnHub finally supports some smart home features—but it’s using someone else’s ecosystem. IFTTT is now the gateway for controlling other things in your house via the OnHub rather than using some kind of Google communication standard like we expected. This is all still happening over Wi-Fi, too, so the OnHub is still not using any of the smart-home antennas it shipped with. As the only router with IFTTT support, though, this move at least gives the OnHub a promising feature set for smart home users.

Listing image by Ron Amadeo


Challenges of Deployment to ECS

Article note: ECS is a great reason to learn the AWS API. There is a lot of power hidden in there.

Amazon’s EC2 Container Service (ECS) promises a production-ready container management service, but setting it up and running apps on it is not without challenges.

I have been working seriously with ECS for close to a year now, building open-source infrastructure automation tools around all things AWS. While building these tools, running my own production services on ECS, and supporting users in doing the same, I am maintaining a running list of the challenges ECS and containers pose.

Through this lens, I can confidently say that ECS is living up to the promise. It offers flexible options for running apps as containers while offering great reliability for service uptime and formation management. But it is not trivial to get it all working, and it requires a new way of thinking about our infrastructure and how we build our applications.

ECS Primer

ECS is a set of APIs that turns EC2 instances into a compute cluster for container management.

First, EC2 instances must call the RegisterContainerInstance API to signal that they are ready to run containers. Next, we use the RegisterTaskDefinition API to define the tasks — essentially settings like an image, command, and memory for docker run — that we plan to schedule in the cluster. Finally, we use the RunTask API to run a one-off container, and the CreateService API to run a long-running container.
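As a sketch, a RegisterTaskDefinition request body can be assembled as a plain dictionary. The "web" family, httpd image, and command below are illustrative, not from the article; the field names mirror the ECS API's containerDefinitions shape.

```python
# Sketch (assumed names): build the request body for ECS's
# RegisterTaskDefinition API for a single-container task.

def make_task_definition(family, image, command, memory_mb):
    return {
        "family": family,
        "containerDefinitions": [{
            "name": family,
            "image": image,
            "command": command,
            "memory": memory_mb,  # hard limit, like docker run --memory
            "essential": True,    # the task stops if this container stops
        }],
    }

# The same dict could be passed to a real ECS client, e.g.
# boto3.client("ecs").register_task_definition(**task_def)
task_def = make_task_definition("web", "httpd:2.4", ["httpd-foreground"], 512)
```

The schedulable unit ECS works with is exactly this kind of declarative description, not a running process.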

All of this works without requiring that we install or operate our own container scheduler system like Mesos, Kubernetes, Docker Swarm or CoreOS Fleet.

With all this flexibility we can now map a development workflow onto ECS:

  • Describe our app in docker-compose.yml as a set of Docker images and commands and deploy this onto AWS

  • Scale each process type independently (web=2 x 1024 MB, worker=10 x 512 MB)

  • Run one-off admin containers (rake db:migrate)
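The starting point for that workflow might look like the following hypothetical docker-compose.yml sketch (the worker image and command are illustrative, not from the article):

```yaml
web:
  image: httpd:2.4
  ports:
    - "80:80"
worker:
  image: myorg/worker:latest   # hypothetical image name
  command: rake jobs:work
```

Each top-level key becomes a process type that can be scaled independently, matching the web/worker example above.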

1. Cluster Setup and Automation

The first huge challenge is that ECS is nothing but an API on its own.

We need to bring and configure our own instances, load balancers, logging, monitoring and Docker registry. We probably also want some tools to build Docker images, create Task Definitions, and to create and update Tasks and Services.

While ECS has a really slick “first run wizard” and CLI, these tools are probably not enough to manage the entire lifecycle of a serious production application.

The good news is that there are open source projects to help with this. My team is working full time on Convox. The infrastructure team at Remind is building Empire. Both automate the setup of ECS and make application deployment and maintenance simple.

The other good news is that all the pieces we need are available as reliable and affordable services on AWS and beyond. We don’t have to operate a registry; we can use AWS EC2 Container Registry (ECR) or Docker Hub. We don’t have to build logging infrastructure when we can integrate with CloudWatch Logs or Papertrail.

Now the challenge we face is picking and configuring the best infrastructure components for a secure and reliable cluster. I recommend:

  • VPC with 3 private subnets and NAT gateways in 3 availability zones

  • An EC2 Auto Scaling Group with at least 3 instances

  • Amazon Linux ECS Optimized AMIs

  • EC2 Container Registry

  • Elastic Load Balancers

  • CloudWatch Logs

We also want to automate the setup and maintenance of these components. I recommend:

  • A CloudFormation stack

  • Resources for the above infrastructure

  • A parameter for the AMI to safely apply OS security updates

  • A parameter for the Instance Type and Instance Count to scale the cluster

We also need to pick and configure the best infrastructure components for a single app or service running on ECS:

  • EC2 Container Registry for app images

  • Elastic Load Balancer for serving traffic and performing health checks on the app web server

  • A CloudWatch Log Group for the app container logs

  • ECS TaskDefinition describing our app commands

  • ECS Service configuration describing how many tasks (containers) we want to run

And this should also be automated with a CloudFormation stack.

As you can see, there is already a huge challenge in what infrastructure is needed and automating its setup before we can run a container or two.

2. Distributed State Machine

ECS, like all container schedulers, is a challenge in distributed systems.

Say we want to deploy a web service as four containers running off the “httpd” image:

  web:
    image: httpd
    ports:
      - "80:80"

This is simple to ask for but deceptively hard to actually make happen.

To grant our wish, ECS needs to execute this request across at least four instances, but every instance is a black box over the network and poses known challenges.

Sometimes the instance can’t even attempt to start the container, for example because it has failed or is unreachable over the network.

Sometimes it can try to start a container but not succeed:

  • Errors pulling an image due to bad registry auth

  • Errors starting a new container due to a full disk volume

  • Application errors cause the container to crash immediately

And it is guaranteed that an instance won’t run the container forever due to:

  • Hardware failures

  • Scheduled AMI updates

To handle all these challenges ECS needs to very carefully tell the instances what to do in the first place, collect constant feedback from the hosts and from the network, and very carefully tell the instances to do new things to route around failures.

This all represents a tough fundamental problem in computer science: distributed state machines. We defined a desired state — 4 web servers — but a coordination service needs to:

  • Maintain a consistent view of containers running in the cluster and the remaining capacity on every instance

  • Turn the desired state into individual operations, i.e. start 1 web server with 512 MB of memory

  • Ask instances with capacity to execute these operations over the network

  • Retry operations if errors are observed

  • Retry operations if no success is observed

  • Constantly monitor for unexpected state changes and retry operations

  • Route around any failures like network partitions in the coordination service layer
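The core of that loop can be sketched as a toy reconciliation function, assuming a scheduler that only tracks container counts (real ECS also tracks per-instance capacity, retries, and failure routing):

```python
# Toy reconciliation step: compare the desired count against observed
# running containers and emit the operations a scheduler would execute.

def reconcile(desired_count, running_containers):
    """Return the operations needed to converge on the desired state."""
    ops = []
    observed = len(running_containers)
    if observed < desired_count:
        # e.g. start 1 web server with 512 MB of memory, per missing task
        ops += [("start", {"memory": 512})] * (desired_count - observed)
    elif observed > desired_count:
        # stop the extras (e.g. duplicates left by a partitioned agent)
        ops += [("stop", c) for c in running_containers[desired_count:]]
    return ops
```

The hard part is everything around this function: keeping the observed list consistent, executing the operations over an unreliable network, and re-running the loop forever.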

This is tremendously hard to do right.

The best solutions need a highly available, consistent datastore, which is typically built on top of complex consensus algorithms. If you’re interested in how these distributed state machines generally work, I recommend researching the Paxos and Raft consensus algorithms.

Werner Vogels has shared more about the sophisticated engineering AWS uses to pull this off in ECS. The brain behind ECS needs to always be available, always know the desired state, constantly observe the actual state, and perfectly execute operations to converge or reconcile between the two.

Thank goodness we don’t have to write and operate this system ourselves. We just want to run a few web servers!

Still, when deploying to ECS we can expect to occasionally see side effects of the distributed state machine. For example, we may see extra web processes if the ECS agent on one of our instances loses touch with the coordination service.

3. Application Health Checks and Feedback

Consider one more evil scenario… Our app is built to not serve web traffic until it connects to its database, and the database is offline or the app has the wrong password. So our web container is running but not able to serve traffic. What should ECS do?

For this, ECS integrates deeply with one other service as a watchdog: ELB. ELB is configured with rules that say whether a container is healthy with respect to actually serving HTTP traffic. If this health check doesn’t pass, ECS has to stop the container and start a new one somewhere else.

So we now find ourselves bound by strict rules about how our applications have to respond to HTTP requests. The health check is very configurable, but in general our app needs to boot cleanly in 30 seconds and always return a response on /check or else ECS will kill the container and try again somewhere else.

A bad deploy that doesn’t pass health checks can cause trouble.
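A minimal sketch of the /check logic described above; the db_probe callable is a hypothetical stand-in for whatever dependency check the app actually needs:

```python
# Sketch: the logic behind a /check health endpoint. Report healthy
# only when the database probe succeeds; a 503 fails the ELB health
# check so ECS will replace the container rather than route to it.

def check_health(db_probe):
    try:
        db_probe()  # e.g. run SELECT 1 against the database
        return 200, "ok"
    except Exception as exc:
        return 503, "unhealthy: %s" % exc
```

Wiring this to the evil scenario above: a container with the wrong database password runs, but never reports healthy, so ECS keeps cycling it instead of sending it traffic.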

4. Rolling Deploys

A side effect of the distributed state machine is that apps are now always deployed in a rolling fashion. On a service update ECS follows a “make one, break one” pattern, where it only stops an old task after it successfully started a new one that passes the ELB health check.

This is extremely powerful. The system always preserves the capacity you asked for. It carefully orchestrates draining existing requests in the ELB before removing a backend.

But this poses challenges to how we write the apps we are deploying to ECS. It is guaranteed that occasionally two versions of our code will be running at the same time.

Our software needs to be okay running as two different versions at the same time. Our clients need to be okay talking to two different API versions in the same session. Two different releases of our software need to be okay talking to the database before, during, and after a schema migration.

5. Instance Management

Even though we are now running containerized workloads, ECS does not hide the fact that there is a cluster of instances behind our containers. Traditional instance management techniques still apply.

Our cluster should be in an AutoScaling Group (ASG) to preserve instance capacity. We still need additional monitoring, alerting and automation around instance failures that EC2 doesn’t catch.

We also need to be able to apply AMI updates gracefully, so having CloudFormation orchestrate booting a new AMI successfully before terminating an old instance is important (make one, break one for our instances).

I’m observing that great instance management is even more important now than before, as heavy container workloads can be more demanding on an instance and exercise some fairly new corners of the kernel, network stack, and filesystem.


6. Logs and Events

Container logging is a challenge all to itself.

Both Docker and ECS have a well-understood historical gap in this space, having originally launched with little built-in tooling to help with application logs. For apps with real logging demands, it is often left as an exercise for the app developer to bake log-forwarding logic into the container itself.

Thankfully this is all improving thanks to the recent, excellent Docker logging drivers and the option in ECS to pick and configure one.

Still, the dynamic and ephemeral nature of containers causes challenges. Because our containers stop and restart on new instances more frequently, we probably want to inject task ids, container ids, and instance ids into the log streams, and the Docker logging drivers don’t really help with this.

So it might still be our responsibility to include more context in our application logging.
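One approach is to wrap each raw log line in a structured record. The id values below are illustrative; a real forwarder would obtain them from the ECS agent's introspection endpoint on the instance.

```python
import json

# Sketch: inject task, container, and instance context into each log
# line before forwarding it to a service like CloudWatch Logs.

def with_context(line, instance_id, task_id, container_id):
    return json.dumps({
        "instance": instance_id,
        "task": task_id,
        "container": container_id,
        "message": line.rstrip("\n"),
    })

record = with_context("GET /check 200\n", "i-0abc123", "task-42", "web-1")
```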

Finally we could almost certainly use even more context from all of ECS injected into our application logs to make sense of everything. Knowing that a container restarted onto a new host due to a failure is valuable to see in the app logs. The start and absolute end of our rolling deploy would be nice to see too.

None of this comes out of the box on ECS.

The best solution is to run an additional agent as a container on every instance:

  • Monitor all the other containers

  • Subscribe to their logs

  • Add more context like instance and process type

  • Forward logs to CloudWatch

As well as to run another monitor container somewhere that:

  • Periodically describes ECS services

  • Monitors deployment status

  • Synthesizes these into coherent events like “deployment started” and “deployment completed”

  • Sends notifications
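The event-synthesis step can be sketched as a pure function over the "deployments" list that the DescribeServices API returns (status, desiredCount, and runningCount per deployment); the heuristics here are an assumption, not ECS's own definitions:

```python
# Sketch: turn the deployments list from ECS DescribeServices into a
# coherent event. Two deployments (a new PRIMARY plus a draining
# ACTIVE) mean a rollout is in flight; a single PRIMARY running at
# full count means the rollout finished.

def deployment_event(deployments):
    if len(deployments) > 1:
        return "deployment started"
    primary = [d for d in deployments if d["status"] == "PRIMARY"]
    if primary and primary[0]["runningCount"] == primary[0]["desiredCount"]:
        return "deployment completed"
    return "deployment in progress"
```

A monitor container would poll DescribeServices, run each service's deployments through a function like this, and notify only on transitions.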

Without effortless access to application logs and ECS events it can be extremely challenging to understand what is going on inside the cluster during deployments and during problems.

7. Mental Challenges

All of this adds up to a complex system that is often hard to understand, reason about, and debug.

There’s no way to predict exactly how a deployment will be carried out. Which instances the 4 web containers land on and how long it takes to get there cannot be predicted, only observed.

You can easily make 2 or more UpdateService API calls in rapid succession. If you start with 1 web process, ask for 10, then quickly change your mind and ask for 5, what are the expectations while ECS carries out these deployments?

It’s actually quite easy to get the system in a state where it will never converge. Ask for 4 web processes but only run 2 instances in your cluster, and watch ECS quietly retry forever.

And the actual formation of our containers is constantly changing from under us due to our application code and the underlying hardware health.

ECS, ELB, ASG and every process type of all our apps feed back on each other and somehow need to end up in a steady state.


In many ways ECS is significantly more challenging than EC2/AMI-based deployments because it’s an entirely new layer of complexity on top of EC2.

This always leaves me with a nagging question…

Is ECS worth it?

I ask the same question to you… Have you experienced these or other challenges on ECS? Have you solved them and happily gotten back to deploying code? Or have you pulled your hair out and second-guessed the tools and the complexity?

Thankfully I’ve had enough success overcoming these challenges on ECS that I’m not looking back.

Deployments are faster and safer than ever before. All the complexities of the distributed state machine represent the most sophisticated automation around running apps that we’ve ever had. This is an extremely sophisticated monitoring system, constantly working to verify that things are running so we humans don’t have to.

Finally, it’s still early days for all of these tools. ECS started out rather spartan; then Amazon released the ELB integration, announced ECR, and recently added more deployment configuration.

I fully expect Amazon will continue to chip away at these hard infrastructure parts and Docker will continue to improve the container runtime.

We will get to focus the vast majority of our time on building and deploying new versions of our apps, and trust that the container services will keep everything running.

I work full time on open source infrastructure automation at Convox (website, GitHub).

Please send feedback and/or questions on Twitter to @nzoschke, or by email.

Thanks to Mackenzie Burnett, Eric Holmes, Calvin French-Owen and Malia Powers among others for feedback.

Discuss this on Hacker News.


V8: ES6, ES7, and beyond

The V8 team places great importance on the evolution of JavaScript into an increasingly expressive and well-defined language that makes writing fast, safe, and correct web applications easy. In June 2015, the ES6 specification was ratified by the TC39 standards committee, making it the largest single update to the JavaScript language. New features include classes, arrow functions, promises, iterators / generators, proxies, well-known symbols, and additional syntactic sugar. TC39 has also increased the cadence of new specifications and released the candidate draft for ES7 in February 2016, to be ratified this summer. While not as expansive as the ES6 update due to the shorter release cycle, ES7 notably introduces the exponentiation operator and Array.prototype.includes().

Today we’ve reached an important milestone: V8 supports ES6 and ES7. You can use the new language features today in Chrome Canary, and they will ship by default in the M52 release of Chromium.

Given the nature of an evolving spec, the differences between various types of conformance tests, and the complexity of maintaining web compatibility, it can be difficult to determine when a certain version of ECMAScript is considered fully supported by a JavaScript engine. Read on for why spec support is more nuanced than version numbers, why proper tail calls are still under discussion, and what caveats remain at play.

An evolving spec

When TC39 decided to publish more frequent updates to the JavaScript specification, the most up-to-date version of the language became the master, draft version. Although versions of the ECMAScript spec are still produced yearly and ratified, V8 implements a combination of the most recently ratified version (e.g. ES6), certain features which are close enough to standardization that they are safe to implement (e.g. the exponentiation operator and Array.prototype.includes() from the ES7 candidate draft), and a collection of bug fixes and web compatibility amendments from more recent drafts. Part of the rationale for such an approach is that language implementations in browsers should match the specification, even if it’s the specification that needs to be updated. In fact, the process of implementing a ratified version of the spec often uncovers many of the fixes and clarifications that comprise the next version of the spec.

Currently shipping parts of the evolving ECMAScript specification

For example, when implementing the ES6 RegExp sticky flag, the V8 team discovered that the semantics of the ES6 spec broke many existing sites (including all sites using versions 2.x.x of the popular XRegExp library on npm). Since compatibility is a cornerstone of the web, engineers from the V8 and Safari JavaScriptCore teams proposed an amendment to the RegExp specification to fix the breakage, which was agreed upon by TC39. The amendment won’t appear in a ratified version until ES8, but it’s still a part of the ECMAScript language and we’ve implemented it in order to ship the RegExp sticky flag.

The continual refinement of the language specification and the fact that each version (including the yet-to-be-ratified draft) replaces, amends, and clarifies previous versions makes it tricky to understand the complexities behind ES6 and ES7 support. While it’s impossible to state succinctly, it’s perhaps most accurate to say that V8 supports compliance with the “continually maintained draft future ECMAScript standard”!

Measuring conformance

In an attempt to make sense of this specification complexity, there are a variety of ways to measure JavaScript engine compatibility with the ECMAScript standard. The V8 team, as well as other browser vendors, use the test262 test suite as the gold standard of conformance to the continually maintained draft future ECMAScript standard. This test suite is continually updated to match the spec and it provides 16,000 discrete functional tests for all the features and edge cases which make up a compatible, compliant implementation of JavaScript. Currently V8 passes approximately 98% of test262, and the remaining 2% are a handful of edge cases and future ES features not yet ready to be shipped.

Since it’s difficult to skim the enormous number of test262 tests, other conformance tests exist, such as the Kangax compatibility table. Kangax makes it easy to skim to see whether a particular feature (like arrow functions) has been implemented in a given engine, but doesn’t test all the conformance edge cases that test262 does. Currently, Chrome Canary scores a 98% on the Kangax table for ES6 and 100% on the sections of Kangax corresponding to ES7 (e.g. the sections labelled “2016 features” and “2016 misc” under the ESnext tab).

The remaining 2% of the Kangax ES6 table tests proper tail calls, a feature which has been implemented in V8, but deliberately turned off in Chrome Canary due to outstanding developer experience concerns detailed below. With the “Experimental JavaScript features” flag enabled, which forces this feature on, Canary scores 100% on the entirety of the Kangax table for ES6.

Proper Tail Calls

Proper tail calls have been implemented but not yet shipped given that a change to the feature is currently under discussion at TC39. ES6 specifies that strict mode function calls in tail position should never cause a stack overflow. While this is a useful guarantee for certain programming patterns, the current semantics have two problems. First, since the tail call elimination is implicit, it can be difficult for programmers to identify which functions are actually in tail call position. This means that developers may not discover misplaced attempted tail calls in their programs until they overflow the stack. Second, implementing proper tail calls requires eliding tail call stack frames from the stack, which loses information about execution flow. This in turn has two consequences:

  1. It makes it more difficult to understand during debugging how execution arrived at a certain point since the stack contains discontinuities and
  2. Error.prototype.stack contains less information about execution flow which may break telemetry software that collects and analyzes client-side errors.

Implementing a shadow stack can improve the readability of call stacks, but the V8 and DevTools teams believe that debugging is easiest, most reliable, and most accurate when the stack displayed during debugging is completely deterministic and always matches the true state of the actual virtual machine stack. Moreover, a shadow stack is too expensive performance-wise to turn on all the time.

For these reasons, the V8 team, along with TC39 committee members from Mozilla and Microsoft, strongly support denoting proper tail calls by special syntax. There is a pending TC39 proposal called syntactic tail calls to specify this behavior. We have implemented and staged proper tail calls as specified in ES6 and started implementing syntactic tail calls as specified in the new proposal. The V8 team plans to resolve the issue at the next TC39 meeting before shipping implicit proper tail calls or syntactic tail calls by default. You can test out each version in the meantime by using the V8 flags --harmony-tailcalls and --harmony-explicit-tailcalls.


One of the most exciting promises of ES6 is support for JavaScript modules to organize and separate different parts of an application into namespaces. ES6 specifies import and export declarations for modules, but not how modules are loaded into a JavaScript program. In the browser, loading behavior was recently specified by the new module script tag. Although additional standardization work is needed to specify advanced dynamic module-loading APIs, Chromium support for module script tags is already in development. You can track implementation work on the launch bug and read more about experimental loader API ideas in the whatwg/loader repository.

ESnext and beyond

In the future, developers can expect ECMAScript updates to come in smaller, more frequent updates with shorter implementation cycles. The V8 team is already working to bring upcoming features such as async / await keywords, Object.values() / Object.entries(), String.prototype.padStart() / String.prototype.padEnd() and RegExp lookbehind to the runtime. Check back for more updates on our ESnext implementation progress and performance optimizations for existing ES6 and ES7 features.

We strive to continue evolving JavaScript and strike the right balance of implementing new features early, ensuring compatibility and stability of the existing web, and providing TC39 implementation feedback around design concerns. We look forward to seeing the incredible experiences developers will build with these new features.

— Posted by the V8 team, ECMAScript Enthusiasts

