Dimple: An object-oriented API for business analytics powered by D3

Simply Powerful

The aim of dimple is to open up the power and flexibility of d3 to analysts. It offers a gentle learning curve and minimal code to achieve something productive, while also exposing the underlying d3 objects so you can pick them up and run with them to create some really cool stuff.


See More Examples

Getting Started

Before you can do anything, you must link d3. Simply add the following to your html:
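The tag itself didn't survive this copy, but it is just the usual d3 script include, something along these lines (the path and version are illustrative, not taken from the original):

```
<!-- path and version illustrative -->
<script src="http://d3js.org/d3.v3.min.js"></script>
```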

That’s the organ grinder taken care of; next you need the monkey. Add dimple as follows:
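Again, the snippet is missing from this copy; it is another script include, roughly like this (path and version illustrative):

```
<!-- path and version illustrative -->
<script src="http://dimplejs.org/dist/dimple.latest.min.js"></script>
```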

That’s it, you’re ready to get creative! If you don’t know where to start, why not create a blank text document, drop the following in, and save it as an html file.


    var svg = dimple.newSvg("body", 800, 600);
    var data = [
      { "Word":"Hello", "Awesomeness":2000 },
      { "Word":"World", "Awesomeness":3000 }
    ];
    var chart = new dimple.chart(svg, data);
    chart.addCategoryAxis("x", "Word");
    chart.addMeasureAxis("y", "Awesomeness");
    chart.addSeries(null, dimple.plot.bar);
    chart.draw();

Congratulations, you are now the proud owner of a dimple bar chart! Start playing and see where you end up. You might get some extra inspiration from the examples section.

What’s up Doc(umentation)!

To understand how to use a particular dimple object or method, please see the Full API Documentation.


John Kiernander


Jose Jimenez
Ken Ip
Robert Stettner
Berk Birand
Stanislav Frantsuzov
Sajith Sasidharan
Robert Paskowitz
Guilherme Simões
Alex Kessaris
Scott Stafford
Neil Ahrendt
Stephen James
Flávio Juvenal
David Zhao
Han Xu
Dan Le
Keith Buchanan

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/alJjlH8-O8o/

Original article

A Decade of Container Control at Google

March 22, 2016

Timothy Prickett Morgan


Search engine giant Google did not invent software containers for operating systems, but it has helped perfect them for the Linux operating system and brought the orchestration of billions of containers across perhaps millions of servers to something of an art form. It has much to teach, and is still humble enough to know that it has much to learn.

That, perhaps more than anything else, was one of the reasons why Google decided to open source a container management system called Kubernetes nearly two years ago. In the past, Google has been content to deliver papers on how some of its more advanced technology works, showing the way for the rest of the industry when it comes to tackling large-scale analytics, storage, or compute problems. But this method backfires sometimes, as it has with the MapReduce method of computation that inspired rival Yahoo to create Hadoop and spawned an entire industry. Now, it is Google that is incompatible with the rest of the world rather than the world following a standard created by Google and bequeathed to the industry.

So Kubernetes, we think, is an attempt by Google to share techniques and to foster the development of a container control system that could, in theory, maybe someday far off in the future, replace the internal Borg system that has been at the heart of its clusters for many years and that, as its name suggests, has been pulling in different technologies (like the Omega scheduler created several years ago). This time around, though, Google is seeking the help of the industry and is getting fellow hyperscalers as well as HPC experts to help flesh out Kubernetes, which has to span more than just the kind of relatively homogeneous workloads that run – albeit at large scale on clusters with 5,000 to as many as 50,000 nodes – inside of Google.

The key developers who created and extended Borg inside of Google are the same people who have created Kubernetes and who continue to be very influential in this open source project. This includes Brendan Burns, a co-founder of the Kubernetes effort; John Wilkes, principal software engineer at Google who has been working on cluster management systems there for the past eight years; Brian Grant, the technical lead of the Borg container management system, the founder of the Omega add-on to Borg to make it more flexible, and the design lead for Kubernetes; David Oppenheimer, technical lead for Kubernetes; and Eric Brewer, a vice president of infrastructure at Google and a professor at the University of California at Berkeley.

Google has published two technical papers describing its container management system, and it did it backwards. The first one described Omega in October 2013, and Wilkes gave a fascinating presentation at the Large Installation System Administration conference in November 2013 after the publication of the Omega paper. This shed some light on Google’s Borg system, and this disclosure predates the launch of Kubernetes in the summer of 2014. In the wake of Kubernetes, and a certain amount of confusion about how Omega related to Borg, a second paper describing its view of large scale cluster management was released in April 2015, and we talked to Wilkes after that paper was released to get some more insight into Google’s thinking about containers and cluster scheduling.

Now, the top Google techies who work on Borg, Omega, and Kubernetes have published a less technical paper in ACM Queue that describes some of the lessons learned from a decade of container management, providing a little more insight into its thinking and perhaps some indications of where the company would like to see the Kubernetes community push ahead.

Borg, as it turns out, was not the first cluster and application management tool that the search engine giant created. Google, like all companies, has a mix of batch workloads and long-running service workloads, and it partitioned its clusters physically to support these very different kinds of jobs. A program called Babysitter ran the long-running service jobs (like Web and other infrastructure servers) while another called the Global Work Queue ran the batch jobs like the MapReduce big data analytics framework that inspired Hadoop and that is still used today inside Google. But running these applications on different clusters meant that compute capacity was often stranded, and that was unacceptable when Google was growing explosively a decade ago.

Enter software containers, which are by no stretch of the imagination a new concept and were not even a decade ago. One could argue that the PR/SM logical partitions created by clone mainframe maker Amdahl in 1989 were containers, and certainly IBM’s own VM operating system created virtual machines that were, in essence, software containers and still, to this day, underpin the Linux instances on System z mainframes. FreeBSD had jail partitions and Solaris eventually got its own containers, but Google was a Linux shop and it therefore had to do a lot of the grunt work in adding container features to the Linux kernel. The LXC containers that are now part of every Linux distribution were founded on Google’s work, and Docker is an offshoot of this effort.

In a way, Kubernetes is a mix of the approaches taken with the monolithic Borg controller and the more flexible Omega controller (which allowed for multiple workload schedulers to fight for resources rather than just wait in line), but in the end, it is really Google’s third container controller. In time, it might be Kubernetes that ends up being Borg, if its level of sophistication and scale continues to rise. This, we think, is one of Google’s goals. The other, as we have elucidated before, is for Kubernetes to become a de facto container management standard, thereby making it more likely that enterprises building private clouds will be enticed to use the Google Cloud Platform public cloud and its container service, which is based on Kubernetes.

The big thing that can be gleaned from the latest paper out of Google on its container controllers is that the shift from bare metal to containers is a profound one – something that may not be obvious to everyone looking to containers as a better – and, we think, cheaper – way of doing server virtualization and driving server utilization higher. Everything becomes application-centric rather than machine-centric, which is the nirvana that IT shops have been searching for. The workload schedulers, cluster managers, and container controllers work together to get the right capacity to the application when it needs it, whether it is a latency-sensitive job or a batch job that has some slack in it. All that the site reliability engineers and developers care about is how the application is performing, and they can easily see that because all of the APIs and metrics collect data at the application level, not on a per-machine basis.

To do this means adopting containers, period. There is no bare metal at Google, and let that be a lesson to HPC shops or other hyperscalers or cloud builders that think they need to run in bare metal mode. We have heard chatter from people who would know that many of the analytics and data storage services that Amazon Web Services sells as platform services are on bare metal machines, not on its EC2 compute instances with the custom Xen hypervisor underneath, but after reading that, AWS has come back to us and said this is not true. We do not know if AWS is using containers underneath any of its infrastructure, but it clearly has a container service atop it.

The point is, bare metal is not a foregone conclusion, and the tradeoff of a slight performance hit – a whole lot less than full-on server virtualization using a KVM or Xen hypervisor – versus the flexibility of management and deployment that comes through containers is an absolutely fair trade. (If Intel wanted to settle the issue, it could add a container management core to each chip and be done with it, and it could have added two or three specialized cores to Xeon server chips to run hypervisors, too.)

“The isolation and dependency minimization provided by containers have proved quite effective at Google, and the container has become the sole runnable entity supported by the Google infrastructure,” the authors write. “One consequence is that Google has only a small number of OS versions deployed across its entire fleet of machines at any one time, and it needs only a small staff of people to maintain them and push out new versions.”

With Kubernetes, a container is a single runtime with an image of application software to execute on it, and multiple containers that represent the microservices that comprise what we would think of as an application are aggregated into a higher-level construct called a pod. Kubernetes is, first and foremost, a pod management system, keeping the container collectives aligned and running. Borg has a similar architecture, with a Linux container being the lowest level of granularity and an allocation, or alloc for short, being the higher level wrapper for a collection of containers. Borg allows some containers to run outside of allocs, and the authors say “this has been a source of much inconvenience,” probably a very dry understatement indeed. Kubernetes does not let you do this. You can, however, run containers inside of other containers or inside of full-on virtual machines if you want more security and isolation between containers, say in a multi-tenant environment like a public cloud. In fact, this is precisely what Google does with Cloud Platform. A server has a giant container laid down on it, then a KVM hypervisor, and then individual containers are created that expose specific Compute Engine instance types to those buying capacity on that public cloud.
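By way of illustration (this sketch is ours, not from Google's paper; the names and images are hypothetical), a minimal Kubernetes pod grouping two containers looks something like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod                  # hypothetical name
spec:
  containers:
    - name: web                  # the application microservice
      image: example/web:1.0     # hypothetical image
    - name: log-forwarder        # a sidecar sharing the pod's network and volumes
      image: example/logger:1.0  # hypothetical image
```

The two containers are scheduled, started, and stopped as a unit, which is what makes the pod, not the container, the thing Kubernetes actually manages.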

Just like Omega imposed some order on the gathering and use of state information from around Google’s clusters to do a better job of allocating resources to containerized applications running in batch and online mode, Kubernetes is imposing a strict consistency in the APIs that are used to interface with it – unlike Borg, which had a hodge-podge of API types and styles added over the years.

“The design of Kubernetes as a combination of microservices and small control loops is an example of control through choreography – achieving a desired emergent behavior by combining the effects of separate, autonomous entities that collaborate,” Google says in the latest paper. “This is a conscious design choice in contrast to a centralized orchestration system, which may be easier to construct at first but tends to become brittle and rigid over time, especially in the presence of unanticipated errors or state changes.”

The impression we get is that Kubernetes is simply a better, if somewhat less mature, tool than Borg, and that alone should give the entire IT industry pause.

Google is giving the benefit of more than a decade of its experience to the open source community, even after it nudged the AMPLab at Berkeley early on to foster the development of the Mesos system now being commercialized by Mesosphere. This is probably something that it now regrets doing, but we presume that it was not an easy task for Google to create and release Kubernetes. The only other option was to let Mesos become another de facto standard like Hadoop, and we have seen how that movie plays out. Not in Google’s favor. Google is the only hyperscaler that cannot easily shift from homegrown MapReduce to Hadoop, and over time that could mean its costs for infrastructure software development go up.

Categories: Control, Hyperscale

Tags: Borg, Google, Kubernetes, Omega

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/_-DgACiKyDA/

Original article

University of Illinois Transmits Record 57Gbps Through Fiber Optic Lines

An anonymous reader quotes a report from Digital Trends: Engineers at the University of Illinois have set a new record for fiber-optic data transmission, challenging previous theories that fiber optics have a limit on how much data they can carry. The engineers transmitted 57Gbps of error-free data at room temperature. The group, led by Professor Milton Feng, improved on its previous work from 2014, when it achieved 40Gbps. The key words here are “error free,” which is what distinguishes this research from others that claim faster speeds. Feng said, “There is a lot of data out there, but if your data transmission is not fast enough, you cannot use data that’s been collected; you cannot use upcoming technologies that use large data streams, like virtual reality. The direction toward fiber-optic communication is going to increase because there’s a higher speed data rate, especially over distance.”

Engadget writes in an update to a similar report: “Reader Tanj notes that this is specifically a record for VCSEL (vertical cavity surface-emitting laser) fiber, not fiber as a whole.”

Original URL: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/6pkHWtTMNto/university-of-illinois-transmits-record-57gbps-through-fiber-optic-lines

Original article

Babel 6 – useless by default

One of the hardest things about learning modern JavaScript programming is wrapping your head around the build systems – all the non-programming stuff you have to get going to run your code.

And for mortal humans it is *really hard* to grasp what the heck this complex ecosystem of build tools does and how they fit together, how to configure them and how to drive them.  REALLY hard.  Beginner JavaScript programmers who wish to do modern development face a massive learning curve just to get started. If you think it’s easy then you have climbed the learning curve and done your learning already and forgotten how hard it was.

So the easier the build tools are to install and use, the better.  Ideally, for common use cases, the end user will need to do precisely nothing apart from install the build tool to get basic and even advanced level usage out of it.

And Babel is one of the core parts of that build system.  It compiles ES2015 code to ES5 that can be run on any browser.
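For instance, here is the kind of rewrite Babel performs (a hand-written approximation, not actual Babel output):

```javascript
// ES2015 source (what you write):
//   const add = (a, b) => a + b;

// Roughly the ES5 that Babel emits, runnable in any browser:
var add = function add(a, b) {
  return a + b;
};

console.log(add(2, 3)); // 5
```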

Out of the box, Babel doesn’t do anything.  Literally useless by default.  Want to do something?  You MUST configure it.  And of course that means you have to start learning how to drive and configure Babel.

The first step in learning a new system is the hardest and most confusing and most time consuming.  To configure one tiny thing a vast array of questions come up.  Am I really meant to do this?  Should I be changing this setting?  Why do I need to do this? What does it do?  Am I breaking something?  Are there more things to configure? Am I working with the latest information or has the system changed since the documentation I am reading?  Why have I found three sets of conflicting instructions to do this configuration? And the biggest question of all – why do the instructions on this web page not match what I see on my screen? And then of course, “I’ve made the changes as instructed, why doesn’t it work, what’s wrong?”. All these questions come from a hazy fog of ignorance of what the software is, how it holds together, what its purpose is and why it works the way it does. You groan inwardly and sigh – fuck, how deep is this “simple” rabbit hole going to be? EVERYONE goes through this phase of learning a new technology.

If a build tool requires the user to make even one single configuration setting then it has imposed a huge learning cost on that user.  Possibly many hours of time and possibly (even probably) failure in trying to reach their goal.  For many users the cognitive load will be too high.  And the user may not even succeed in doing configuration that the developers consider to be basic and trivial.

Conversely, if a build tool requires a user to configure *absolutely nothing*, then it has saved the end user hours and driven them to a “pit of success”.

Too many JavaScript build tools think that developers care about the build tool.  They think that developers want to understand the build tools, and want to learn and configure and program those build tools.  I’m not here to program build tools.  I am here to program application code.  I want my JavaScript build tools to anticipate likely use cases and come pre-configured to do as much as possible out of the box.

Experienced Babel 6 users will dismiss my assertion that configuration is hard. “It’s easy!” they’ll say. “You change one file: .babelrc, npm install your presets, install your plugins, update your webpack config and you’re good to go! It’s 60 seconds’ work, so you are speaking garbage.”
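For context, the “60 seconds” those users are describing is roughly the following (Babel 6 era package names; treat the exact details as illustrative):

```
# install the compiler core and the ES2015 preset
npm install --save-dev babel-core babel-preset-es2015

# .babelrc - the one file you "just change"
{
  "presets": ["es2015"]
}
```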

Experienced developers forget how incredibly hard, time consuming and confusing it is for inexperienced users to do those 60 seconds’ worth of tasks.

It should only be necessary to configure JavaScript build tools for optimisation of final production code.  Forcing users to configure everything up front is premature optimization, wastes users’ valuable time, imposes a cognitive load on the user and makes the entire JavaScript development process massively more complex and hard.  For some users trying to get started, it will kill their enthusiasm and kill the entire process for them.

So when Babel 6 decided that it would do nothing out of the box, it did the opposite of conventional wisdom in software usability, which is to include sane defaults that anticipate the most common use cases and provide them, ideally with zero requirement for setup and configuration.

Babel should come pre-configured with the kitchen sink of JavaScript development – async/await, decorators, and so on.  They should throw everything in there that a developer might want to use.  The expert use case for Babel is to strip it down to the minimum; it should not be the basic use case to build it up to usable.  I shouldn’t need to even think about how to configure Babel until I’m so far into my development process that I have already scaled the other essential learning curves, such as learning ES2015 and browser programming.

Babel wants to exist, be seen and heard and talked about.  But in fact it is plumbing that you should really never have to give a thought to unless you’re walking a very unusual road.  Babel’s demand that you configure it is like the taps in your kitchen needing you to get under the house with a wrench and hook up the hot water yourself, install a hot water tank, install solar panels and set the temperature before you can use them.  Not stuff you should need to do.  Babel should generally be neither seen nor heard.  The Internet should not be plastered with blog posts instructing you on how to configure Babel – some right, some wrong, some out of date, some conflicting.  The developers should have configured it once, properly, so you don’t need to think about it.

And finally, the cost to the Babel project itself is support issues and questions from people who haven’t managed to get things configured, or didn’t even know they were meant to configure anything – only that something isn’t working.  The Babel project is shooting itself in the foot and making work for itself. The Babel developers should always be focused on how to reduce and remove the need for configuration, to find effective ways to preconfigure, and to find ways to let people use Babel with ever less knowledge that it even exists.

OK I’m headed back to see how to configure Babel to use async and await.  I really wish it was configured out of the box. What a waste of time.

P.S. and by the way I do know about Babel presets. That’s too much configuration too.  The right amount of configuration is none.

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/Gh5d5xuCq8o/babel-6-useless-by-default-lesson-in.html

Original article

RS-232 for Commodore PET and Dialing a BBS Over WiFi

Commodore PET running WordPro Four Plus.

I’ve owned a Commodore PET* 8032 for a few years now. I’ve been able to download and run many different programs for it, like the WordPro you see above. But one thing always remained elusive: I’ve long wanted to connect it to a standard RS-232 device and use it as a terminal. The PET’s classic shape, green monochrome monitor, and 80 column display all lend themselves perfectly to life as a terminal.

Like its much more popular successors, it too lacked proper RS-232 UART hardware. Adding a modem meant you either had to purchase an IEEE-488 enabled modem (Commodore made the 8010), purchase an add-on board for your PET, or use the existing parallel user port to “bit-bang” RS-232 serial signals. The latter is exactly what the Commodore VIC-20, C64 and C128 do – simulate RS-232 on user port pins by the CPU rapidly turning outputs on and off. They even have KERNAL ROM code (albeit broken at high speeds) that does the RS-232 handling for you.

The PET lacks this ROM code, but it can be added to drive RS-232 TTL signals over the user port. I found two methods that do this – a commercial product and a freeware one.

Before we continue, please – if you attempt any of this, make sure you understand the difference between RS-232 TTL level signals (0v to +5v) and proper RS-232 level signals (-13v to +13v or more). Connecting proper RS-232 level signals to your PET will damage your computer and make you sad. See this explanation from SparkFun about the differences in RS-232 levels.

The first was McTerm which was produced by Madison Computer. I knew of this company since I owned their McPen lightpen system for the VIC-20 and C64 but I didn’t know their pedigree went that far back. It was sold as three parts– software on floppy, a ROM chip that had to be installed inside the PET, and a user port cable that connected to the RS-232 device. I located the software and the ROM online but I’ve never actually seen the user port cable before so this was going to be challenging.

The first step was to create the ROM using an EPROM. On the PET 8032, the ROM slot is UD12, which maps to memory location $9000. The ROM code was only 2 Kbytes but I only had 4 Kbyte EPROMs. That’s OK, I just filled the other half with 0xFF. The next problem was the PET ROM slot expected a 2532 style pinout but my EPROM was a 2732, which has a slightly different pinout. Luckily, this can be overcome by making an adapter carrier to swap 3 of the pins around. This site was useful in creating the adapter so I won’t go into that here. (Note: There are two adapters on that site; make sure you’re building the 2732 -> 2532 one.)

Next was the software, which was easy enough to transfer to a 1541 floppy disk that can be read with the IEEE-488 enabled Commodore 2031 Single Floppy Disk drive. I put it as the first item on the disk so the “shift-run/stop” trick will load and run the first item on the disk.

Finally, I needed to figure out how to make the cable. I was going to need to test the user port pins to locate which ones the program was using. I examined how the VIC-20 and C64 do RS-232 over the user port first. Immediately, I found that pins B and C were tied together for receive (RX). Pin C is PA0, which is a GPIO pin, and B is /FLAG2, which I believe is for an interrupt. This makes sense since you want to begin processing incoming data as soon as possible. The PET user port pin B is CA1, which is also for an interrupt. I had a hunch it may be used the same way.

To test the pins, I tied pins B and C together and connected to a USB RS-232 TTL adapter. I used a terminal program called CoolTerm, set the baud rate properly and tried sending characters. Nothing. I then tried B and D. Nothing. I kept trying until I landed on B and F. This DID give me something on the PET screen. It wasn’t correct, but it was receiving something.

I repeated this hunting for the transmit (TX) pin, but this time only on a single pin. I found pin H was being used for transmit; again the characters from the PET weren’t recognizable, but something was being transmitted.

Next I wanted to troubleshoot the characters not being displayed right. My first thought was that maybe it was the wrong number of data or stop bits, or even parity. I tried many different combinations: 7n1, 7e1, 8e2, etc. None of them seemed to make a difference. Typing the alphabet “abcdef..” seemed to return the alphabet, but in seemingly reverse order with some other characters interspersed.

I decided to get the scope out and look at the differences between the USB RS-232 and PET signals. I decided on the ‘0’ character since it’s the same for ASCII and PETSCII just in case that might be part of the problem. Below is a comparison of the two.

Top is a Mac and USB Serial TTL cable. Bottom is a Commodore PET transmitting via user port on pin H.

Immediately you can see the issue. The Commodore PET is using logic low for false and logic high for true (which I’ve learned is called “non-inverse”). Standard RS-232 TTL signals are the “inverse” of this, using logic high for false and logic low for true. This would explain what I’m seeing, since the bits are reversed. I connected the pins through a 7404 inverting IC to invert the signals to and from the PET.
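Here’s a small sketch (my own illustration, not from the scope traces) of why inversion scrambles characters. The character ‘0’ is 0x30 in both ASCII and PETSCII; shown LSB-first, as the data bits appear on the wire:

```javascript
// Render the 8 data bits of a byte as they are sent on the wire, LSB first.
function bits(byte) {
  return Array.from({ length: 8 }, function (_, i) {
    return (byte >> i) & 1;
  }).join("");
}

var normal = bits(0x30);           // what the Mac's UART expects
var inverted = bits(~0x30 & 0xFF); // what a non-inverted PET actually drives

console.log(normal);   // "00001100"
console.log(inverted); // "11110011" - every bit flipped, hence the garbage
```

A UART sampling the inverted line reads a completely different byte, which matches the “reversed alphabet with junk interspersed” symptom.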

Commodore PET 8032 and inverting circuit.

This yielded partial success. I was now able to send characters to the Commodore PET.

Commodore PET displaying Hello World message sent from a Mac over RS-232.

Sending characters from the PET to the USB RS-232 TTL adapter revealed that the PET was setting bit 7 high. If bit 7 were set low, it would be working fine. I’ve yet to figure this out. If you have an idea, leave a message in the comments.

I later found that line 1070 of McTerm’s BASIC code provides a way to use inverted RS-232, which does work without the inverting circuit.

1070 sysa :rem ***** use a for regular modems, a+36 to invert

The second method was found in Transactor Magazine issue 3 volume 6. It included a type-in terminal program (simply called “Terminal v11”) and simple instructions for building a user port cable. I believe this program was created by Steve Punter, who also created the only known BBS program for the Commodore PET. Being a type-in freebie in a magazine, it wasn’t as full featured as McTerm, but it does do automatic PETSCII/ASCII translation and has file transfers using an early version of the Punter protocol. It is locked to 300 baud, however.

A portion of the Commodore PET Terminal type in program.

Next up was the software. I really didn’t relish the idea of reliving that part of my childhood and typing in all of those DATA statements. Modern technology to the rescue, in the form of a free online OCR service. Much to my surprise, this service worked extremely well. I did have to process each column of code separately by extracting each from the PDF as a JPG. Most of the OCR errors were in the BASIC program, but even there the count was dramatically lower than what I expected. Between the two ML programs with the DATA statements, there was only a single error! I later found version 12 of Terminal was available here.

This time, the PET user port pins were listed. Pins B and L are for RX and pin C is for TX. I swapped my user port adapter cable around to match this pinout, ran the signals through the inverter circuit and tried it. Immediate success in both directions!

Commodore PET and MacBook Air communicating over a RS-232 serial connection.

Now that I have a working RS-232 cable and software for the PET, we can put it to use. I connected it to a SparkFun ESP8266 breakout board. This board connects over WiFi and can support a standard Hayes modem AT command set with the right firmware.

ESP 8266 wired to Commodore PET user port edge connector through a 7404 inverter circuit.

With this adapter, I’m able to “dial” into BBS systems that are accessible via IP. One such board is Level 29 which is run by @FozzTexx.

ATDT bbs.fozztexx.com:23

Commodore PET dialed into Level 29 BBS over WiFi.

So, was non-inverted RS-232 TTL a standard 30 years ago since two separate terminal programs used it? When did inverted RS-232 TTL become the standard?

So, until I can figure out what’s wrong with McTerm transmitting with bit 7 set, use Terminal instead and you can use RS-232 on your PET.

*Actually, Commodore dropped the PET moniker shortly after they introduced the line and changed it to just CBM. The name PET just fits better I think.

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/NzwwpodnotI/

Original article

List of Open Source Web Apps, Alternative to Paid Solutions


An awesome curated list of carefully crafted open source web applications – learn, fork, contribute and, most importantly, enjoy!

Whether you want to develop an app, write tests for a feature, or implement a feature and don’t know how to go about it, there might just be an app/repository here with the solution to your problem.

Even if you are just a developer, manager or co-founder looking for a sample app to demo or test your ideas, it might just be right here.



Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/QVPKcSzEE8I/awesome-opensource-webapps

Original article

Complete Node.js CheatSheet

// Node.js CheatSheet. // Download the Node.js source code or a pre-built installer for your platform, and start developing today. // Download: http://nodejs.org/download/ // More: http://nodejs.org/api/all.html // 0. Synopsis. // http://nodejs.org/api/synopsis.html var http = require(http); // An example of a web server written with Node which responds with ‘Hello World’. // To run the server, put the code into a file called example.js and execute it with the node program. http.createServer(function (request, response) { response.writeHead(200, {Content-Type: text/plain}); response.end(Hello Worldn); }).listen(8124); console.log(Server running at; // 1. Global Objects. // http://nodejs.org/api/globals.html // In browsers, the top-level scope is the global scope. // That means that in browsers if you’re in the global scope var something will define a global variable. // In Node this is different. The top-level scope is not the global scope; var something inside a Node module will be local to that module. __filename; // The filename of the code being executed. (absolute path) __dirname; // The name of the directory that the currently executing script resides in. (absolute path) module; // A reference to the current module. In particular module.exports is used for defining what a module exports and makes available through require(). exports; // A reference to the module.exports that is shorter to type. process; // The process object is a global object and can be accessed from anywhere. It is an instance of EventEmitter. Buffer; // The Buffer class is a global type for dealing with binary data directly. // 2. Console. // http://nodejs.org/api/console.html console.log([data], []); // Prints to stdout with newline. console.info([data], []); // Same as console.log. console.error([data], []); // Same as console.log but prints to stderr. console.warn([data], []); // Same as console.error. console.dir(obj); // Uses util.inspect on obj and prints resulting string to stdout. 
console.time(label);                    // Mark a time.
console.timeEnd(label);                 // Finish timer, record output.
console.trace(label);                   // Print a stack trace to stderr of the current position.
console.assert(expression, [message]);  // Same as assert.ok() where if the expression evaluates as false throw an AssertionError with message.

// 3. Timers.
// http://nodejs.org/api/timers.html

setTimeout(callback, delay, [arg], [...]);   // To schedule execution of a one-time callback after delay milliseconds. Optionally you can also pass arguments to the callback.
clearTimeout(t);                             // Stop a timer that was previously created with setTimeout().
setInterval(callback, delay, [arg], [...]);  // To schedule the repeated execution of callback every delay milliseconds. Optionally you can also pass arguments to the callback.
clearInterval(t);                            // Stop a timer that was previously created with setInterval().
setImmediate(callback, [arg], [...]);        // To schedule the "immediate" execution of callback after I/O events callbacks and before setTimeout and setInterval.
clearImmediate(immediateObject);             // Stop a timer that was previously created with setImmediate().

unref();  // Allow you to create a timer that is active but if it is the only item left in the event loop, node won't keep the program running.
ref();    // If you had previously unref()d a timer you can call ref() to explicitly request the timer hold the program open.

// 4. Modules.
// http://nodejs.org/api/modules.html

var module = require('./module.js');    // Loads the module module.js in the same directory.
module.require('./another_module.js');  // Load another_module as if require() was called from the module itself.

module.id;        // The identifier for the module. Typically this is the fully resolved filename.
module.filename;  // The fully resolved filename to the module.
module.loaded;    // Whether or not the module is done loading, or is in the process of loading.
module.parent;    // The module that required this one.
module.children;  // The module objects required by this one.

exports.area = function (r) {
  return 3.14 * r * r;
};

// If you want the root of your module's export to be a function (such as a constructor)
// or if you want to export a complete object in one assignment instead of building it one property at a time,
// assign it to module.exports instead of exports.
module.exports = function(width) {
  return {
    area: function() {
      return width * width;
    }
  };
};

// 5. Process.
// http://nodejs.org/api/process.html

process.on('exit', function(code) {});              // Emitted when the process is about to exit.
process.on('uncaughtException', function(err) {});  // Emitted when an exception bubbles all the way back to the event loop. (should not be used)

process.stdout;    // A writable stream to stdout.
process.stderr;    // A writable stream to stderr.
process.stdin;     // A readable stream for stdin.
process.argv;      // An array containing the command line arguments.
process.env;       // An object containing the user environment.
process.execPath;  // This is the absolute pathname of the executable that started the process.
process.execArgv;  // This is the set of node-specific command line options from the executable that started the process.
process.arch;      // What processor architecture you're running on: 'arm', 'ia32', or 'x64'.
process.config;    // An Object containing the JavaScript representation of the configure options that were used to compile the current node executable.
process.pid;       // The PID of the process.
process.platform;  // What platform you're running on: 'darwin', 'freebsd', 'linux', 'sunos' or 'win32'.
process.title;     // Getter/setter to set what is displayed in 'ps'.
process.version;   // A compiled-in property that exposes NODE_VERSION.
process.versions;  // A property exposing version strings of node and its dependencies.

process.abort();   // This causes node to emit an abort. This will cause node to exit and generate a core file.
process.chdir(dir);       // Changes the current working directory of the process or throws an exception if that fails.
process.cwd();            // Returns the current working directory of the process.
process.exit([code]);     // Ends the process with the specified code. If omitted, exit uses the 'success' code 0.
process.getgid();         // Gets the group identity of the process.
process.setgid(id);       // Sets the group identity of the process.
process.getuid();         // Gets the user identity of the process.
process.setuid(id);       // Sets the user identity of the process.
process.getgroups();      // Returns an array with the supplementary group IDs.
process.setgroups(grps);  // Sets the supplementary group IDs.
process.initgroups(user, extra_grp);  // Reads /etc/group and initializes the group access list, using all groups of which the user is a member.
process.kill(pid, [signal]);          // Send a signal to a process. pid is the process id and signal is the string describing the signal to send.
process.memoryUsage();       // Returns an object describing the memory usage of the Node process measured in bytes.
process.nextTick(callback);  // On the next loop around the event loop call this callback.
process.maxTickDepth;        // Callbacks passed to process.nextTick will usually be called at the end of the current flow of execution, and are thus approximately as fast as calling a function synchronously.
process.umask([mask]);       // Sets or reads the process's file mode creation mask.
process.uptime();            // Number of seconds Node has been running.
process.hrtime();            // Returns the current high-resolution real time in a [seconds, nanoseconds] tuple Array.

// 6. Child Process.
// Node provides a tri-directional popen facility through the child_process module.
// It is possible to stream data through a child's stdin, stdout, and stderr in a fully non-blocking way.
// http://nodejs.org/api/child_process.html

ChildProcess;  // Class. ChildProcess is an EventEmitter.
child.stdin;      // A Writable Stream that represents the child process's stdin.
child.stdout;     // A Readable Stream that represents the child process's stdout.
child.stderr;     // A Readable Stream that represents the child process's stderr.
child.pid;        // The PID of the child process.
child.connected;  // If .connected is false, it is no longer possible to send messages.

child.kill([signal]);               // Send a signal to the child process.
child.send(message, [sendHandle]);  // When using child_process.fork() you can write to the child using child.send(message, [sendHandle]) and messages are received by a 'message' event on the child.
child.disconnect();                 // Close the IPC channel between parent and child, allowing the child to exit gracefully once there are no other connections keeping it alive.

child_process.spawn(command, [args], [options]);              // Launches a new process with the given command, with command line arguments in args. If omitted, args defaults to an empty Array.
child_process.exec(command, [options], callback);             // Runs a command in a shell and buffers the output.
child_process.execFile(file, [args], [options], [callback]);  // Runs a command in a shell and buffers the output.
child_process.fork(modulePath, [args], [options]);            // This is a special case of the spawn() functionality for spawning Node processes. In addition to having all the methods in a normal ChildProcess instance, the returned object has a communication channel built-in.

// 7. Util.
// These functions are in the module 'util'. Use require('util') to access them.
// http://nodejs.org/api/util.html

util.format(format, [...]);  // Returns a formatted string using the first argument as a printf-like format. (%s, %d, %j)
util.debug(string);          // A synchronous output function. Will block the process and output string immediately to stderr.
util.error([...]);           // Same as util.debug() except this will output all arguments immediately to stderr.
util.puts([...]);            // A synchronous output function.
// Will block the process and output all arguments to stdout with newlines after each argument.
util.print([...]);             // A synchronous output function. Will block the process, cast each argument to a string then output to stdout. (no newlines)
util.log(string);              // Output with timestamp on stdout.
util.inspect(object, [opts]);  // Return a string representation of object, which is useful for debugging. (options: showHidden, depth, colors, customInspect)
util.isArray(object);   // Returns true if the given "object" is an Array. false otherwise.
util.isRegExp(object);  // Returns true if the given "object" is a RegExp. false otherwise.
util.isDate(object);    // Returns true if the given "object" is a Date. false otherwise.
util.isError(object);   // Returns true if the given "object" is an Error. false otherwise.
util.inherits(constructor, superConstructor);  // Inherit the prototype methods from one constructor into another.

// 8. Events.
// All objects which emit events are instances of events.EventEmitter. You can access this module by doing: require("events");
// To access the EventEmitter class, require('events').EventEmitter.
// All EventEmitters emit the event 'newListener' when new listeners are added and 'removeListener' when a listener is removed.
// http://nodejs.org/api/events.html

emitter.addListener(event, listener);     // Adds a listener to the end of the listeners array for the specified event.
emitter.on(event, listener);              // Same as emitter.addListener().
emitter.once(event, listener);            // Adds a one time listener for the event. This listener is invoked only the next time the event is fired, after which it is removed.
emitter.removeListener(event, listener);  // Remove a listener from the listener array for the specified event.
emitter.removeAllListeners([event]);      // Removes all listeners, or those of the specified event.
emitter.setMaxListeners(n);               // By default EventEmitters will print a warning if more than 10 listeners are added for a particular event.
emitter.listeners(event);                    // Returns an array of listeners for the specified event.
emitter.emit(event, [arg1], [arg2], [...]);  // Execute each of the listeners in order with the supplied arguments. Returns true if event had listeners, false otherwise.
EventEmitter.listenerCount(emitter, event);  // Return the number of listeners for a given event.

// 9. Stream.
// A stream is an abstract interface implemented by various objects in Node. For example a request to an HTTP server is a stream, as is stdout.
// Streams are readable, writable, or both. All streams are instances of EventEmitter.
// http://nodejs.org/api/stream.html

// The Readable stream interface is the abstraction for a source of data that you are reading from.
// In other words, data comes out of a Readable stream.
// A Readable stream will not start emitting data until you indicate that you are ready to receive it.
// Examples of readable streams include: http responses on the client, http requests on the server, fs read streams,
// zlib streams, crypto streams, tcp sockets, child process stdout and stderr, process.stdin.

var readable = getReadableStreamSomehow();

readable.on('readable', function() {});   // When a chunk of data can be read from the stream, it will emit a 'readable' event.
readable.on('data', function(chunk) {});  // If you attach a data event listener, then it will switch the stream into flowing mode, and data will be passed to your handler as soon as it is available.
readable.on('end', function() {});        // This event fires when there will be no more data to read.
readable.on('close', function() {});      // Emitted when the underlying resource (for example, the backing file descriptor) has been closed. Not all streams will emit this.
readable.on('error', function() {});      // Emitted if there was an error receiving data.

// The read() method pulls some data out of the internal buffer and returns it. If there is no data available, then it will return null.
// This method should only be called in non-flowing mode. In flowing-mode, this method is called automatically until the internal buffer is drained.
readable.read([size]);

readable.setEncoding(encoding);  // Call this function to cause the stream to return strings of the specified encoding instead of Buffer objects.
readable.resume();               // This method will cause the readable stream to resume emitting data events.
readable.pause();                // This method will cause a stream in flowing-mode to stop emitting data events.
readable.pipe(destination, [options]);  // This method pulls all the data out of a readable stream, and writes it to the supplied destination, automatically managing the flow so that the destination is not overwhelmed by a fast readable stream.
readable.unpipe([destination]);  // This method will remove the hooks set up for a previous pipe() call. If the destination is not specified, then all pipes are removed.
readable.unshift(chunk);         // This is useful in certain cases where a stream is being consumed by a parser, which needs to "un-consume" some data that it has optimistically pulled out of the source, so that the stream can be passed on to some other party.

// The Writable stream interface is an abstraction for a destination that you are writing data to.
// Examples of writable streams include: http requests on the client, http responses on the server, fs write streams,
// zlib streams, crypto streams, tcp sockets, child process stdin, process.stdout, process.stderr.

var writable = getWritableStreamSomehow();

writable.write(chunk, [encoding], [callback]);  // This method writes some data to the underlying system, and calls the supplied callback once the data has been fully handled.
writable.once('drain', write);                  // If a writable.write(chunk) call returns false, then the drain event will indicate when it is appropriate to begin writing more data to the stream.
writable.end([chunk], [encoding], [callback]);  // Call this method when no more data will be written to the stream.

writable.on('finish', function() {});     // When the end() method has been called, and all data has been flushed to the underlying system, this event is emitted.
writable.on('pipe', function(src) {});    // This is emitted whenever the pipe() method is called on a readable stream, adding this writable to its set of destinations.
writable.on('unpipe', function(src) {});  // This is emitted whenever the unpipe() method is called on a readable stream, removing this writable from its set of destinations.
writable.on('error', function(src) {});   // Emitted if there was an error when writing or piping data.

// Duplex streams are streams that implement both the Readable and Writable interfaces. See above for usage.
// Examples of Duplex streams include: tcp sockets, zlib streams, crypto streams.

// Transform streams are Duplex streams where the output is in some way computed from the input. They implement both the Readable and Writable interfaces. See above for usage.
// Examples of Transform streams include: zlib streams, crypto streams.

// 10. File System.
// To use this module do require('fs').
// All the methods have asynchronous and synchronous forms.
// http://nodejs.org/api/fs.html

fs.rename(oldPath, newPath, callback);  // Asynchronous rename. No arguments other than a possible exception are given to the completion callback.
fs.renameSync(oldPath, newPath);        // Synchronous rename.
fs.ftruncate(fd, len, callback);        // Asynchronous ftruncate. No arguments other than a possible exception are given to the completion callback.
fs.ftruncateSync(fd, len);              // Synchronous ftruncate.
fs.truncate(path, len, callback);       // Asynchronous truncate. No arguments other than a possible exception are given to the completion callback.
fs.truncateSync(path, len);             // Synchronous truncate.
fs.chown(path, uid, gid, callback);   // Asynchronous chown. No arguments other than a possible exception are given to the completion callback.
fs.chownSync(path, uid, gid);         // Synchronous chown.
fs.fchown(fd, uid, gid, callback);    // Asynchronous fchown. No arguments other than a possible exception are given to the completion callback.
fs.fchownSync(fd, uid, gid);          // Synchronous fchown.
fs.lchown(path, uid, gid, callback);  // Asynchronous lchown. No arguments other than a possible exception are given to the completion callback.
fs.lchownSync(path, uid, gid);        // Synchronous lchown.
fs.chmod(path, mode, callback);       // Asynchronous chmod. No arguments other than a possible exception are given to the completion callback.
fs.chmodSync(path, mode);             // Synchronous chmod.
fs.fchmod(fd, mode, callback);        // Asynchronous fchmod. No arguments other than a possible exception are given to the completion callback.
fs.fchmodSync(fd, mode);              // Synchronous fchmod.
fs.lchmod(path, mode, callback);      // Asynchronous lchmod. No arguments other than a possible exception are given to the completion callback.
fs.lchmodSync(path, mode);            // Synchronous lchmod.
fs.stat(path, callback);   // Asynchronous stat. The callback gets two arguments (err, stats) where stats is a fs.Stats object.
fs.statSync(path);         // Synchronous stat. Returns an instance of fs.Stats.
fs.lstat(path, callback);  // Asynchronous lstat. The callback gets two arguments (err, stats) where stats is a fs.Stats object. lstat() is identical to stat(), except that if path is a symbolic link, then the link itself is stat-ed, not the file that it refers to.
fs.lstatSync(path);        // Synchronous lstat. Returns an instance of fs.Stats.
fs.fstat(fd, callback);    // Asynchronous fstat. The callback gets two arguments (err, stats) where stats is a fs.Stats object. fstat() is identical to stat(), except that the file to be stat-ed is specified by the file descriptor fd.
fs.fstatSync(fd);          // Synchronous fstat. Returns an instance of fs.Stats.
fs.link(srcpath, dstpath, callback);  // Asynchronous link. No arguments other than a possible exception are given to the completion callback.
fs.linkSync(srcpath, dstpath);        // Synchronous link.
fs.symlink(srcpath, dstpath, [type], callback);  // Asynchronous symlink. No arguments other than a possible exception are given to the completion callback. The type argument can be set to 'dir', 'file', or 'junction' (default is 'file') and is only available on Windows (ignored on other platforms).
fs.symlinkSync(srcpath, dstpath, [type]);        // Synchronous symlink.
fs.readlink(path, callback);  // Asynchronous readlink. The callback gets two arguments (err, linkString).
fs.readlinkSync(path);        // Synchronous readlink. Returns the symbolic link's string value.
fs.unlink(path, callback);    // Asynchronous unlink. No arguments other than a possible exception are given to the completion callback.
fs.unlinkSync(path);          // Synchronous unlink.
fs.realpath(path, [cache], callback);  // Asynchronous realpath. The callback gets two arguments (err, resolvedPath).
fs.realpathSync(path, [cache]);        // Synchronous realpath. Returns the resolved path.
fs.rmdir(path, callback);     // Asynchronous rmdir. No arguments other than a possible exception are given to the completion callback.
fs.rmdirSync(path);           // Synchronous rmdir.
fs.mkdir(path, [mode], callback);  // Asynchronous mkdir. No arguments other than a possible exception are given to the completion callback. mode defaults to 0777.
fs.mkdirSync(path, [mode]);        // Synchronous mkdir.
fs.readdir(path, callback);   // Asynchronous readdir. Reads the contents of a directory. The callback gets two arguments (err, files) where files is an array of the names of the files in the directory excluding '.' and '..'.
fs.readdirSync(path);         // Synchronous readdir. Returns an array of filenames excluding '.' and '..'.
fs.close(fd, callback);       // Asynchronous close. No arguments other than a possible exception are given to the completion callback.
fs.closeSync(fd);                         // Synchronous close.
fs.open(path, flags, [mode], callback);   // Asynchronous file open.
fs.openSync(path, flags, [mode]);         // Synchronous version of fs.open().
fs.utimes(path, atime, mtime, callback);  // Change file timestamps of the file referenced by the supplied path.
fs.utimesSync(path, atime, mtime);        // Synchronous version of fs.utimes().
fs.futimes(fd, atime, mtime, callback);   // Change the file timestamps of a file referenced by the supplied file descriptor.
fs.futimesSync(fd, atime, mtime);         // Synchronous version of fs.futimes().
fs.fsync(fd, callback);                   // Asynchronous fsync. No arguments other than a possible exception are given to the completion callback.
fs.fsyncSync(fd);                         // Synchronous fsync.
fs.write(fd, buffer, offset, length, position, callback);  // Write buffer to the file specified by fd.
fs.writeSync(fd, buffer, offset, length, position);        // Synchronous version of fs.write(). Returns the number of bytes written.
fs.read(fd, buffer, offset, length, position, callback);   // Read data from the file specified by fd.
fs.readSync(fd, buffer, offset, length, position);         // Synchronous version of fs.read. Returns the number of bytesRead.
fs.readFile(filename, [options], callback);  // Asynchronously reads the entire contents of a file.
fs.readFileSync(filename, [options]);        // Synchronous version of fs.readFile. Returns the contents of the filename. If the encoding option is specified then this function returns a string. Otherwise it returns a buffer.
fs.writeFile(filename, data, [options], callback);   // Asynchronously writes data to a file, replacing the file if it already exists. data can be a string or a buffer.
fs.writeFileSync(filename, data, [options]);         // The synchronous version of fs.writeFile.
fs.appendFile(filename, data, [options], callback);  // Asynchronously append data to a file, creating the file if it does not yet exist. data can be a string or a buffer.
fs.appendFileSync(filename, data, [options]);  // The synchronous version of fs.appendFile.
fs.watch(filename, [options], [listener]);     // Watch for changes on filename, where filename is either a file or a directory. The returned object is a fs.FSWatcher. The listener callback gets two arguments (event, filename). event is either 'rename' or 'change', and filename is the name of the file which triggered the event.
fs.exists(path, callback);  // Test whether or not the given path exists by checking with the file system. Then call the callback argument with either true or false. (should not be used)
fs.existsSync(path);        // Synchronous version of fs.exists. (should not be used)

// fs.Stats: objects returned from fs.stat(), fs.lstat() and fs.fstat() and their synchronous counterparts are of this type.
stats.isFile();
stats.isDirectory();
stats.isBlockDevice();
stats.isCharacterDevice();
stats.isSymbolicLink();  // (only valid with fs.lstat())
stats.isFIFO();
stats.isSocket();

fs.createReadStream(path, [options]);   // Returns a new ReadStream object.
fs.createWriteStream(path, [options]);  // Returns a new WriteStream object.

// 11. Path.
// Use require('path') to use this module.
// This module contains utilities for handling and transforming file paths.
// Almost all these methods perform only string transformations.
// The file system is not consulted to check whether paths are valid.
// http://nodejs.org/api/path.html

path.normalize(p);                   // Normalize a string path, taking care of '..' and '.' parts.
path.join([path1], [path2], [...]);  // Join all arguments together and normalize the resulting path.
path.resolve([from, ...], to);       // Resolves 'to' to an absolute path.
path.relative(from, to);             // Solve the relative path from 'from' to 'to'.
path.dirname(p);                     // Return the directory name of a path. Similar to the Unix dirname command.
path.basename(p, [ext]);             // Return the last portion of a path. Similar to the Unix basename command.
path.extname(p);  // Return the extension of the path, from the last '.' to end of string in the last portion of the path.
path.sep;         // The platform-specific file separator. '\\' or '/'.
path.delimiter;   // The platform-specific path delimiter, ';' or ':'.

// 12. HTTP.
// To use the HTTP server and client one must require('http').
// http://nodejs.org/api/http.html

http.STATUS_CODES;                  // A collection of all the standard HTTP response status codes, and the short description of each.
http.request(options, [callback]);  // This function allows one to transparently issue requests.
http.get(options, [callback]);      // Set the method to GET and calls req.end() automatically.

server = http.createServer([requestListener]);  // Returns a new web server object. The requestListener is a function which is automatically added to the 'request' event.
server.listen(port, [hostname], [backlog], [callback]);  // Begin accepting connections on the specified port and hostname.
server.listen(path, [callback]);     // Start a UNIX socket server listening for connections on the given path.
server.listen(handle, [callback]);   // The handle object can be set to either a server or socket (anything with an underlying _handle member), or a {fd: <n>} object.
server.close([callback]);            // Stops the server from accepting new connections.
server.setTimeout(msecs, callback);  // Sets the timeout value for sockets, and emits a 'timeout' event on the Server object, passing the socket as an argument, if a timeout occurs.
server.maxHeadersCount;              // Limits maximum incoming headers count, equal to 1000 by default. If set to 0 - no limit will be applied.
server.timeout;                      // The number of milliseconds of inactivity before a socket is presumed to have timed out.

server.on('request', function (request, response) { });  // Emitted each time there is a request.
server.on('connection', function (socket) { });          // When a new TCP stream is established.
server.on('close', function () { });                     // Emitted when the server closes.
server.on('checkContinue', function (request, response) { });  // Emitted each time a request with an http Expect: 100-continue is received.
server.on('connect', function (request, socket, head) { });    // Emitted each time a client requests a http CONNECT method.
server.on('upgrade', function (request, socket, head) { });    // Emitted each time a client requests a http upgrade.
server.on('clientError', function (exception, socket) { });    // If a client connection emits an 'error' event - it will be forwarded here.

request.write(chunk, [encoding]);  // Sends a chunk of the body.
request.end([data], [encoding]);   // Finishes sending the request. If any parts of the body are unsent, it will flush them to the stream.
request.abort();                   // Aborts a request.
request.setTimeout(timeout, [callback]);  // Once a socket is assigned to this request and is connected socket.setTimeout() will be called.
request.setNoDelay([noDelay]);            // Once a socket is assigned to this request and is connected socket.setNoDelay() will be called.
request.setSocketKeepAlive([enable], [initialDelay]);  // Once a socket is assigned to this request and is connected socket.setKeepAlive() will be called.

request.on('response', function(response) { });  // Emitted when a response is received to this request. This event is emitted only once.
request.on('socket', function(socket) { });      // Emitted after a socket is assigned to this request.
request.on('connect', function(response, socket, head) { });  // Emitted each time a server responds to a request with a CONNECT method. If this event isn't being listened for, clients receiving a CONNECT method will have their connections closed.
request.on('upgrade', function(response, socket, head) { });  // Emitted each time a server responds to a request with an upgrade. If this event isn't being listened for, clients receiving an upgrade header will have their connections closed.
request.on('continue', function() { });  // Emitted when the server sends a '100 Continue' HTTP response, usually because the request contained 'Expect: 100-continue'. This is an instruction that the client should send the request body.

response.write(chunk, [encoding]);  // This sends a chunk of the response body. If this method is called and response.writeHead() has not been called, it will switch to implicit header mode and flush the implicit headers.
response.writeContinue();           // Sends a HTTP/1.1 100 Continue message to the client, indicating that the request body should be sent.
response.writeHead(statusCode, [reasonPhrase], [headers]);  // Sends a response header to the request.
response.setTimeout(msecs, callback);  // Sets the Socket's timeout value to msecs. If a callback is provided, then it is added as a listener on the 'timeout' event on the response object.
response.setHeader(name, value);  // Sets a single header value for implicit headers. If this header already exists in the to-be-sent headers, its value will be replaced. Use an array of strings here if you need to send multiple headers with the same name.
response.getHeader(name);         // Reads out a header that's already been queued but not sent to the client. Note that the name is case insensitive.
response.removeHeader(name);      // Removes a header that's queued for implicit sending.
response.addTrailers(headers);    // This method adds HTTP trailing headers (a header but at the end of the message) to the response.
response.end([data], [encoding]);  // This method signals to the server that all of the response headers and body have been sent; that server should consider this message complete. The method, response.end(), MUST be called on each response.
response.statusCode;   // When using implicit headers (not calling response.writeHead() explicitly), this property controls the status code that will be sent to the client when the headers get flushed.
response.headersSent;  // Boolean (read-only).
// True if headers were sent, false otherwise.
response.sendDate;  // When true, the Date header will be automatically generated and sent in the response if it is not already present in the headers. Defaults to true.

response.on('close', function () { });  // Indicates that the underlying connection was terminated before response.end() was called or able to flush.
response.on('finish', function() { });  // Emitted when the response has been sent.

message.httpVersion;  // In case of server request, the HTTP version sent by the client. In the case of client response, the HTTP version of the connected-to server.
message.headers;      // The request/response headers object.
message.trailers;     // The request/response trailers object. Only populated after the 'end' event.
message.method;       // The request method as a string. Read only. Example: 'GET', 'DELETE'.
message.url;          // Request URL string. This contains only the URL that is present in the actual HTTP request.
message.statusCode;   // The 3-digit HTTP response status code. E.G. 404.
message.socket;       // The net.Socket object associated with the connection.
message.setTimeout(msecs, callback);  // Calls message.connection.setTimeout(msecs, callback).

// 13. URL.
// This module has utilities for URL resolution and parsing. Call require('url') to use it.
// http://nodejs.org/api/url.html

url.parse(urlStr, [parseQueryString], [slashesDenoteHost]);  // Take a URL string, and return an object.
url.format(urlObj);     // Take a parsed URL object, and return a formatted URL string.
url.resolve(from, to);  // Take a base URL, and a href URL, and resolve them as a browser would for an anchor tag.

// 14. Query String.
// This module provides utilities for dealing with query strings. Call require('querystring') to use it.
// http://nodejs.org/api/querystring.html

querystring.stringify(obj, [sep], [eq]);  // Serialize an object to a query string. Optionally override the default separator ('&') and assignment ('=') characters.
querystring.parse(str, [sep], [eq], [options]);  // Deserialize a query string to an object. Optionally override the default separator ('&') and assignment ('=') characters.

// 15. Assert.
// This module is used for writing unit tests for your applications, you can access it with require('assert').
// http://nodejs.org/api/assert.html

assert.fail(actual, expected, message, operator);  // Throws an exception that displays the values for actual and expected separated by the provided operator.
assert(value, message); assert.ok(value, [message]);  // Tests if value is truthy, it is equivalent to assert.equal(true, !!value, message);
assert.equal(actual, expected, [message]);           // Tests shallow, coercive equality with the equal comparison operator ( == ).
assert.notEqual(actual, expected, [message]);        // Tests shallow, coercive non-equality with the not equal comparison operator ( != ).
assert.deepEqual(actual, expected, [message]);       // Tests for deep equality.
assert.notDeepEqual(actual, expected, [message]);    // Tests for any deep inequality.
assert.strictEqual(actual, expected, [message]);     // Tests strict equality, as determined by the strict equality operator ( === ).
assert.notStrictEqual(actual, expected, [message]);  // Tests strict non-equality, as determined by the strict not equal operator ( !== ).
assert.throws(block, [error], [message]);  // Expects block to throw an error. error can be constructor, RegExp or validation function.
assert.doesNotThrow(block, [message]);     // Expects block not to throw an error, see assert.throws for details.
assert.ifError(value);  // Tests if value is not a false value, throws if it is a true value. Useful when testing the first argument, error in callbacks.

// 16. OS.
// Provides a few basic operating-system related utility functions.
// Use require('os') to access this module.
// http://nodejs.org/api/os.html

os.tmpdir();  // Returns the operating system's default directory for temp files.
os.endianness(); // Returns the endianness of the CPU. Possible values are "BE" or "LE".
os.hostname(); // Returns the hostname of the operating system.
os.type(); // Returns the operating system name.
os.platform(); // Returns the operating system platform.
os.arch(); // Returns the operating system CPU architecture.
os.release(); // Returns the operating system release.
os.uptime(); // Returns the system uptime in seconds.
os.loadavg(); // Returns an array containing the 1, 5, and 15 minute load averages.
os.totalmem(); // Returns the total amount of system memory in bytes.
os.freemem(); // Returns the amount of free system memory in bytes.
os.cpus(); // Returns an array of objects containing information about each CPU/core installed: model, speed (in MHz), and times (an object containing the number of milliseconds the CPU/core spent in: user, nice, sys, idle, and irq).
os.networkInterfaces(); // Get a list of network interfaces.
os.EOL; // A constant defining the appropriate end-of-line marker for the operating system.

// 17. Buffer.
// Buffer is used for dealing with binary data.
// Buffer is similar to an array of integers but corresponds to a raw memory allocation outside the V8 heap.
// http://nodejs.org/api/buffer.html

new Buffer(size); // Allocates a new buffer of size octets.
new Buffer(array); // Allocates a new buffer using an array of octets.
new Buffer(str, [encoding]); // Allocates a new buffer containing the given str. encoding defaults to 'utf8'.
Buffer.isEncoding(encoding); // Returns true if the encoding is a valid encoding argument, or false otherwise.
Buffer.isBuffer(obj); // Tests if obj is a Buffer.
Buffer.concat(list, [totalLength]); // Returns a buffer which is the result of concatenating all the buffers in the list together.
Buffer.byteLength(string, [encoding]); // Gives the actual byte length of a string.
buf.write(string, [offset], [length], [encoding]); // Writes string to the buffer at offset using the given encoding.
buf.toString([encoding], [start], [end]); // Decodes and returns a string from buffer data encoded with encoding (defaults to 'utf8') beginning at start (defaults to 0) and ending at end (defaults to buffer.length).
buf.toJSON(); // Returns a JSON-representation of the Buffer instance, which is identical to the output for JSON Arrays.
buf.copy(targetBuffer, [targetStart], [sourceStart], [sourceEnd]); // Does copy between buffers. The source and target regions can be overlapped.
buf.slice([start], [end]); // Returns a new buffer which references the same memory as the old, but offset and cropped by the start (defaults to 0) and end (defaults to buffer.length) indexes. Negative indexes start from the end of the buffer.
buf.fill(value, [offset], [end]); // Fills the buffer with the specified value.
buf[index]; // Get and set the octet at index.
buf.length; // The size of the buffer in bytes. Note that this is not necessarily the size of the contents.
buffer.INSPECT_MAX_BYTES; // How many bytes will be returned when buffer.inspect() is called. This can be overridden by user modules.

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/FiQFWjadmPA/985b82968d8285987dc3

Original article

The Way of the Gopher: Making the Switch from Node.js to Golang

I’ve dabbled in JavaScript since college, made a few web pages here and there and while JS was always an enjoyable break from C or Java, I regarded it as a fairly limited language, imbued with the special purpose of serving up animations and pretty little things to make users go “ooh” and “aah”. It was the first language I taught anyone who wanted to learn how to code because it was simple enough to pick up and would quickly deliver tangible results to the developer. Smash it together with some HTML and CSS and you have a web page. Beginner programmers love that stuff.

Then something happened two years ago. At that time, I was in a researchy position working mostly on server-side code and app prototypes for Android. It wasn’t long before Node.js popped up on my radar. Backend JavaScript? Who would take that seriously? At best, it seemed like a new attempt to make server-side development easier at the cost of performance, scalability, etc. Maybe it’s just my ingrained developer skepticism, but there’s always been that alarm that goes off in my brain when I read about something being fast and easy and production-level.

Then came the research, the testimonials, the tutorials, the side-projects and 6 months later I realized I had been doing nothing but Node since I first read about it. It was just too easy, especially since I was in the business of prototyping new ideas every couple months. But Node wasn’t just for prototypes and pet projects. Even big boy companies like Netflix had parts of their stack running Node. Suddenly, the world was full of nails and I had found my hammer.

Fast forward another couple months and I’m at my current job as a backend developer for Digg. When I joined, back in April of 2015, the stack at Digg was primarily Python with the exception of two services written in, wait for it, Node. I was even more thrilled to be assigned the task of reworking one of the services which had been causing issues in our pipeline.

Our troublesome Node service had a fairly straightforward purpose. Digg uses Amazon S3 for storage which is peachy, except S3 has no support for batch GET operations. Rather than putting all the onus on our Python web server to request up to 100+ keys at a time from S3, the decision was made to take advantage of Node’s easy async code patterns and great concurrency handling. And so Octo, the S3 content fetching service, was born.

Node Octo performed well except for when it didn’t. Once a day it needed to handle a traffic spike where the requests per minute jumped from 50 to 200+. Also keep in mind that for each request, Octo typically fetches somewhere between 10–100 keys from S3. That’s potentially 20,000 S3 GETs a minute. The logs showed that our service slowed down substantially during these spikes, but the trouble was it didn’t always recover. As such, we were stuck bouncing our EC2 instances every couple weeks after Octo would seize up and fall flat on its face.

The requests to the service also pass along a strict timeout value. After the clock hits X number of milliseconds since receiving the request, Octo is supposed to return to the client whatever it has successfully fetched from S3 and move on. However, even with a max timeout of 1200ms, in Octo’s worst moments we had request handling times spiking up to 10 seconds.

The code was heavily asynchronous and we were caching S3 key values aggressively. Octo was also running across 2 medium EC2 instances which we bumped up to 4.

I reworked the code three times, digging deeper than ever into Node optimizations, gotchas, and tricks for squeezing every last bit of performance out of it. I reviewed benchmarks for popular Node webserver frameworks, like Express or Hapi, vs. Node’s built-in HTTP module. I removed any third party modules that, while nice to have, slowed down code execution. The result was three one-off iterations all suffering from the same issue. No matter how hard I tried, I couldn’t get Octo to timeout properly and I couldn’t reduce the slowdown during request spikes.

A theory eventually emerged and it had to do with the way Node’s event loop works. If you don’t know about the event loop, here’s some insight from Node Source:

Node’s “event loop” is central to being able to handle high throughput 
scenarios. It is a magical place filled with unicorns and rainbows, and is the 
reason Node can essentially be “single threaded” while still allowing an 
arbitrary number of operations to be handled in the background.

Not-So Magic Event Loop Blocking (X-Axis: Time in milliseconds)

You can see when all the unicorns and rainbows went to hell and back again as we bounced the service.

With event loop blocking as the biggest culprit on my list, it was just a matter of figuring out why it was getting so backed up in the first place.

Most developers have heard about Node’s non-blocking I/O model; it’s great because it means all requests are handled asynchronously without blocking execution, or incurring any overhead (like with threads and processes), and as the developer you can be blissfully unaware of what’s happening in the backend. However, it’s always important to keep in mind that Node is single-threaded, which means none of your code runs in parallel. I/O may not block the server but your code certainly does. If I call sleep for 5 seconds, my server will be unresponsive during that time.

Visualizing the Event Loop — StrongLoop

And the non-blocking code? As requests are processed and events are triggered, messages are queued along with their respective callback functions. To explain further, here’s an excerpt from a particularly insightful blog post from Carbon Five:

In a loop, the queue is polled for the next message (each poll referred to as a “tick”) and when a message is encountered, the callback for that message is executed. The calling of this callback function serves as the initial frame in the call stack, and due to JavaScript being single-threaded, further message polling and processing is halted pending the return of all calls on the stack. Subsequent (synchronous) function calls add new call frames to the stack…

Our Node service may have handled incoming requests like a champ if all it needed to do was return immediately available data. But instead it was waiting on a ton of nested callbacks all dependent on responses from S3 (which can be god awful slow at times). Consequently, when any request timeouts happened, the event and its associated callback were put on an already overloaded message queue. While the timeout event might occur at 1 second, the callback wasn’t getting processed until all other messages currently on the queue, and their corresponding callback code, were finished executing (potentially seconds later). I can only imagine the state of our stack during the request spikes. In fact, I didn’t need to imagine it. A little bit of CPU profiling gave us a pretty vivid picture. Sorry for all the scrolling.

The flames of failure

As a quick intro to flame graphs, the y axis represents the number of frames on the stack, where each function is the parent of the function above it. The x axis has to do with the sample population more so than the passage of time. It’s the width of the boxes which show the total time on-CPU; greater width may indicate slower functions or it may simply mean that the function is called more often. You can see in Octo’s flame graph the huge spikes in our stack depth. More detailed info on profiling and flame graphs can be found here.

In light of these realizations, it was time to entertain the idea that maybe Node.js wasn’t the perfect candidate for the job. My CTO and I sat down and had a chat about our options. We certainly didn’t want to continue bouncing Octo every other week and we were both very interested in a promising case study that had cropped up on the internet:

If the title wasn’t tantalizing enough, the topic was on creating a service for making PUT requests to S3 (wow, other people have these problems too?). It wasn’t the first time we had talked about using Golang somewhere in our stack and now we had a perfect test subject.

Two weeks later, after my initial crash course introduction to Golang, we had a brand new Octo service up and running. I modeled it closely after the inspiring solution outlined in Malwarebytes’ Golang article; the service has a worker pool and a delegator which passes off incoming jobs to idle workers. Each worker runs on its own goroutine, and returns to the pool once the job is done. Simple and effective. The immediate results were pretty spectacular.
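The post doesn’t include Octo’s source, but the worker-pool-and-delegator shape it describes can be sketched in a few lines of Go. Everything here is an assumption standing in for the real service: the string jobs, the `runPool` name, and the "fetched" stub where the real workers would GET keys from S3.

```go
package main

import (
	"fmt"
	"sync"
)

// runPool starts a fixed pool of goroutine workers that drain a job
// channel; each idle worker picks up the next job until the channel closes.
func runPool(keys []string, workers int) []string {
	jobs := make(chan string)
	results := make(chan string, len(keys))
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for k := range jobs {
				// the real worker would perform an S3 GET here
				results <- "fetched " + k
			}
		}()
	}
	for _, k := range keys { // the delegator hands out jobs
		jobs <- k
	}
	close(jobs)
	wg.Wait()
	close(results)
	var out []string
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	fmt.Println(runPool([]string{"key1", "key2", "key3"}, 2))
}
```

Because the jobs channel is unbuffered, the delegator naturally blocks until a worker is idle, which is the back-pressure behaviour the Node version lacked.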

A nice simmer

Our average response time from the service was almost cut in half, our timeouts (in the scenario that S3 was slow to respond) were happening on time, and our traffic spikes had minimal effects on the service.

Blue = Node.js Octo | Green = Golang Octo

With our Golang upgrade, we are easily able to handle 200 requests per minute and 1.5 million S3 item fetches per day. And those 4 load-balanced instances we were running Octo on initially? We’re now doing it with 2.

Since our transition to Golang we haven’t looked back. While the majority of our stack is (and probably will always be) in Python, we’ve begun the process of modularizing our code base and spinning up microservices to handle specific roles in our system. Alongside Octo, we now have 3 other Golang services in production which power our realtime message system and serve up important metadata for our content. We’re also very proud of the newest addition to our Golang codebase, DiggBot.

This is not to say that Golang is a silver bullet for all our problems. We’re careful to consider the needs of each of our services. As a company, we make the effort to stay on top of new and emerging technologies and to always ask ourselves, can we be doing this better? It’s a constantly evolving process and one that takes careful research and planning.

I’m proud to say that this story has a happy ending as our Octo service has been up and running for a couple months with great success (a few bug fixes aside). For now, Digg is going the way of the Gopher.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/YkpnmrxfCZk/the-way-of-the-gopher-6693db15ae1f

Original article

Auto-Scaling and Self-Defensive Services in Golang

The Raygun service is made up of many moving parts, each specialized for a particular task. One of these processes is written in Golang and is responsible for desymbolicating iOS crash reports. You don’t need to know what that means, but in short, it takes native iOS crash reports, looks up the relevant dSYM files, and processes them together to produce human readable stack traces.

How I implemented an auto-scaling and self-defensive service in Golang

The operation of the dsym-worker is simple. It receives jobs via a single consumer attached to a Redis queue. It grabs a job from the queue, performs the job, acknowledges the queue and repeats. We have a single dsym-worker running on a single machine, which has usually been enough to process the job load at a reasonable rate. There are a few things that can and have happened with this simple setup which require on-call maintenance:

  • Increased load. Every now and then, usually during the weekend when perhaps people use their iOS devices more, the number of iOS crash reports coming in could be too much for a single job to be processed at a time. If this happens, more dsym-worker processes need to be manually started to handle the load. Each process that is started attaches a consumer to the job queue. The Golang Redis queue library it uses then distributes jobs to each consumer so that multiple jobs can be done at the same time.
  • Unresponsiveness. That is to say that the process is still running, but isn’t doing any work. In general, this can occur due to infinite loops or deadlocks. This is particularly bad as our process monitor sees that it is still running, and so it is only when the queue reaches a threshold that alerts are raised. If this happens, the process needs to be manually killed, and a new one started. (Or perhaps many, to catch up on the work load)
  • Termination. The process crashes and shuts down entirely. This has never happened to the dsym-worker, but is always a possibility as the code is updated. If this happens, a monitor alerts that the process has died, and it needs to be manually started up again.

It’s not good needing to deal with these in the middle of the night, and sometimes it isn’t so good for the person responsible for the code either.

These things can and should be automated, and so I set out to do so.


So overall, we need auto-scaling to handle variable/increased amounts of load, the ability to detect and replace unresponsive workers in some way, and the ability to detect and restart dead processes. Time to come up with a plan of attack.

Single worker strategy

My first idea was extremely simple. The Golang Redis queue library we use, as you may expect, has the ability to attach multiple consumers to the queue from within a single process. By attaching multiple consumers, more work can be done at once, which should help with implementing the auto-scaling. Furthermore, if each consumer keeps track of when they last completed a job, they can be regularly checked to see if it has been too long since it has done any work. This could be used to implement simple detection of unresponsive consumers. At the time, I wasn’t focused on the dead worker detection, and so started looking into the feasibility of this plan so far.

It didn’t take long to discover that this strategy was not going to cut it – not in Golang at least. Each consumer is managed in a goroutine within the Golang Redis queue library. If an unresponsive consumer is detected then we need to kill it off, but it turns out that one does not simply kill off a goroutine (oh, I should mention, I’m quite new to Golang). For a goroutine to end, it should generally complete its work, or be told to break out of a loop using a channel or some other mechanism. If a consumer is stuck in an infinite loop though, as far as I can tell, there isn’t a way to command the goroutine to close down. If there is, it’s bound to mean modifying the Golang Redis queue library. This strategy is getting more complicated than it’s worth, so let’s try something else.
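To illustrate the point above: a goroutine can only be stopped cooperatively. A minimal sketch of the channel-based shutdown Go expects (all names here are illustrative, not from the queue library):

```go
package main

import (
	"fmt"
	"time"
)

// worker loops doing units of "work" and only exits when the quit
// channel is closed. A worker stuck in an infinite loop that never
// checks quit simply cannot be stopped from the outside.
func worker(quit <-chan struct{}, done chan<- struct{}) {
	defer close(done)
	for {
		select {
		case <-quit:
			return // told to stop; exit cleanly
		default:
			time.Sleep(time.Millisecond) // one unit of work
		}
	}
}

func main() {
	quit := make(chan struct{})
	done := make(chan struct{})
	go worker(quit, done)
	time.Sleep(10 * time.Millisecond)
	close(quit) // request shutdown
	<-done      // wait for the goroutine to finish on its own
	fmt.Println("worker stopped")
}
```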

Master worker strategy

My next idea was to write a whole new program that spawns and manages several workers. Each worker can still just attach a single consumer to the queue, but more processes running means more work being done at once. Golang certainly has the ability to start up and shut down child processes, so that helps a lot with auto scaling. There are various ways for separate processes to communicate with each other, so the workers can tell the master process when they last completed a job. If the master process sees that a worker hasn’t done any work for too long, then we get both unresponsive and death detection – more on those later.

Hivemind strategy

And another approach that I briefly thought of is more of a hivemind set up. A single worker process could have both the logic to process a single job at a time, as well as spawning and managing other worker processes. If an unresponsive or dead process is detected, one of the other running processes could assume responsibility for starting up a new one. Collectively they could make sure there was always a good number of processes running to handle the load. I did not look into this at all, so have no idea how sane this is. It could be an interesting exercise though.

In the end, I went with the master process approach. The following is how I tackled each challenge.

Auto scaling

The master process starts by spinning up a goroutine that regularly determines the number of processes that should be running to handle the load. This main goroutine then starts or stops worker processes to match this number. The calculation of the desired worker count is very simple. It considers both the current length of the queue, as well as the rate at which the queue count is changing. The longer the queue is, or the faster jobs are being queued, the more workers that should be spawned. Here is a simplified look at the main goroutine:

  procs := make(map[int]*Worker)
  for {
    // Check the health of each worker
    queueLength, rate := queueStats()
    // Calculate desired worker count, then start/stop workers to match
    desiredWorkerCount := calculateDesiredWorkerCount(queueLength, rate, len(procs))
    if len(procs) != desiredWorkerCount {
      manageWorkers(&procs, desiredWorkerCount)
    }
    time.Sleep(30 * time.Second)
  }
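The post never shows calculateDesiredWorkerCount itself, so here is a hypothetical sketch consistent with the description above: more workers for a longer queue, extra headroom when the queue is growing. The one-worker-per-100-jobs ratio is an invented assumption.

```go
package main

import "fmt"

// calculateDesiredWorkerCount scales the pool with queue length and
// adds headroom while the queue is growing. currentCount is accepted
// to match the call site, though this simple version ignores it.
func calculateDesiredWorkerCount(queueLength int, rate float64, currentCount int) int {
	desired := 1 + queueLength/100 // assumed ratio: one extra worker per 100 queued jobs
	if rate > 0 {
		desired++ // queue is growing: spawn ahead of demand
	}
	if desired < 1 {
		desired = 1 // always keep at least one worker alive
	}
	return desired
}

func main() {
	fmt.Println(calculateDesiredWorkerCount(250, 0, 1)) // 3 workers for 250 queued jobs
}
```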

The master process needs to keep track of the child processes that it starts. This is to help with auto-scaling, and to regularly check the health of each child. I found that the easiest way to do this was to use a map with integer keys, and instances of a Worker struct as the values. The length of this process-map is used to determine the next key to use when adding a worker, and also which key to delete when removing a worker.

func manageWorkers(procs *map[int]*Worker, desiredWorkerCount int) {
  currentCount := len(*procs)
  if currentCount < desiredWorkerCount {
    // Add workers:
    for currentCount < desiredWorkerCount {
      StartWorker(procs, currentCount)
      currentCount++
    }
  } else if currentCount > desiredWorkerCount {
    // Remove workers:
    for currentCount > desiredWorkerCount {
      StopWorker(procs, currentCount-1)
      currentCount--
    }
  }
}

Golang provides an os/exec package for high level process management. The master process uses this package to spawn new worker processes. The master and worker processes are deployed to the same folder, so “./dsym-worker” can be used to start the workers up. However, this does not work when running the master process from a Launch Daemon. The first line in the StartWorker function below is how you can get the working directory of the running process. With this we can create the full path of the worker executable to run it up reliably. Once the process is running, we create an object from the Worker struct and store it in the process-map.

func StartWorker(procs *map[int]*Worker, index int) {
  dir, err := filepath.Abs(filepath.Dir(os.Args[0]))
  if err != nil {
    // Handle error
  }
  cmd := exec.Command(dir + "/dsym-worker")
  // Some other stuff gets done here and stored in the Worker object,
  // such as retrieving the standard in and out pipes as explained later
  worker := NewWorker(cmd, index)
  (*procs)[index] = worker
}

Determining the desired worker count for the job load, and then starting/stopping workers to meet that number is all there really is to auto-scaling in this case. I’ll cover how we stop workers further down.

Inter process communication in Golang

As mentioned previously, my simple approach for detecting an unresponsive worker is done by each worker reporting the time at which it last completed a job. If the master process finds that a worker has not done a job for too long, then it considers it unresponsive and replaces it with a new worker. To implement this, the worker processes need to communicate in some way to the master process to relay the current time whenever a job completes. There are many ways that this could be achieved:

  • Read and write to a file
  • Set up a local queue system such as Redis or RabbitMQ
  • Use the Golang rpc package
  • Transfer gobbed data through a local network connection
  • Utilize shared memory
  • Set up named pipes

In our case, all that we are communicating is just timestamps, not important customer data, so most of these are a bit overkill. I went with what I thought was the easiest solution – communicating through the standard out pipe of the worker processes.

After starting up a new process via exec.Command as described previously, the standard out pipe of the process can be obtained through:

stdoutPipe, err := cmd.StdoutPipe()

Once we have the standard out pipe, we can run up a goroutine to concurrently listen to it. Within the goroutine, I’ve gone with using a scanner to read from the pipe as seen here:

scanner := bufio.NewScanner(stdoutPipe)
for scanner.Scan() {
  line := scanner.Text()
  // Process the line here
}

Code after the scanner.Text() call will be executed every time a line of text is written to the standard out pipe from the worker process.

Unresponsiveness detection

Now that inter process communication is in place, we can use it to implement the detection of unresponsive worker processes. I updated our existing worker to print out the current time using the Golang fmt package upon completing a job. This gets picked up by the scanner where we parse the time using the same format it was printed in. The time object is then set to the LastJob field of the relevant Worker object that we keep track of.

t, err := time.Parse(time.RFC3339, line)
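Both halves of that timestamp handshake can be sketched together. The function names below are assumptions; the RFC3339 format matches the parse call shown above.

```go
package main

import (
	"fmt"
	"time"
)

// heartbeat is the worker side: the line printed to stdout after
// each completed job.
func heartbeat() string {
	return time.Now().UTC().Format(time.RFC3339)
}

// parseHeartbeat is the master side: run on each line read from the
// worker's stdout pipe. ok is false for lines that aren't timestamps.
func parseHeartbeat(line string) (t time.Time, ok bool) {
	t, err := time.Parse(time.RFC3339, line)
	return t, err == nil
}

func main() {
	t, ok := parseHeartbeat(heartbeat())
	fmt.Println(t, ok)
}
```

Printing and parsing in the same fixed format keeps the protocol trivial: any line that fails to parse can simply be ignored (or treated as a control message like "stop", as described later).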

Back in the main goroutine that is regularly iterating the process-map, we can now compare the current time with the LastJob time of each worker. If this is too long, we kill off the process and start a new one.

func CheckWorker(procs *map[int]*Worker, worker *Worker, index int) {
  // Replace the given worker if it hasn’t done any work for too long
  duration := time.Now().Sub(worker.LastJob)
  if duration.Minutes() > 4 {
    KillWorker(procs, index)
    StartWorker(procs, index)
  }
}

Killing a process can be done by calling the Kill function of the Process object. This is provided by the Command object we got when we spawned the process. Another thing we need to do is delete the Worker object from the process-map.

func KillWorker(procs *map[int]*Worker, index int) {
  worker := (*procs)[index]
  if worker != nil {
    process := worker.Command.Process
    delete(*procs, index) // Remove from process-map
    err := process.Kill() // Kill process
    if err != nil {
      // Handle error
    }
  }
}

After killing off the misbehaving worker, a new one can be started by calling the StartWorker function listed further above. The new worker gets referenced in the process-map with the same key as the worker that was just killed – thus completing the worker replacement logic.

Termination detection

Technically, the detection and resolution of unresponsive processes also covers the case of processes that terminate unexpectedly as well. The dead processes won’t be able to report that they are doing jobs, and so eventually they’ll be considered unresponsive and be replaced. It would be nice though if we could detect this earlier.

Attempt 1

When we start a process, we get the pid assigned to it. The Golang os package has a function called FindProcess which returns a Process object for a given pid. So that’s great, if we just regularly check if FindProcess returns a process for each pid we keep track of, then we’ll know if a particular worker has died right? No, FindProcess will always return something even if no process exists for a given pid… Let’s try something else.

Attempt 2

Using a terminal, if you type “kill -s 0 {pid}”, then a signal will not be sent to the process, but error checking will still be performed. If there is no process for the given pid, then an error will occur. This can be implemented with Golang, so I tried it out. Unfortunately running this in Golang does not produce any error. Similarly, sending a signal of 0 to a non-existent process also doesn’t indicate the existence of a process.

Final solution

Fortunately, we have actually already written a mechanism that will allow us to detect dead processes. Remember the scanner being used to listen to the worker processes? Well, the standard out pipe object that the scanner is listening to is a type called ReadCloser. As the name suggests, it can be closed, which happens to occur if the worker process at the other end stops in any way. If the pipe closes, the scanner stops listening and breaks out of the scanner loop. So right there after the scanner loop we have a point in code where we know a worker process has stopped.

All we need to do now is determine if the worker shut down as a result of the normal operation of the master process (e.g. killing unresponsive workers, or down-scaling for decreased load), or if it terminated unexpectedly. Before the master process kills/stops a worker for any reason, it deletes it from the process-map. So, if the worker process is still in the process-map after the scanner stops, then it has not shut down at the hands of the master process. If that is the case, start up a new one to take its place.

if (*procs)[worker.Index] == worker {
  StartWorker(procs, worker.Index)
}

The functionality of this can easily be tested by using the kill command in a terminal to snipe a worker process. The Mac Activity Monitor will show that a new one replaces it almost instantly.

Graceful shut down

When I first prototyped the auto-scaling behaviour with Golang, I was calling the KillWorker function listed further above to kill off processes when not so many were needed. If a worker is currently processing a job when it is killed off in some way, what happens to the job? Well, until the job has been completed, it sits safely in an unacked queue in Redis for a particular worker. Only when the worker acknowledges the queue that the job has been completed will it disappear. The master process regularly checks for dead Redis connections, and moves any unacked jobs for them back to the ready queue. This is all managed by the Golang Redis queue library we’re using.

This means that when a worker process terminates unexpectedly, no jobs are lost. It also means that killing off processes manually works totally fine. However, it feels kinda rude, and means processing those jobs is delayed. A better solution is to implement graceful shut down – that is to allow the worker process to finish the job they are currently processing, and then naturally exit.

Step 1 – master process tells worker to stop

To start off, we need a way for the master process to tell a particular worker process to begin graceful shut down. I’ve read that a common way of doing this is to send an OS signal such as ‘interrupt’ to the worker process, and then have the worker handle those signals to perform graceful shut down. For now though, I preferred to leave the OS signals to their default behaviours, and instead have the master process send “stop” through the standard in pipe of a worker process.

func StopWorker(procs *map[int]*Worker, index int) {
  worker := (*procs)[index]
  // The standard in pipe was obtained and stored here when the worker was first started
  stdinPipe := worker.StdinPipe
  _, err := (*stdinPipe).Write([]byte("stop\n"))
  if err != nil {
    // Handle error
  }
}

Step 2 – worker begins graceful shut down

When a worker process receives a “stop” message, it uses the Golang Redis queue library to stop consuming, and sets a boolean field to indicate that it’s ready for graceful shut down. Another boolean field is used to keep track of whether or not a job is currently in progress. The program is kept alive as long as one of these boolean fields is true. If they are both false, then it means it has no jobs to process, is marked for graceful shut down and so the program is allowed to terminate naturally.

func scan(consumer *Consumer) {
  reader := bufio.NewReader(os.Stdin)
  for {
    text, _ := reader.ReadString('\n')
    if strings.Contains(text, "stop") {
      stop(consumer)
    }
  }
}

Step 3 – worker tells master that it’s done

In the master process, we need to stop keeping track of any shut down workers by deleting them from the process-map. We could do this after sending the worker a “stop” message, but what would happen if the last job happened to cause the worker to get stuck in an unexpected infinite loop? To clean this up better, when a worker process has finished its last job and is able to shut down, it prints out a “stop” message. Just like the timestamps, this message gets picked up in the scanner we set up previously. When the master sees this message, it’s fine to stop keeping track of that worker, so delete it from the process-map.

// In the worker process:
func stop(consumer *Consumer) {
  consumer.running = false
  if !consumer.doingJob {
    // No job in progress; tell the master, then terminate naturally.
    // (If a job is in progress, the job-completion path prints this later.)
    fmt.Println("stop")
  }
}

// In the scanner loop of the master process:
if strings.Contains(line, "stop") {
  delete(*procs, worker.Index)
}

Who watches the watcher?

At this point, the dsym-worker can now auto-scale to handle increased load, and has self-defensive mechanisms against unexpected termination and unresponsiveness. But what about the master process itself? It is a lot simpler than the worker processes, but is still at risk of crashing, especially as this is my first attempt at this kind of process set up. If it goes down, we’re right back to where we started with on-call goodness. May as well set up a mechanism to automate restarting the master process too.

There are a few ways to make sure that the master process is restarted upon failure. One would be to use the “KeepAlive” option in a Launch Daemon config. Another would be to write a script that checks for the existence of the process and starts it if it is not found; such a script could be run every five minutes or so from a cron job.

What I ended up doing was to create yet another Golang program which initially starts the master process, and then restarts it if termination is detected. This is achieved using the same technique that the master process uses. Overall it is very small and simple, with nothing that I know of to go wrong, and so far it’s holding up well. It would also help if I actually fixed any issues that could cause the master process to crash… but I digress.

Orphaned workers

If the master process is shut down via an interrupt signal, the shutdown is handled in a way that all the child processes get shut down too. This is really handy for deploying a new version, as only the top-level process needs to be told to shut down. If the master process outright crashes, though, it’s a different story: all the worker processes hang around and keep processing work, but with nothing to supervise them. When a new master process is started, more workers are spawned, which could cause a problem if this keeps happening without any clean-up.

This is an important situation to handle, so here is my simple solution. The os package in Golang provides a function called Getppid(). It takes no parameters, so you can call it at any time to get the pid of the parent process. If the parent dies, the child is orphaned and the function will return 1 – the pid of init. So within a worker process, we can easily detect if it becomes orphaned: when a worker first starts, it obtains and remembers the pid of its initial parent, then regularly calls Getppid and compares the result to that initial parent pid. If the parent pid has changed, the worker has been orphaned, so it commences graceful shut down.

ppid := os.Getppid()
if ppid != initialParentId {
  stop(consumer) // Begin graceful shut down
}

Finishing words

And that’s about it. I hope you found this look into how I implemented an auto-scaling and self-defensive service in Golang at least a little bit interesting and not overly ridiculous. So far it’s all working really well, and I have some ideas to make it even more robust. If you know of any existing techniques, processes or libraries that I could have used instead, then I’d love to hear them in the comments below.

All the processes that make up our dsym-worker of course use Raygun to report any errors or panics that occur. This has made tracking down and fixing issues a breeze. Sign up for a free trial of Raygun if you also need error reporting in your services or applications.

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/4oF_IPwmlSE/

Original article
