ISSE: An Interactive Sound Source Separation Editor

In applications such as audio denoising, music transcription, music remixing, and audio-based forensics, it is desirable to decompose a single- or stereo-channel recording into its respective sources. To perform such tasks, we present ISSE – an interactive source separation editor (pronounced “ice”).

ISSE is an open-source, freely available, cross-platform audio editing tool that allows a user to perform source separation by painting on time-frequency visualizations of sound. The software leverages both a new user-interaction paradigm and a machine-learning-based separation algorithm that “learns” from human feedback (e.g., painting annotations) to perform separation. For more information, please see the about and demos sections of the website and the demo video below.

version: 0.2.0 (alpha-release)

What’s New: Stereo processing, interface updates, bug fixes.


Whither Plan 9? History and Motivation

Plan 9 is a
research operating system from
Bell Labs. For several
years it was my primary environment and I still use it
regularly. Despite its conceptual and implementation
simplicity, I’ve found that folks often don’t immediately
understand the system’s fundamentals: hence this series of
posts.

When I was a young programmer back in high school my primary
environment was a workstation of some type running either Unix
or VMS and X11. After a while I migrated to
FreeBSD on commodity
hardware using the same X11 setup I’d built on workstations. But
eventually the complexity of Unix in general started to get to me:
it happened when they added periodic(8)
to FreeBSD in one of the 4.x releases. “Really?” I thought to
myself. “What’s wrong with cron
and making a crontab?” Unix-like systems were evolving in a way
that I didn’t like and I realized it was time for me to find
another home.

And I wasn’t the only one who had ever felt that way. It
turns out that circa 1985 the 1127 research group at Bell Labs,
the same group that developed Unix and C after Bell Labs pulled
out of the
Multics project, came
to the conclusion that they’d taken Unix about as far as they
could as a research vehicle.

They were looking at the computing landscape of the 1980s and
realized that the computing world was fundamentally
changing in several ways.

First, high-bandwidth low-latency local area networks were
becoming ubiquitous.

Second, large time-shared systems were being replaced by
networks of heterogeneous workstations built from commodity
hardware. Relatedly, people were now using machines that had
high-resolution bitmapped graphics displays accompanied by mice
instead of text-only, keyboard-only character terminals.

Third, RISC processors were on the rise and multiprocessor
RISC machines were dramatically outperforming their earlier
uniprocessor CISC ancestors.

Finally, they saw major changes in storage systems: RAID was
gaining traction, tape drives were waning, and optical storage
was looking like it would be a big part of the future. (Of
note, this is one area where they were arguably very, very
wrong. But no one is truly prescient.)

At first, they tried to adapt Unix to this new world, but
they quickly decided this was unworkable. What they wanted was
a Unix built on top of the network; what they found was a
network of small Unix systems, each unique and incompatible with
the rest. Instead of a modern nation state, they had a loose
federation of feudal city states.

It turned out that fundamental design assumptions in their
earlier system made it difficult to gracefully accommodate their
desired changes. For example, the concept of a single
privileged ‘root’ user made it difficult to extend the system to
a network of machines: does having ‘root’ access on one machine
confer it on all machines? Why or why not? Here, an artifact
of a different time was at odds with the new reality.
Similarly, graphics had never been integrated into Unix well:
the system was fundamentally built around the idea of the TTY as
the unit of user interaction and the TTY abstraction permeated
the kernel. Also, the system had been fundamentally designed
assuming a uniprocessor machine; fine-grained locking for
scalability on multiprocessor systems was simply non-existent.
Finally, the filesystem organization made it challenging to
support heterogeneous systems in a coherent manner.

In the end, the amount of work required to bring Unix up to
date was considered not worth the effort. So they decided to
start from scratch and build a new system from the ground up:
this system would become Plan 9.

Plan 9 Fundamentals

To a first-order approximation, the idea behind Plan 9 is to
build a Unix-like timesharing system from the network,
rather than a network of loosely connected time-sharing
systems.

To start at the most basic level, a Plan 9 system is a
network of computers that are divided into three classes:

File Servers
These machines are where your data lives: they provide
stable storage to the network.

These are machines with lots of fast secondary storage
(hard disks, RAID arrays, SSDs, or whatever; historically
this meant RAID arrays built from hard disks, since Plan 9
predates SSDs and other commodity-class solid-state
storage devices).

File server machines have decent if not spectacular
processors, moderate amounts of RAM for caching data from
secondary storage, and a very fast network connection.

They have no user interaction capabilities to speak of:
often one would use a serial console for day-to-day system
administration tasks. Historically, the file server machine
ran a special version of the kernel and didn’t even have a
shell! Rather, there was something akin to a monitor built-in
where the system administrator executed commands to configure
the system, add and remove users and other similar tasks.

More recently, the file server was rewritten so that it
runs as a user-level program executing under the control of a
normal kernel. It is often still run on a dedicated machine,
however.

An unusual innovation at the time was the backup mechanism:
this was built into the file server. Periodically, all
modified blocks on the file server would be written off to a
tertiary storage device (historically, a magneto-optical
jukebox, but now a separate archival service that stores data
on a dedicated RAID array). Of note, historically file
service was suspended while the set of modified blocks was
enumerated, a process that could take on the order of minutes.
Now, the file system is essentially marked copy-on-write while
backups are happening with no interruption in service.
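The copy-on-write dump can be sketched with a toy model. Python is used purely as illustration; the class and names are invented, not the file server's actual design:

```python
# Toy sketch of the copy-on-write dump described above: taking a
# snapshot freezes the current block map, and later writes go to
# fresh entries, so the snapshot stays readable with no pause in
# service.

class CowFS:
    def __init__(self):
        self.blocks = {}   # name -> block contents (active filesystem)
        self.dumps = []    # frozen snapshots

    def write(self, name, data):
        self.blocks[name] = data

    def snapshot(self):
        self.dumps.append(dict(self.blocks))  # freeze the current map
        return len(self.dumps) - 1

    def read(self, name, dump=None):
        src = self.blocks if dump is None else self.dumps[dump]
        return src[name]

fs = CowFS()
fs.write("/lib/hello", b"v1")
d = fs.snapshot()
fs.write("/lib/hello", b"v2")          # write after the snapshot
print(fs.read("/lib/hello"))           # b'v2' (active filesystem)
print(fs.read("/lib/hello", dump=d))   # b'v1' (the dump)
```

The key property is that the snapshot shares unmodified data with the active tree and never blocks subsequent writes.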

CPU Servers
Shared compute resources.

These are large multiprocessor machines with lots of
fast CPUs and lots of RAM. They have a very fast network
connection to the file server but rarely have stable storage
of their own (read: they are often diskless, except for
occasionally having locally attached storage for scratch space
to cut down on network traffic).

As with file servers, there is no real user-interaction
hardware attached to the computer itself: the idea is
that you will interact with a CPU server through a Plan 9
terminal (discussed below). Often console access for system
administration was provided through a serial line.

These run a standard Plan 9 kernel, but compiled using a
“cpu” configuration. This mostly affects how resources are
partitioned between user processes and the kernel (e.g.,
buffers reserved by the kernel and the like). The modern file
server typically runs on a CPU server kernel.

Terminals
The machines a user sits in front of and interacts with.

Terminals have mediocre amounts of RAM and CPU power and
middling network interfaces but excellent user-interface
features including a nice keyboard, nice 3-button mouse, and a
nice high resolution bitmapped display with a large monitor.
They are usually diskless.

This is where the user actually interacts with the system:
the terminal is a real computer, capable of running arbitrary
programs locally, subject to RAM and other resource
limitations. In particular, the user runs the window system
program on the terminal as well as programs like text editors,
mail clients, and the usual complement of filesystem traversal
and manipulation commands. Users would often run compilers
and other applications locally as well.

The terminal, however, is not meant to be a
particularly powerful computer. When the user needs more
computational power, she is expected to use a CPU server.

A user initiates a session with a Plan 9 network by booting a
terminal machine. Once the kernel comes up, it prompts the user
for her credentials: a login name and password. These are
verified against an authentication server — a program running
somewhere on the network that has access to a database of
secrets shared with the users. After successful authentication,
the user becomes the “hostowner”, the terminal connects to the
file server, constructs an initial namespace and starts an
interactive shell. That shell typically sources a profile file
that further customizes the namespace and starts the window
system. At this point, the user can interact with the entire
Plan 9 network.

A question that immediately arises from this
description: why write a new kernel for this? Why not just
implement these things as separate user-processes on a mature
Unix kernel?

Over the course of its research lifetime, Unix had acquired a
number of barnacles that were difficult to remove. Assumptions
about the machine environment it was developed on were
fundamental: TTYs were a foundational abstraction. Neither
networking nor graphics had ever really been integrated
gracefully. And finally it was fundamentally oriented towards
uniprocessor CISC machines.

With Plan 9, the opportunity was taken to fix the various
deficiencies listed in the motivation section. In particular,
fine-grained locking was added to protect invariants on kernel
data structures. The TTY abstraction, which was already an
anachronism in the 1970s, was discarded completely: effective
use of the system now required a bitmapped graphical
display and a mouse. The kernel was generally slimmed down and
the vestiges of various experiments that didn’t pan out, or
design decisions that were otherwise obsolete or generally bad,
were removed or replaced.

Device interfaces were rethought and replaced. Networking
and graphics were designed in from the start. The security
model was rethought for this new world.

The result was a significantly more modern and portable
kernel that could target far more hardware than Research Unix
could. Unburdened by the legacy of the past, the system could
evolve more cleanly in the new computing environment.
Ultimately, the same kernel would target MIPS, SPARC, Alpha, x86
and x86_64, ARM, MC68k, PowerPC and i960: all without a
single #ifdef.

The userspace programs that one had come to expect were also
cleaned up. Programs that seemingly made no sense in the new
world were not carried forward: things dealing with the TTY, for
example, were left behind. The window system was rewritten from
scratch to take advantage of the network, various warts on
programs were removed and things were generally polished. New
editors were written or polished for the new system, and the new
Unicode standard for internationalization was embraced through
the freshly-designed UTF-8 encoding, which was introduced to the
world through Plan 9.
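A quick illustration of why UTF-8 fit so well: ASCII text is unchanged byte-for-byte, and other code points simply take more bytes (Python is used here only for demonstration):

```python
# UTF-8, introduced to the world via Plan 9: ASCII stays one byte
# per character, larger code points take more bytes, and any plain
# ASCII file is already valid UTF-8.

print("a".encode("utf-8"))    # b'a' (one byte, identical to ASCII)
print("λ".encode("utf-8"))    # b'\xce\xbb' (U+03BB, two bytes)
print("雪".encode("utf-8"))   # three bytes

# ASCII compatibility: the two encodings agree on ASCII text.
assert "plan 9".encode("ascii") == "plan 9".encode("utf-8")
```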

On the development front, a new compiler/assembler/linker
suite was written which made cross-compilation trivial and made
development of a single system across heterogeneous hardware
vastly easier (dramatically increasing system portability), and
some experimental features were added to the C programming
language to support Plan 9 development. The standard libraries were
rethought and rewritten with a new formatted-printing library,
standard functions, system calls, etc. Threads were facilitated
through the introduction of an rfork primitive that
could create new processes that shared address spaces (but not
stacks).

But what about root?

Plan 9 circumvents the “what about root?” question by simply
doing away with the concept: there is no super-user. Instead,
an ordinary user is designated as the “hostowner” of any
particular computer. This user “owns” the hardware resources of
the machine but is otherwise subject to the normal permission
scheme users are familiar with from Unix: user, group
and other permissions for read, write and execute.

All machines have hostowners: for terminals this is whoever
logged into the machine when the terminal booted. For CPU and
file servers, these are configured by the system administrator
and stored in some sort of non-volatile memory on the computer
itself (e.g., NVRAM).

On CPU servers, the hostowner can create processes and change
their owner to some other user. This allows a CPU server to
support multiple users simultaneously. But the hostowner cannot
bypass filesystem permissions to inspect a user’s read-protected
files.

This raises the question: if there is no super-user, how are
resources put into places where the user expects them, and how
does the user communicate with the system? The answer is
per-process, mutable namespaces.

Namespaces and resource sharing

One of the, if not the, greatest advances of Plan 9 was an
aggressive adaptation and generalization of the Unix “everything
is a file” philosophy. On Unix “everything” is a file — a named
stream of bytes — except when it’s not: for instance sockets
kinda-sorta look like files but they live in a separate
namespace than other file-like objects (which have familiar
names, like /dev/console or /etc/motd). One does not manipulate
them using the “standard” system calls like open(2), read(2),
write(2), etc. One cannot use standard filesystem tools like ls(1)
on sockets since they aren’t visible in the file namespace
(okay, you kinda-sorta can with Unix domain sockets, but even
then there are pretty serious limitations). Or consider the
ioctl(2) system call: this is basically a hook for manipulating
devices in some way; the device itself may be represented by a
device node in /dev, but controlling that device uses this weird
in-band mechanism; it’s a hack.

But on Plan 9, everything looks like a file. Or
more precisely everything is a filesystem and there is a single
protocol (called 9P) for interacting with those
filesystems. Most devices are implemented as a small tree of
files including data files for getting access to the
data associated with a device as well as a ctl (nee
“control”) file for controlling the device, setting its
characteristics and so forth. ioctl(2) is gone.

Consider interacting with a UART controlling a serial port.
The UART driver provides a tree that contains a data file for
sending and receiving data over the serial port, as in Unix, but
also a control file. Suppose one wants to set the line rate on
a serial port, one does so by echoing a string into
the control file. Similarly, one can put an ethernet interface
into full-duplex mode via the same mechanism. Generalizing the
mechanism so that reading and writing a text file applies to
device control obsoletes ioctl(2) and other similar
mechanisms: the TCP/IP stack is a filesystem, so setting options
on a TCP connection can also be done by echoing a command into a
ctl file.
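As a sketch of the pattern: on Plan 9 the first serial port's control file is conventionally /dev/eia0ctl, and writing the string "b115200" to it sets the line rate. Since this sketch is not running on Plan 9, a temporary file stands in for the device's ctl file:

```python
import os
import tempfile

# Plan 9 controls devices by writing short text commands to a
# per-device "ctl" file; no ioctl(2)-style hook is needed.

def write_ctl(ctl_path, command):
    """Send a textual command to a device's ctl file."""
    with open(ctl_path, "w") as f:
        f.write(command)

# On Plan 9 this path would be e.g. /dev/eia0ctl; here a temporary
# file stands in for the device.
ctl = os.path.join(tempfile.mkdtemp(), "ctl")
write_ctl(ctl, "b115200")   # set the line rate to 115200 baud
print(open(ctl).read())     # b115200
```

The same shape of interaction works for the ethernet and TCP/IP ctl files mentioned above: the "API" is just text written to a file.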

Further, the system allows process groups to have independent
namespaces: some process may have a particular set of resources,
represented as filesystems, mounted into its namespace while
another process may have another set of resources mounted into a
different namespace. These can be inherited and changed, and
things can be ‘bound’ into different parts of the namespace
using a “bind” primitive, which is kind of like mounting an
existing subtree onto a new mount point, except that one can
create ‘union’ mounts that share with whatever was already under
that mount point. Further, bindings can be ordered so that one
comes before or after another, a facility used by the shell:
basically, the only thing in $path on Plan 9
is /bin, which is usually a union of all the
various bin directories the user cares about (e.g.,
the system’s architecture-specific bin, the user’s
personal bin, one just for shell scripts, etc).
Note that bind nearly replaces the need for symbolic links; if I
want to create a new name for something, I simply bind it.
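The union-/bin behavior described above can be modeled as ordered lookup over the bound directories. Here plain dicts stand in for directories, and all the paths are illustrative:

```python
# Toy model of a Plan 9 union mount: several directories bound onto
# one mount point, searched in bind order.

def union_lookup(name, bound_dirs):
    """Return the entry from the first bound directory containing name."""
    for d in bound_dirs:
        if name in d:
            return d[name]
    raise FileNotFoundError(name)

# /bin as a union of a personal bin and the architecture's system
# bin; an earlier binding shadows a later one (cf. bind -b).
sys_bin  = {"ls": "/386/bin/ls", "rc": "/386/bin/rc"}
home_bin = {"ls": "/usr/glenda/bin/ls", "hello": "/usr/glenda/bin/hello"}

path = [home_bin, sys_bin]          # personal bin bound before system bin
print(union_lookup("ls", path))     # /usr/glenda/bin/ls (shadows system ls)
print(union_lookup("rc", path))     # /386/bin/rc
```

Because $path is just this one union, the shell never needs a PATH-style search list: the namespace itself encodes the ordering.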

All mounts and binds are handled by something in the kernel
called the “mount driver,” and as long as a program can speak 9P
on a file descriptor, the resources it exposes can be mounted
into a namespace, bound into nearly arbitrary configurations,
and manipulated using the standard complement of commands.

Since 9P is a protocol it can be carried over the network,
allowing one access to remote resources. One mounts the
resource into one’s namespace and binds it where one wishes.
This is how networked graphics are implemented: there’s no need
for a separate protocol like X11, as one simply connects to a
remote machine, imports the “draw” device (the filesystem for
dealing with the graphics hardware) from one’s terminal, binds
that over /dev/draw (and similarly with the
keyboard and mouse, which are of course represented similarly),
and runs a graphical program, which opens /dev/draw
and writes to it to manipulate the display. Further, all of the
authentication and encryption of the network connection is
handled by whatever provides the network connection;
authorization for opening files is handled by controlling access
to the namespace, and the usual Unix-style permissions for
owner, group and world. There’s no need for
MIT-MAGIC-COOKIE-1s or tunneling over SSH or other
such application-level support: you get all of it for free.

Also, since 9P is just a protocol, it is not tied to devices:
any program that can read and write 9P can provide some service.
Again, the window system is implemented as a fileserver:
individual windows provide their own /dev/keyboard, /dev/mouse
and /dev/draw. Note that this implies that the window system can
run itself recursively, which is great if you’re testing a new
version of the window system. As mentioned before, even the
TCP/IP stack is a filesystem.
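To make "speaking 9P on a file descriptor" concrete, here is the byte-level framing of a 9P2000 Tversion message: every message is size[4] type[1] tag[2] followed by type-specific fields, little-endian, with strings encoded as a 2-byte length plus UTF-8 bytes. This is an illustration of the framing, not a working client:

```python
import struct

TVERSION = 100      # message type for version negotiation
NOTAG    = 0xFFFF   # Tversion carries the special "no tag" value

def p9_string(s):
    """9P string: 2-byte little-endian length followed by UTF-8 bytes."""
    b = s.encode("utf-8")
    return struct.pack("<H", len(b)) + b

def tversion(msize, version="9P2000"):
    """Build a Tversion message: size[4] type[1] tag[2] msize[4] version[s]."""
    body = struct.pack("<BH", TVERSION, NOTAG)
    body += struct.pack("<I", msize) + p9_string(version)
    return struct.pack("<I", 4 + len(body)) + body   # size counts itself

msg = tversion(8192)
print(len(msg))   # 19 bytes total for this message
```

Any program that can produce and consume frames like this one can be mounted into a namespace, which is why "everything is a filesystem" scales from kernel devices to user programs.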

Finally since mounts and binds are per-process, both
operations are unprivileged: users can arbitrarily mount and
bind things as they like, subject to the permissions of the
resources themselves. Of course, Plan 9 does rely on some
established conventions and programs might make corresponding
assumptions about the shape of the namespace so it’s
not exactly arbitrary in practice but the mechanism is
inherently flexible.

We can see how this simplifies the system by comparing Plan
9’s console mechanism to /dev/tty under Unix.
Under Plan 9, each process can have its
own /dev/cons (taken from the namespace the process
was started in) for interacting with the “console”: it’s not a
special case requiring explicit handling in the kernel
as /dev/tty is under Unix, it’s simply private to
the namespace. Indeed, under the rio window
system, each window has its
own /dev/cons: these are synthesized by the window
system itself and used to multiplex the /dev/cons
that was in the namespace rio was started in.

Note how this changes the view of the network from the user’s
perspective in contrast to e.g. Unix or VMS: I construct the set
of resources I wish to manipulate and import them into
my namespace: in this sense, they become an extension of my
machine. This is in stark contrast to other systems in which resources
are remotely accessed: I have to carry my use to them. For
example, suppose I want to access the serial port of some
remote computer: perhaps it is connected to some embedded
device I want to manipulate. I do this by importing the serial
port driver, via 9P, from the machine the device is connected
to. I then run some kind of communications program locally, on
my terminal, connecting to the resource as if it were local to
my computer. 9P and the namespace abstraction make this
transparent to me. Under Unix, by contrast, I’d have to login
to the remote machine and run the communications program
there. This is the resource sharing model, as opposed
to the remote access to resources model.

However, I can still have access to remote resources.
Consider CPU servers: to make use of a CPU server’s resources, I
run a command on my terminal called cpu which
connects me to a remote machine. This is superficially similar
to a remote login program such as ssh with the
critical difference that cpu imports my existing
namespace from my terminal, via 9P, and makes it accessible to
me on the remote machine. Everything on the remote machine is done
within the context of the namespace I set up for myself locally
before accessing the CPU server. So when I run a graphical
program, and it opens /dev/draw this is really
the /dev/draw from my terminal. It is imperfect
in that it relies on well-established convention, but in
practice it works brilliantly.

The file server revisited

The file server is worth another look, both as an interesting
artifact in its own right as well as an example of an early
component of the system that did not pan out as envisioned at
the outset of the project.

In the first through third editions of Plan 9 the file server
machine ran a special kernel that had the filesystem built in.
This was a traditional block-based filesystem and the blocks
were durably kept on a magneto-optical WORM jukebox. In fact,
the WORM actually held the filesystem structure; magnetic disk
was a cache for data resident on the WORM and could be discarded
and reinitialized. The WORM was treated as being infinite (not
true of course, but it was regarded so conceptually). Since
changing platters was necessarily slow and magneto-optical
drives weren’t exactly “fast”, there was a disk acting as a
cache of frequently-used blocks as well as a write buffer. RAM
on the file server machine also acted as a read cache for blocks
on the hard disk, giving two layers of caching: generally, the
working set of commonly used user programs and so forth all fit
into the RAM cache. The overview
describing the system stated that something less than one
percent of accesses missed the cache and had to go to the
WORM.

To avoid wasting write-once space and for performance, writes
were buffered on disk and automatically sync’ed to the WORM once
a day: at 5am file service was paused and all blocks modified
since the last dump were enumerated and queued for copy. Once
queued, file service resumed. Those blocks were then written to
newly allocated blocks on some platter(s) by a background
process. The resulting daily “dump” was recorded with a known
name and made accessible as a mountable filesystem (via 9P).
Thus, one could ‘cd’ to a particular dump and see a snapshot of
the filesystem as it existed at that moment in time. This was
interesting since, unlike using tape backups on Unix, if you
lost a file you didn’t need anyone to go read it back for you;
you simply cd’d to where it was and
used cp to copy it back to the active filesystem.
Similarly if you wanted to try building a program with an older
version of a library, you could simply bind the older version
from the dump onto the library’s name and build your program;
the linker would automatically use the older library version
because that’s what was bound to the name it expected in its
namespace. There were some helper commands for looking for a
file in the dump and so forth to make navigating the dump
easier.

A few groups outside of Bell Labs actually had the
magneto-optical jukeboxes, but they were rare. However the file
server could be configured to use a hard disk as a
“pseudo-worm”: that is, the file server could treat a disk or a
disk mirror like a WORM even though it wasn’t truly write-once
at the hardware level. Most sites outside of the labs were
configured to use the pseudo-worm.

In the 4th edition a new associative block storage server
called Venti appeared. Venti isn’t a WORM jukebox; it’s an
associatively-indexed archival storage server. Data is stored
in fixed-sized blocks that are allocated from storage “arenas”:
when a user writes data to a venti server the data is split into
blocks, the SHA-1 signature of each block is calculated, a block
of backing store is allocated from an arena, the data is written
there, and the mapping between the signature and the (arena,
offset) pair is written into an index. If one wants the
block back one looks up its signature in the index to get the
(arena, offset) pair back and then reads that block
from the arena. Naturally, this means that duplicate data is
stored only once in the venti. However, venti arenas can be
replicated for speed and/or reliability.
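The scheme above can be sketched as a toy content-addressed store. Real Venti manages on-disk arenas and a persistent index; both are reduced here to a list and a dict, purely for illustration:

```python
import hashlib

# Toy venti: blocks are indexed by their SHA-1 score, so identical
# blocks are stored only once. The list stands in for append-only
# arena storage; the dict stands in for the (score -> location) index.

class Venti:
    def __init__(self):
        self.arena = []    # append-only backing store
        self.index = {}    # score -> position in the arena

    def write(self, block):
        score = hashlib.sha1(block).hexdigest()
        if score not in self.index:        # duplicate data stored once
            self.index[score] = len(self.arena)
            self.arena.append(block)
        return score

    def read(self, score):
        return self.arena[self.index[score]]

v = Venti()
s1 = v.write(b"hello, venti")
s2 = v.write(b"hello, venti")   # same content -> same score, no new block
print(s1 == s2, len(v.arena))   # True 1
```

Deduplication falls out of the addressing scheme: a second write of identical data is a no-op beyond the hash lookup.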

Arenas are sized such that they can be written onto some kind
of archival media (my vague recollection is that DVDs may have
been popular at the time), but they are stored on hard disks or
some other kind of random-access media (SSDs are popular now).
Venti, however, is not a file server and does not present itself
as one. Rather, it speaks its own protocol and likely
originated out of the observation that magneto-optical jukeboxes
had never quite taken off the way they had initially expected,
were expensive, slow, big, noisy and power-hungry. Hard disks
were getting so cheap that they were about to pass tape in
storage density versus cost and with RAID they were pretty
reliable.

A filesystem called “fossil” was written that
could be optionally backed by a venti, but it was rather a
different beast than the old file server. In particular, fossil
is just a normal user program that one can run under a normal
Plan 9 kernel (unlike the older file server, which really was a
self-contained program). And unlike the older filesystem which
lived implicitly on the WORM, fossil has to explicitly maintain
state about the associative store in order to be able to
reconstruct the filesystem structure from the venti.
Regardless, it shares many of the traits of the earlier system
and was clearly influenced by it: there is a dump that is
accessed in the exact same way as the older server’s dump
(including the naming scheme) and backups are automatically
integrated in the same way, but using a copy-on-write scheme
instead of suspending service when snapshotting. The
implementation is radically different, however.

Sounds great; so where is it now?

Sadly, Plan 9 has fallen into disuse over the past decade and
the system as a whole has atrophied. For example, it has been
argued that fossil never attained the level of maturity,
reliability, or polish of the older filesystem and that is
largely a fair assessment. I will discuss this more in part 3
of this series.

Plan 9 is still available today, though it is not actively
developed by Bell Labs anymore. The Labs produced four official
Plan 9 editions; the last in 2003, after which they moved to a
rolling release without fixed editions. However, the Plan 9
group at Bell Labs disbanded several years ago. There are
several forks that have arisen to take up some of the slack:

  • 9legacy: This is a
    patch set for Plan 9 from Bell Labs. It contains most of the
    interesting bits of other distributions while retaining the
    flavor of the Bell Labs distribution.
  • Harvey: Harvey is an
    attempt to modernize the Plan 9 codebase. It discards the
    traditional Plan 9 compiler suite in favor of GCC and CLANG.
    It is extremely active and making rapid progress.
  • 9atom: This is a
    distribution maintained by long-time 9fan Eric Quanstrom.
  • 9front: A European fork
    with a distinct culture.
  • Plan 9 from
    Bell Labs
    : The “official” Bell Labs distribution is still
    available, though it has been essentially orphaned.

Further, many of the good ideas in Plan 9 have been brought
into other systems.
The Akaros
operating system has imported not just many of the ideas, but
much of the code as well. Even systems like Linux and FreeBSD
have taken many of the good ideas from Plan 9:
the /proc filesystem on both systems is inspired by
Plan 9, and Linux has implemented a form of per-process
namespaces. FUSE is reminiscent of Plan 9’s userspace filesystem
servers.


Is the DAO going to be DOA?

The DAO is the latest Decentralized Autonomous Organization to make major waves, having raised over $100 million worth of ETH tokens (12% of all ETH). The question remains: is it a good idea to invest in The DAO, and have they learned anything from those who have gone before?

In this article I will talk about the experience gained from BitShares, not to promote BitShares as superior, but to highlight the hard lessons that can be learned from BitShares’ failures and how they might apply to The DAO.


Let me take a moment to share my credentials on the subject of DAOs. The first two words of DAO, Decentralized Autonomous, entered the crypto-currency lexicon after a discussion between my father and me back in 2013. I introduced the concept that a blockchain can be viewed as a DAC. It was because of these articles that Vitalik Buterin, one of the founders of The DAO, started exploring the concepts in a three-part series.

The last word of DAC was changed from Company (or Corporation) to Organization in order to avoid unnecessary legal entanglements, but the concept remains the same.

Over the past three years I have worked with the BitShares community to implement the world’s first Decentralized Autonomous Organization based upon many of the same principles as The DAO. Money was raised, tokens were allocated, and token holders were given the ability to vote on how to spend community money and set blockchain parameters.

The latest incarnation of BitShares gave members joint control over a $6 million reserve fund. Advanced consensus and voting systems were implemented to address voter apathy. Powers were divided. Participatory budgeting was adopted. The stakeholders have the ability to vote for hard forks that implement new features. BitShares even adopted fee-backed assets to help fund specific features. Everything in the blockchain was parameterized and those parameters could be changed by elected committee members. BitShares has been and continues to be one of the most comprehensive examples of a self-governing DAC / DAO.

There is only one major technological difference between The DAO and BitShares: with The DAO, the blockchain’s smart contracts can change without all of the nodes having to upgrade. This difference is relatively trivial and ultimately irrelevant to the potential success or failure of The DAO. The reason this technical difference is irrelevant is that the success or failure of a DAO/DAC depends not on the technology used, but entirely on how a community interacts with the technology and with each other.

Smart Contracts cannot fix Dumb People

BitShares had all of the tools, the talent, and the money to do great things if only the BTS holders could agree on what should be done, who should be paid, and how much should be spent. So what lessons have we learned from BitShares’ experiment and how is The DAO doing anything meaningfully different to address it?

Poor Voter Participation

One of the first things we learned from BitShares is that the vast majority (90%+) of stakeholders did not participate in voting. This is because voting requires time, energy, and skills that most investors lack. How many people have the economic, technical, and entrepreneurial skills to vote responsibly?

In order to boost participation BitShares 2.0 introduced proxy voting which centralized decision making into about a dozen elected proxies. Even with proxy voting, most people ultimately chose their preferred proxy along party/philosophical lines rather than considering individual proposals.

The DAO currently requires a level of voter participation that is much higher than BitShares has ever seen for a worker proposal. Even with proxies, the highest level of consensus stakeholders have achieved is just a tad over 20%. This means The DAO is expecting much higher levels of participation from voters without using proxies. Unless the majority of DAO stake is held in a few active hands, this will be very hard to achieve.

Perhaps even more interesting: with The DAO, once you vote for something you are no longer allowed to split your ETH out and form a new DAO. This means you have much to lose by voting and much to gain by not voting. With an initial quorum of 20%, it will be very challenging to get enough agreement, especially given the downsides associated with actually voting.

Anti-Spending Movement

It didn’t take long for the BitShares community to realize that funding projects today would cause a short-term fall in the value of BitShares. Unable to bear the short-term paper loss and the psychological impact of a lower market cap, people started electing proxies that would vote against all spending proposals.

With The DAO the same principles are at work. Every time a project is funded, the amount of ETH backing the DAO tokens falls and is replaced with a speculative IOU from a contractor. What is worse, when the ETH is sold to fund a project, the value of all ETH falls. Since The DAO keeps its savings in ETH, the actual cost of funding a proposal includes any loss of value caused by selling ETH.

Considering that many of the investors in The DAO also hold ETH, there is a conflict of interest in their voting preferences. Most individuals will see the short-term cost (loss of liquidity) of authorizing spending as much higher than the long-term benefit. After all, authorizing a $1 million project will cause The DAO to lose 1% of its capital today, and would likely move the Ethereum price down by more than 1% as everyone attempts to front-run the sell pressure created by the project. In the long run the project may add value to Ethereum and The DAO, but the long run is often years away.

Smart speculators know they can make the most money by not tying up their capital during the no-growth phase. They will sell today and buy back in closer to the completion of the project.

Not everyone will agree on the value an approved project will bring to The DAO. So while those who vote to approve it see $1 million being invested to create $10 million of value, many more will see that $1 million being wasted with no chance of return. Who is right? Well, the odds favor waste: 9 in 10 startups fail.

Fortunately, The DAO allows non-voters to reclaim their ETH by splitting their funds out.

Death by 1000 Splits

The DAO has tentatively raised $100 million worth of ETH, but so far the investors have taken no real risk. Every single person who has purchased DAO tokens has the ability to reclaim their ETH so long as they never vote. The end result is a massive marketing campaign that totally misrepresents what has been invested and what hasn’t. Considering that there is no real risk being taken beyond the risk of holding ETH, and that there is the potential for a large gain, it is no wonder so many people have participated.

So what happens next? Everyone seeking a zero risk return will abstain from voting. If greater than 80% fall in this category, then nothing will pass. There is a very real possibility that this will happen.

Will there be any proposals in the first place? To prevent proposal spam, all new proposals must make a deposit that is forfeited if a quorum (20%) is not reached. In that case there will be no proposals unless the proposers are already certain they will win. Being certain will require conducting non-binding polls outside The DAO. What happens if people vote in non-binding polls but then refuse to vote for the actual proposal? Free profits for The DAO when the deposit is forfeited. This may or may not be an issue, depending upon whether the required deposit is small enough to risk losing; anything less than $100 is probably OK.

Once the first project gets approved, a new moral hazard is created. Let’s assume it is approved with the minimum 20%. The DAO will receive reward shares in the funded project, and those shares will be divided equally among all participants. The non-voters will get the rewards and can then split their funds. The voters, on the other hand, will be unable to split. They take 100% of the risk and get only 20% of the reward, whereas the non-voters get 80% of the reward and minimal risk.
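The asymmetry is easy to quantify. Here is a toy calculation with hypothetical numbers (100 total tokens, the minimum 20% quorum voting, and a reward paid pro rata to all holders):

```python
# Toy model of the moral hazard above: a proposal passes with the
# minimum 20% quorum, and project rewards are paid pro rata to ALL
# token holders, voters and non-voters alike.
total_tokens = 100.0
voting_tokens = 20.0   # only the minimum quorum actually votes
reward = 10.0          # hypothetical reward paid back to The DAO

per_token = reward / total_tokens
voters_share = voting_tokens * per_token                        # 2.0
non_voters_share = (total_tokens - voting_tokens) * per_token   # 8.0

# Voters are locked in (they cannot split), so they carry the project
# risk yet collect only 20% of the reward; non-voters collect 80% and
# keep the option to split their ETH out at any time.
```

Under these assumptions the locked-in voters collect 2 ETH of the reward while the unexposed non-voters collect 8 ETH, which is the incentive problem in a nutshell.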

The DAO is complicated and I admit that I am not sure I fully understand how, when, and where ETH can be split relative to payouts and rewards. It may well be that non-voters end up funding 80% of the proposal and can only reclaim a fraction of their original investment after the proposal is funded.

Regardless of which way it is actually implemented, there are more benefits to be gained by not voting and splitting your ETH out of The DAO than by voting and keeping your ETH in The DAO. Liquidity is valuable.

Ignore the Technology

Fancy technology can obscure our assessment of what is really going on. The DAO solves a single problem: the corrupt trustee or administrator. It replaces voluntary compliance with a corporation’s charter under threat of lawsuit with automated compliance with software-defined rules. This subtle change may be enough to bypass regulatory hurdles facing traditional trustees and administrators, but it doesn’t solve most of the problems the regulations were attempting to address.

What The DAO doesn’t solve is all of the other problems inherent in any joint venture. These are people problems, economic problems, and political problems. In some sense, The DAO creates many new problems through its rigid rules and expensive machine-enforced process for change.

The DAO doesn’t solve the “group trap” whereby losers subsidize winners. It disempowers the individual actor and forces him to submit to group decision making. It doesn’t make raising money cheaper for companies; it just adds blockchain-enforced bureaucratic and political processes.

Ask yourself if you would still invest in The DAO if its rules were written into the charter of a traditional VC firm. Ask yourself if it would not be simpler to keep your ETH and simply vote with your investment dollars for individual blockchain IPOs whose rules are enforced by the blockchain. Now ask yourself: what value is The DAO providing to your capital in exchange for all of the added restrictions it places on your capital?

A traditional VC firm is run by experts who study potential investments in depth and get paid proportional to their success. People give money to a VC firm because they trust the management of the firm and accept reduced profits because the VC firm is adding value. The DAO is just a committee of non-professional voters who have relatively little ability to do proper due diligence.


My opinion is that The DAO will be DOA (dead on arrival). The theory of jointly deciding to fund efforts will face the reality of individual self-interest, politics, and economics. There will be rapid defecting (splitting) as people realize there is little to be gained by banding together under the structure of The DAO and much to be lost.

It might not happen at first, but over time the Ethereum community will learn the hard way what the BitShares community has already discovered. Creating social systems to jointly fund the development of projects and investments is challenging. Ultimately, technology can only aid in communication; it cannot fix the fundamental incompatibility between individual self-interest and community decision making.

Original URL:  

Original article

OpenAI Team Update

We’d like to welcome the latest set of team members to OpenAI (and we’re still hiring!):


  • Marcin Andrychowicz. Marcin has won three gold medals at the IOI and has been a top participant in programming competitions such as TopCoder and ACM-ICPC. He’s been in deep learning for a year and has already made strong progress on neural memory architectures.

  • Rafał Józefowicz. Rafał began his career in competitive programming and the finance industry. He’s now been in deep learning for a year and a half, and his results include a state-of-the-art language model.

  • Kate Miltenberger. Kate has a versatile background, with experience across operations, office administration, user research, community, and support. She previously helped run smoothly.

  • Ludwig Pettersson. Ludwig was previously Stripe’s Creative Director, where he built and led the design team.

  • Jonas Schneider. Jonas did much of the engineering heavy lifting on OpenAI Gym. A recent college graduate, he was previously an intern at Stripe, where he helped build Stripe CTF3.

  • Jie Tang. Jie was an engineer at Dropbox for almost five years, where he led the team responsible for the core file sync technology running on hundreds of millions of desktops. Prior to that he worked in Pieter Abbeel’s robotics lab at Berkeley, working on autonomous helicopters, RGBD perception, and Starcraft bots.


  • Prafulla Dhariwal. Prafulla was a gold medalist in the IMO, IPhO, and IAO. He’s currently an undergraduate at MIT, performing research on learning invariant representations for speech and vision tasks.

  • Paul Christiano. Paul is a PhD student at Berkeley who has written extensively about AI safety. He received best paper and best student paper awards at STOC for research on optimization and online learning.

Original URL:  

Original article

XML Sitemap – Moderately Critical – XSS – SA-CONTRIB-2016-030


The XML Sitemap module enables you to create sitemaps which help search engines to more intelligently crawl a website and keep their results up to date.

The module doesn’t sufficiently filter the URL when it is displayed in the sitemap.

This vulnerability is mitigated if the setting for “Include a stylesheet in the sitemaps for humans.” on the module’s administration settings page is not enabled (the default is enabled).

CVE identifier(s) issued

  • A CVE identifier will be requested, and added upon issuance, in accordance with Drupal Security Team processes.

Versions affected

  • XML Sitemap 7.x-2.x versions prior to 7.x-2.3.

Drupal core is not affected. If you do not use the contributed XML Sitemap module, there is nothing you need to do.


Install the latest version:

Also see the XML Sitemap project page.

Reported by

Fixed by

Coordinated by

Contact and More Information

The Drupal security team can be reached at security at or via the contact form at

Learn more about the Drupal Security team and their policies, writing secure code for Drupal, and securing your site.

Follow the Drupal Security Team on Twitter at

Drupal version: 

Original URL:  

Original article

Open Sourcing Twitter Heron

Last year we announced the introduction of our new distributed stream computation system, Heron. Today we are excited to announce that we are open sourcing Heron under the permissive Apache v2.0 license. Heron is a proven, production-ready, real-time stream processing engine, which has been powering all of Twitter’s real-time analytics for over two years. Prior to Heron, we used Apache Storm, which we open sourced in 2011. Heron features a wide array of architectural improvements and is backward compatible with the Storm ecosystem for seamless adoption.

Everything that happens in the world happens on Twitter. That generates a huge volume of information in the form of billions of live Tweets and engagements. We need to process this constant stream of data in real-time to capture trending topics and conversations and provide better relevance to our users. This requires a streaming system that continuously examines the data in motion and computes analytics in real-time.

Heron is a streaming system that was born out of the challenges we faced due to increases in volume and diversity of data being processed, as well as the number of use cases for real-time analytics. We needed a system that scaled better, was easier to debug, had better performance, was easier to deploy and manage, and worked in a shared multi-tenant cluster environment.

To address these requirements, we weighed the options of extending Storm, switching to another platform, or developing a new system. Extending Storm would have required an extensive redesign and rewrite of its core components. The next option we considered was using an existing open-source solution. However, there were a number of issues with making existing open systems work in their current form at our scale. In addition, these systems are not compatible with Storm’s API. Rewriting the existing topologies against a different API would have been time consuming, requiring our internal customers to go through a very long migration process. Furthermore, different libraries have been developed on top of the Storm API, such as Summingbird. If we changed the underlying API of the streaming platform, we would also have had to rewrite the higher-level components of our stack.

We concluded that our best option was to rewrite the system from the ground-up, reusing and building upon some of the existing components within Twitter.

Enter Heron.

Heron represents a fundamental change in streaming architecture from a thread-based system to a process-based system. It is written in industry-standard languages (Java/C++/Python) for efficiency, maintainability, and easier community adoption. Heron is also designed for deployment in modern cluster environments by integrating with powerful open source schedulers such as Apache Mesos, Apache Aurora, Apache REEF, and Slurm.

One of our primary requirements for Heron was ease of debugging and profiling. Heron addresses this by running each task in a process of its own, resulting in increased developer productivity as developers are able to quickly identify errors, profile tasks, and isolate performance issues.

To process large amounts of data in real-time, we designed Heron for high scale, as topologies can run on several hundred machines. At such a scale, optimal resource utilization is critical. We’ve seen 2-5x better efficiency with Heron, which has saved us significant OPEX and CAPEX costs. This level of efficiency was made possible by both the custom IPC layer and the simplification of the computational components’ architecture.

Running at Twitter-scale is not just about speed, it’s also about ease of deployment and management. Heron is designed as a library to simplify deployment. Furthermore, by integrating with off-the-shelf schedulers, Heron topologies safely run alongside critical services in a shared cluster, thereby simplifying management. Heron has proved to be reliable and easy to support, resulting in an order of magnitude reduction of incidents.

We built Heron on the basis of valuable knowledge garnered from our years of experience running Storm at Twitter. We are open sourcing Heron because we would like to share our insights and knowledge and continue to learn from and collaborate with the real-time streaming community.

Our early partners include Fortune 500 companies, such as Microsoft, and startups who are already using Heron for an expanding set of real-time use cases, including ETL, model enhancement, anomaly/fraud detection, IoT/IoE applications, embedded systems, VR/AR, advertisement bidding, financial, security, and social media.

“Heron enables organizations to deploy a unique real-time solution proven for the scale and reach of Twitter,” says Raghu Ramakrishnan, Chief Technology Officer (CTO) for the Data Group at Microsoft. “In working with Twitter, we are contributing an implementation of Heron that could be deployed on Apache Hadoop clusters running YARN and thereby opening up this technology to the entire big data ecosystem.”

We are currently considering moving Heron to an independent open source foundation; if you want to join this discussion, see this issue on GitHub. To join the Heron community, we recommend getting started at, joining the discussion on Twitter at @heronstreaming, and viewing the source on GitHub.


Large projects like Heron would not have been possible without the help of many people.

Thanks to: Maosong Fu, Vikas R. Kedigehalli, Sailesh Mittal, Bill Graham, Neng Lu, Jingwei Wu, Christopher Kellogg, Andrew Jorgensen, Brian Hatfield, Michael Barry, Zhilan Zweiger, Luc Perkins, Sanjeev Kulkarni, Siddarth Taneja, Nikunj Bhagat, Mengdie Hu, Lawrence Yuan, Zuyu Zhang, and Jignesh Patel, who worked on architecting, developing, and productionizing Heron.

Thanks to the open source and legal teams: Sasa Gargenta, Douglas Hudson, Chris Aniszczyk.

Thanks to early testers who gave us valuable feedback on deployment and documentation.


[1] Twitter Heron: Streaming at Scale, Proceedings of ACM SIGMOD Conference, Melbourne, Australia, June 2015.

[2] Storm@Twitter, Proceedings of ACM SIGMOD Conference, Snowbird, Utah, June 2014.

Original URL:  

Original article

Genius’ Web Annotations Undermined Web Security

New reader BradyDale shares an article on the Verge: Until early May, when The Verge confidentially disclosed the results of my independent security tests, the “web annotator” service provided by the tech startup Genius had been routinely undermining a web browser security mechanism. The web annotator is a tool which essentially republishes web pages in order to let Genius users leave comments on specific passages. In the process of republishing, those annotated pages would be stripped of an optional security feature called the Content Security Policy, which was sometimes provided by the original version of the page. This meant that anyone who viewed a page with annotations enabled was potentially vulnerable to security exploits that would have been blocked by the original site. Though no specific victims have been identified, the potential scope of this bug was broad: it was applied to all Genius users, undermined any site with a Content Security Policy, and re-enabled all blocked JavaScript code. Vijith Assar dives deep into how Genius did this: The primary way Genius annotations are accessed on the web is by adding “” in front of any URL as a prefix. The server reads the original content behind the scenes, adds the annotations, and delivers the hybrid content. The Genius version of the page includes a few extra scripts and highlighted passages, but until recently it also eliminated the original page’s Content Security Policy. The Content Security Policy is an optional set of instructions encoded in the header of the HTTP connection which tells browsers exactly which sites and servers should be considered safe — any code which isn’t from one of those sites can then be ignored.
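The effect of dropping that header is easy to simulate. The sketch below is a toy model, not Genius’ actual code, and the header values are made up:

```python
def strip_csp(headers):
    """Simulate a republishing proxy that forwards a page's response
    headers but drops its Content-Security-Policy."""
    return {k: v for k, v in headers.items()
            if k.lower() != "content-security-policy"}

# Hypothetical response headers from the original site: this CSP tells
# the browser to execute only same-origin scripts.
original = {
    "Content-Type": "text/html",
    "Content-Security-Policy": "script-src 'self'",
}

proxied = strip_csp(original)
assert "Content-Security-Policy" not in proxied
# With no CSP present, the browser falls back to running any script the
# (re-served) page references -- the original site's protection is
# silently gone for everyone reading the annotated copy.
```

Because the policy lives only in the HTTP response, a middleman that rebuilds the response from scratch discards it by default unless it deliberately copies the header through.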


Original URL:  

Original article

How to Create a Local Red Hat Repository

There are many reasons you may want a local Red Hat Enterprise Linux repository. Bandwidth is a major factor, as downloading updates from the Internet can consume significant time and bandwidth. Whatever your reason, this tutorial will walk you through the process of getting your local repository set up.

Original URL:  

Original article

The Meson Build System

Meson is an open source build system meant to be both extremely fast and, even more importantly, as user-friendly as possible.

The main design point of Meson is that every second a developer spends writing or debugging build definitions is a second wasted. So is every second spent waiting for the build system to actually start compiling code.


  • multiplatform support for Linux, OS X, Windows, GCC, Clang, Visual Studio and others
  • supported languages include C, C++, Fortran, Java, Rust
  • build definitions in a very readable and user-friendly non-Turing-complete DSL
  • cross compilation for many operating systems as well as bare metal
  • optimized for extremely fast full and incremental builds without sacrificing correctness
  • built-in multiplatform dependency provider that works together with distro packages
  • fun!
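The readability claim is easy to demonstrate: a complete build definition for a small C program fits in two lines of Meson’s DSL (the project and file names here are illustrative):

```meson
# meson.build -- builds a single executable from main.c
project('demo', 'c')
executable('demo', 'main.c')
```

Placed next to a `main.c`, this is configured and built with `meson builddir` followed by `ninja -C builddir`.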

Original URL:  

Original article
