CockroachDB Skitters into Beta

We introduced Cockroach Labs last June with a simple yet ambitious mission: Make Data Easy.

We’ve spent the intervening months moving CockroachDB from an alpha-stage product to launching CockroachDB beta. In the process, the team has nearly tripled in size and development has accelerated to a blistering pace. We’ve supplemented our original investment, led by Peter Fenton of Benchmark, with an additional round of funding, led by Mike Volpi of Index Ventures. We’re lucky to also count GV (formerly Google Ventures), Sequoia, FirstMark, and Work-Bench as investors.

Why CockroachDB?

It’s bog standard in our industry for the data architecture you start with to remain the one you still have in five years, love it or hate it. In light of this, it makes sense to choose your initial architecture carefully, as doing so can save tremendous resources down the road. Our goal in building CockroachDB was to create an open source database better suited to the fast-evolving challenges that companies will face over the next decade. We believe those needs encompass three crucial capabilities: scalability, survivability, and SQL, all while maintaining strong consistency.


The data underlying most businesses continues to expand faster than traditional databases can keep up with. The challenge goes beyond what’s immediately obvious: data has no trouble expanding to fill available capacity, and most companies are now busy collecting data on their data! With data growing faster than the underlying hardware improves, horizontal scalability is a requirement.


In today’s leveraged business environment, downtime has never been so expensive. A SaaS company with 500 enterprise customers experiencing a five-minute outage is actually causing 2,500 minutes, or nearly two days, of cumulative disruption. A database should survive even datacenter outages, and it should do so without manual intervention and with perfect data fidelity (strong consistency).
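The arithmetic behind that claim is quick to spell out in a shell:

```shell
# Downtime math from the paragraph above: one five-minute outage,
# multiplied across 500 customers.
customers=500
outage_minutes=5
total=$((customers * outage_minutes))   # 2500 customer-minutes
hours=$((total / 60))                   # 41 hours, i.e. nearly two days
echo "${total} customer-minutes (~${hours} hours)"
```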


SQL is the lingua franca of the database world. Or at least it was for about 35 years, until 2004, when Google announced a new database called BigTable which eschewed the old standards in pursuit of simplicity and scalability. BigTable and the many databases that followed in its footsteps never envisioned or agreed on a consistent API for developers. It’s our industry’s own Tower of Babel, with similar results. But providing a common standard is not all SQL is good for, especially as project complexity increases. Developers need transactions and well-defined schemas, and while many database users can write SQL queries, far fewer can write MapReduce jobs. SQL has been getting the job done for 45 years now, it’s widely understood, and it’s a platform in its own right, with a substantial ecosystem of tools and educational resources built around it. For these reasons, we believe it’s the most productive API, and our ongoing challenge is to make it work well enough to justify that claim.

The Road to CockroachDB Beta

In addition to our open source contributors, Cockroach Labs currently has more than 20 full-time employees building CockroachDB. The system has been evolving so quickly that even we find it hard to keep up. The project has rapidly matured in functionality, stability, and performance over the past three quarters.

Our original intent was to deliver the beta by the end of last summer, as a transactional key-value store. We reconsidered after becoming convinced that SQL was a necessary part of CockroachDB’s identity. The consensus was that without it, developers would end up having to build too much of the missing functionality themselves. We also worried about missing the opportunity to precisely define CockroachDB, instead leaving our users with more questions than answers: Will it have SQL eventually? Is it NoSQL? Am I supposed to build my own indexes?

The decision to include SQL in our beta release added two quarters to our timeline. And before you wonder how that’s possible in only two quarters, it’s important to note that we stopped short of supporting joins or of parallelizing the execution of distributed SQL queries. Nevertheless, the stage is set: we are a scalable SQL database. The beta SQL support is a functional and appropriate starting point.

What Does “Beta” Mean?

We’re deliberately announcing a beta for CockroachDB because nothing sharpens focus and directs resources more efficiently than supporting real-world use cases. “Beta” means software with more bugs and potential performance or stability issues. That’s a pretty good description of CockroachDB right now, and this is the expectation we’d like you to start with. However, the key differentiator of CockroachDB beta from our alpha releases is a commitment that future changes will be backwards compatible.

What’s in CockroachDB Beta?

We’re excited to announce that this beta release contains nearly everything found in the original design document, plus many previously unanticipated features, not least of which is the addition of SQL. Crucially, everything necessary to support scalability, survivability, and strong consistency is in place. The system self-organizes without requiring external services, self-heals as nodes are lost or damaged, and automatically rebalances to maintain equilibrium as new nodes are added.

We provide distributed transactions with serializable and snapshot isolation, and we have implemented an online schema change system that allows indexes to be added without downtime or table locking. We’ve also added extensive support for time-series metrics, including op latencies and counts, network and disk I/O, and host memory and CPU usage. We surface this information to operators through our fast-evolving administrative UI.


One of the most notable features of CockroachDB is just how simple it is to deploy. CockroachDB is a single binary which requires only the location of one or more storage devices to manage. Starting a multi-node cluster is as simple as starting the first node on its own and then pointing each additional node at the first node or any other node which has already joined the cluster. There are no external dependencies required. No global configuration, no distributed file system, no bundle of resources or install scripts. There are no config files and we’ve strictly limited the available command line flags to those which are useful and not just tunable knobs for the sake of having knobs. Securing a cluster is similarly straightforward.
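In shell terms, the bootstrap sequence described above looks roughly like the sketch below. The flag spellings here are illustrative rather than authoritative; consult `cockroach help start` for the exact options in your version.

```shell
# Start the first node; it needs nothing but a place to put its data.
cockroach start --insecure --store=/mnt/disk1 --port=26257 &

# Each additional node just points at any node already in the cluster.
cockroach start --insecure --store=/mnt/disk2 --port=26258 --join=localhost:26257 &
cockroach start --insecure --store=/mnt/disk3 --port=26259 --join=localhost:26257 &
```

That really is the whole deployment story: one binary, one storage path per node, and a `--join` address.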

What Comes Next?

For those of us working on The Roach, this is where the fun truly begins. We’re proud to have implemented a design with such promise for building the next generation of products and services. We can’t imagine a future without a sane, scalable, and performant database, and we intend to build it. CockroachDB is meant to work as well for a three-node cluster as for a Silicon Valley data farm, and to provide a straight path between the two.

In the near future we’ll focus on expanding the SQL capabilities to include joins and distributed query execution; we’ll continue to add better production and administration features; and we’ll improve stability and performance. True success will mean CockroachDB joining the ranks of other open source platforms and geek household names like Postgres, MySQL, and Hadoop. We think we’re on to something here and can’t wait for new users to help shape the direction.

Download the CockroachDB beta, deploy a test cluster, and build a test app. The team is committed to supporting new users and debugging issues as they arise, so please don’t hesitate to contact us with questions or concerns. The best way to ask questions in real time is on Gitter or in the CockroachDB User Group. If you’d like to file an issue or feature request, please use our GitHub issues.

Happy Roaching!

Original URL:  

Original article

Algorithms as Microservices

We recently wrote about how the Algorithm Economy and containers have created a fundamental shift in software development. Today, we want to look at the 10 ways algorithms as microservices change the way we build and deploy software.

10 ways the algorithm economy and containers are changing how we build and deploy software today

Peter Sondergaard of Gartner has been the leading voice on the Algorithm Economy and how companies can use algorithms to extract value from their data.

In the words of Peter Sondergaard, Senior Vice President at Gartner: “Data is inherently dumb. Algorithms are where the real value lies.”

Google, Facebook, Amazon, Netflix, and others are using algorithms to create value and impact millions of people a day.

Algorithmic intelligence is at the core of today’s most important companies

The algorithm economy and containers allow developers to run algorithms as microservices, which means code can be written in any programming language and then seamlessly unified behind a single API.

Three fundamental shifts in technology: the Algorithm Economy, containers, and microservices.

The algorithm economy enables a marketplace where easy-to-integrate algorithms can be made available and easily stacked together to manipulate data, extract key insights, and solve problems efficiently. 
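As a loose analogy (ours, not the deck’s), the Unix pipeline shows what “easily stacked” composable algorithms look like in practice: each stage is a small, single-purpose program with a uniform text-stream interface, so stages combine freely.

```shell
# Find the most frequent word by stacking four tiny "algorithms":
# sort groups duplicates, uniq -c counts them, sort -rn ranks them,
# and head takes the top entry.
printf 'b\na\nb\nc\nb\na\n' | sort | uniq -c | sort -rn | head -n 1
```

None of the four programs knows about the others; the shared interface is what makes the composition work, which is exactly the property a single API across algorithm microservices is meant to provide.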

The Algorithm Economy: the next wave of innovation, in which developers can produce, distribute, and commercialize their code.

Containers wrap applications, services, and their dependencies into a lightweight package that runs like a virtual machine. 

Containers: lightweight virtualization that bundles all the application logic, dependencies, libraries, etc. into a single package that runs in the cloud.

Microservices decouple modules from a monolithic codebase, reducing fragility and ensuring each service acts as a smart endpoint.

Microservices: an architecture in which the various functions of an app are unbundled into a series of decentralized modules, each organized around a specific business capability.

When algorithms run as microservices, we ensure code is dependency-free, interoperable, and composable.

Algorithms as containerized microservices ensure interoperability

Code is always live and available to use without ever having to manage or provision servers.

Code is always “on,” and can auto-scale in the cloud without ever having to configure, manage, or maintain servers and infrastructure

By running algorithms as microservices, we also allow companies to focus on their data, while the algorithm economy supplies the algorithms needed.

The algorithm economy allows for the building blocks of algorithmic intelligence to be made accessible, and discoverable through marketplaces and communities

Together, container technology, the algorithm economy, and algorithms packaged as microservices create an environment where rapid prototyping has never been easier, because far less infrastructure is needed to build and deploy apps.

Containerizing algorithms as microservices makes code accessible via an API, and hosted on scalable, serverless infrastructure in the cloud

Liked this? Get our Algorithms as Microservices deck here.


Visual C++ for Linux Development

Today we’re making a new extension available that enables C++ development in Visual Studio for Linux. With this extension you can author C++ code for Linux servers, desktops and devices. You can manage your connections to these machines from within VS. VS will automatically copy and remote build your sources and can launch your application with the debugger. Our project system supports targeting specific architectures, including ARM. Read on for how to get started with our new Linux projects.

Today we only support building remotely on the Linux target machine. We are not limited to specific Linux distros, but we do have dependencies on the presence of some tools. Specifically, we need openssh-server, g++, gdb and gdbserver. Use your favorite package manager to install them, e.g. on Debian-based systems:

sudo apt-get install openssh-server g++ gdb gdbserver


Download the Visual C++ for Linux Development extension or get it from the extension manager in Visual Studio. Today we do have a dependency on the Android Tools for Visual Studio. If you already have VS installed, you can add those by going to Add/Remove Programs, modifying Visual Studio, and selecting them under Visual C++ Mobile Development.

To get started create a new project by going to Templates > Visual C++ > Cross Platform > Linux.


Today we have three templates available: Blink for IoT devices like the Raspberry Pi, Console Application as a bare application, and Empty for you to add sources and configure from a clean slate.

Your First VS Linux Project

Let’s get started by creating a Console app. After creating your project from that template, set a breakpoint on the printf statement, then hit F5 or the Remote GDB Debugger button. By default, the Console Application is set to a debug/x64 configuration. If your remote target is x86 or ARM, you’ll want to change those options first. In this example I’m using an x64 Ubuntu VM.


Since this is our first time targeting a Linux machine, you will be prompted for connection information. This is triggered by building the project.

Connect to Linux - first connection

We support both password and certificate-based authentication, including the use of passphrases with certificates. Upon a successful connection, we save your connection information for subsequent connections. You can manage your saved connections under Tools > Options > Cross Platform > Linux. Yes, passwords/passphrases are encrypted when stored. We plan to support connecting without saving the connection information in a future update.

Upon connecting, your sources will be copied to the remote Linux machine and we will invoke gcc to build them with the options from the project properties. After the build completes successfully, your code will be launched on the remote machine, and you will hit the breakpoint you set earlier.

printf break

Linux Project Properties

Let’s take a look at the project properties to understand where things got deployed on the remote Linux machine.

Remote settings no connections

Under remote settings, you will see the remote root is set to ~/projects/ by default and that we are setting the remote project directory to match our project name in that location. If we take a look on the Linux machine, we’ll find main.cpp as well as our build artifacts in ~/projects/ConsoleApplication1.


Looking at the General settings for the project, you can see how our output and intermediate directories were configured. Additionally, you’ll see that this project was configured as an application – thus our executable is under bin/x64/Debug/ as ConsoleApplication1.out. Notice that for configuration types we also support static and dynamic libraries.

Linux IoT Projects

Now let’s take a look at an IoT device, the Raspberry Pi. You can use any type of Pi running Raspbian. For our blink sample we use wiringPi; if you don’t have this set up, you can install it either via apt or from source. To add a new connection, go to Tools > Options and search for Linux. Now click Add to connect to your Raspberry Pi.


Go to project properties and take a look under Build Events at Remote Post-Build Events.

Remote post build event

You can use this to execute a command on the remote Linux target after build. This template comes preconfigured to export the GPIO pin for the LED so that we don’t have to run our executable as root.
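Concretely, the post-build step amounts to something like the one-liner below, using the `gpio` utility that ships with wiringPi (pin 17 matches the wiring shown later in this post; adjust to your setup).

```shell
# Export GPIO pin 17 as an output so an unprivileged process can
# toggle the LED without running as root.
gpio export 17 out
```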

Now connect an LED to pin 17 on your Raspberry Pi as shown here.


Open main.cpp and set a breakpoint on the delay call after the first digitalWrite, then hit F5. You should see your LED light up, and execution will pause at your breakpoint. Step over the next digitalWrite call and you will see your LED turn off.

Visit our IoT Development page to stay current on all of our offerings in this space.

We’ve covered headless and device Linux applications; what about desktop? Well, we have something special here: we’re going to launch an OpenGL app on a Linux desktop. First make sure your Linux desktop has been configured for OpenGL development. Here are the apt packages we used: libgles1-mesa, libgles1-mesa-dev, freeglut3, freeglut3-dev.

Now create an empty Linux project and grab the source for Spinning Cube from Julien Guertault’s OpenGL tutorial. Extract it and add main.c to your project. To enable IntelliSense, you will need to add the OpenGL headers to the VC++ Directories; you can get them from the OpenGL Registry. Now go to your project properties and add export DISPLAY=:0.0 to the Pre-Launch command.

Linker input

Now, under Linker Input add the library dependencies: m;GL;GLU;glut.

Also, make sure your remote settings are for the right machine.


Now hit F5.


A couple of interesting places to put breakpoints are around line 80 where the cube rotation is set (try changing the alpha value) or in KeyboardFunc where you can inspect the values of the pressed key.

Go Write Some Native Linux Code

We hope you are as excited by the possibilities this opens up as we are.

Install the Visual C++ for Linux Development extension, try it out and let us know what works for you, what doesn’t or if you encounter any issues. If your focus is IoT remember to check out our IoT Development page to stay current on happenings there. You can reach us here through the blog, on the extension page on the gallery, via the VS Feedback channel, or find our team @visualc or me, @robotdad, on Twitter.

– Marc Goodner


Why Microsoft Making Linux Apps Run on Windows Isn’t Crazy


For web developers stuck on Windows, Linux tools could make their lives easier. Also: Bash on Windows. The post Why Microsoft Making Linux Apps Run on Windows Isn’t Crazy appeared first on WIRED.


Microsoft is bringing the Bash shell to Windows 10

Here is an announcement from Microsoft Build you probably didn’t see coming: Microsoft today announced that it is bringing the GNU project’s Bash shell to Windows. Bash (Bourne Again SHell) has long been a standard on OS X and many Linux distributions, while the default shell for developers on Windows is Microsoft’s own PowerShell.

More important than bringing the shell itself to Windows, developers will now be able to write their .sh Bash scripts on Windows as well (or use emacs to edit their code). Microsoft noted that this will work through a new Linux subsystem in Windows 10 that Microsoft worked on with Canonical.
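For illustration (our example, not Microsoft’s), here is the kind of ordinary .sh script that will now run unmodified on Windows — a function, a loop, and ordinary shell quoting:

```shell
#!/bin/bash
# A tiny Bash script: define a function, then call it in a loop.
greet() {
  echo "Hello from bash, $1"
}

for name in Windows Linux; do
  greet "$name"
done
```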


The idea here is clearly to position Windows as a better operating system for developers who want to target other platforms besides Microsoft’s own. Under its new CEO Satya Nadella, the company has quickly embraced the idea that it wants to reach all developers and platforms, not just its own. While seeing Microsoft do anything even remotely associated with a rival operating system like Linux was unthinkable only a few years ago, the company now offers support for Linux on Azure, has open-sourced many of its technologies, and even plans to bring its flagship database product SQL Server to Linux in the near future.

Bash will arrive as part of the Windows 10 Anniversary Update this summer, but it’ll be available to Windows Insiders before that. And looking ahead, Microsoft says it may bring other shells to Windows over time, too.



Ubuntu on Windows

See also Scott Hanselman’s blog here

I’m in San Francisco this week, attending Microsoft’s Build developer conference, as a sponsored guest of Microsoft.

That’s perhaps a bit odd for me, as I hadn’t used Windows in nearly 16 years.  But that changed a few months ago, when I embarked on a super secret (and totally mind-boggling!) project between Microsoft and Canonical, as unveiled today in a demo during Kevin Gallo’s opening keynote of the Build conference….

An Ubuntu user space and bash shell, running natively in a Windows 10 cmd.exe console!

Did you get that?!?  Don’t worry, it took me a few laps around that track, before I fully comprehended it when I first heard such crazy talk a few months ago 🙂

Here, let’s break it down slowly…

  1. Windows 10 users
  2. Can open the Windows Start menu
  3. And type “bash” [enter]
  4. Which opens a cmd.exe console
  5. Running Ubuntu’s /bin/bash
  6. With full access to all of Ubuntu user space
  7. Yes, that means apt, ssh, rsync, find, grep, awk, sed, sort, xargs, md5sum, gpg, curl, wget, apache, mysql, python, perl, ruby, php, gcc, tar, vim, emacs, diff, patch
  8. And most of the tens of thousands of binary packages available in the Ubuntu archives!
“Right, so just Ubuntu running in a virtual machine?”  Nope!  This isn’t a virtual machine at all.  There’s no Linux kernel booting in a VM under a hypervisor.  It’s just the Ubuntu user space.
“Ah, okay, so this is Ubuntu in a container then?”  Nope!  This isn’t a container either.  It’s native Ubuntu binaries running directly in Windows.
“Hum, well it’s like cygwin perhaps?”  Nope!  Cygwin includes open source utilities recompiled from source to run natively in Windows.  Here, we’re talking about bit-for-bit, checksum-for-checksum Ubuntu ELF binaries running directly in Windows.
[long pause]
“So maybe something like a Linux emulator?”  Now you’re getting warmer!  A team of sharp developers at Microsoft has been hard at work adapting some Microsoft research technology to perform real-time translation of Linux syscalls into Windows OS syscalls.  Linux geeks can think of it as sort of the inverse of “wine”: Ubuntu binaries running natively in Windows.  Microsoft calls it their “Windows Subsystem for Linux”.  (No, it’s not open source at this time.)
Oh, and it’s totally shit hot!  The sysbench utility is showing nearly equivalent CPU, memory, and I/O performance.
So as part of the engineering work, I needed to wrap the stock Ubuntu root filesystem into a Windows application package (.appx) file suitable for upload to the Windows Store.  That required me to use Microsoft Visual Studio to clone a sample application, edit a few dozen XML files, create a bunch of icon .png’s of various sizes, and so on.
Not being a Windows developer, I struggled and fought with Visual Studio on this Windows desktop for a few hours, until I was about ready to smash my coffee mug through the damn screen!
Instead, I pressed the Windows key, typed “bash”, and hit enter.  Then I found the sample application directory in /mnt/c/Users/Kirkland/Downloads and copied it using “cp -a”.  I used find | xargs | rename to update a bunch of filenames, and a quick grep | xargs | sed to comprehensively search and replace s/SampleApp/UbuntuOnWindows/.  And Ubuntu’s convert utility quickly resized a bunch of icons.  Then I let Visual Studio do its thing, compiling the package and uploading it to the Windows Store.  Voila!
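A reproducible sketch of that search-and-replace step, staged in a throwaway directory (the file names and contents here are made up for illustration):

```shell
# Stage a fake project tree containing the placeholder name.
workdir=$(mktemp -d)
printf 'class SampleApp {}\n' > "$workdir/app.cs"
printf '<name>SampleApp</name>\n' > "$workdir/manifest.xml"

# The grep | xargs | sed pipeline described above: find every file
# mentioning SampleApp and rewrite it in place (GNU sed's -i).
grep -rl 'SampleApp' "$workdir" | xargs sed -i 's/SampleApp/UbuntuOnWindows/g'

# Both files now carry the new name.
grep -r 'UbuntuOnWindows' "$workdir"
```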

Did you catch that bit about /mnt/c…  That’s pretty cool…  All of your Windows drives, like C:, are mounted read/write directly under /mnt.  And, vice versa, you can see all of your Ubuntu filesystem from Windows Explorer in C:\Users\Kirkland\AppData\Local\Lxss\rootfs

Meanwhile, I also needed to ssh over to some of my other Ubuntu systems to get some work done.  No need for PuTTY!  Just ssh directly from within the Ubuntu shell.

Of course, apt install and apt upgrade work as expected.

Is everything working exactly as expected?  No, not quite.  Not yet, at least.  The vast majority of the LTP passes and works well.  But there are some imperfections still, especially around ttys and the vt100.  My beloved byobu, screen, and tmux don’t quite work yet, but they’re getting close!

And while the current image is Ubuntu 14.04 LTS, we’re expecting to see Ubuntu 16.04 LTS replacing Ubuntu 14.04 in the Windows Store very, very soon.

Finally, I imagine some of you — long time Windows and Ubuntu users alike — are still wondering, perhaps, “Why?!?”  Having dedicated most of the past two decades of my career to free and open source software, this is an almost surreal endorsement by Microsoft of the importance of open source to developers.  Indeed, what a fantastic opportunity to bridge the world of free and open source technology directly into any Windows 10 desktop on the planet.  And what a wonderful vector into learning and using more Ubuntu and Linux in public clouds like Azure.  From Microsoft’s perspective, a variety of surveys and user studies have pointed to demand for bash and Linux tools — very specifically, Ubuntu — to be available in Windows, and without resource-heavy full virtualization.

So if you’re a Windows Insider and have access to the early beta of this technology, we certainly hope you’ll try it out!  Let us know what you think!

If you want to hear more, hopefully you’ll tune into the Channel 9 Panel discussion at 16:30 PDT on March 30, 2016.



Microsoft now lets you turn any Xbox One into a development kit

The Xbox One is about to become a far more interesting (and accessible) platform for indie game developers — and regular users will soon be able to use their console to chat with Microsoft’s Cortana personal assistant, too. Ever since Microsoft launched its Xbox One console, the company promised it would allow any developer to develop apps for it — but until now, you…

