GitBook: A modern publishing toolchain

• Version control: GitBook is based on Git. A simple "git push" is enough to publish a new version (see the sketch below).
• Markdown: books are written using Markdown or AsciiDoc syntax. TeX support is planned.
• Simple to update: publish and update your books easily using Git or the web book editor.
• Responsive: books can be read on all devices (laptops, tablets, phones, Kindles, etc.).
• E-book readers: books are readable on the Amazon Kindle, Nook and other readers (PDF, ePub, Mobi).
• GitHub: write your book on GitHub and publish it in seconds through GitBook.
• Choose your price, or accept donations: anything from $0 (free) to $100, and let everybody buy your book from all the main marketplaces.
• Communicate with and focus on your readers: keep them updated and engaged with the progress of your work.
• You keep the rights to your book, not us, so you can do a deal with a publisher at any time.
• Transfer your revenue directly to your bank account (only US bank accounts are supported right now) or PayPal.
• Create discounts, define promotions, design viral offers and make your customers happy.
• Personalize your landing page with custom branding and custom domain names.
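The Git-based flow is the heart of the workflow described above. Here is a minimal sketch of publishing a first version with a plain git push; the README.md plus SUMMARY.md layout is GitBook's book structure, while the remote URL and book name are only illustrative:

$ mkdir my-book && cd my-book
$ git init
$ echo "# Introduction" > README.md                                  # the book's opening page
$ printf '# Summary\n\n* [Introduction](README.md)\n' > SUMMARY.md   # the table of contents
$ git add README.md SUMMARY.md
$ git commit -m "First version"
$ git push https://git.gitbook.com/username/my-book.git master       # pushing publishes the new version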


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/xuJgk7hpI6M/

Original article

For the First Time in a Millennium, a Top Predator Returns


Top predators have been removed from many a food web, throwing entire ecosystems into disarray. Conservationists want to put them back, but it’s a complicated problem blending ecology, geography, and social tact.

The post For the First Time in a Millennium, a Top Predator Returns appeared first on WIRED.




Original URL: http://feeds.wired.com/c/35185/f/661370/s/446f5d33/sc/4/l/0L0Swired0N0C20A150C0A30Cfirst0Etime0Emillennium0Etop0Epredator0Ereturns0C/story01.htm

Original article

Gigaom: The Life and Death of a Venture-Funded Media Startup

Gigaom during its heady days


As most know, we lost Gigaom last week. It was a sudden passing of a widely beloved tech media company, and it’s been touching to watch the Twitter tributes from fellow media and tech readers far and wide.

But now that the memorials have largely been done and the soul(s) of the company is rising up and on to better places, it’s time to examine the body and determine the cause — or causes — of death.

As a former employee, I have a perspective that’s probably a bit more informed and nuanced than many. It’s not authoritative in any sense, but it is a perspective informed by four years at the company as a VP and the last two as an external research analyst partner.

Like most, I was shocked when the news came out. After all, Gigaom had just topped up the tank a year ago with an $8 million funding round. Om, in a post announcing his transition to full-time VC, talked about how the company was doing well. Paul Walborsky, my former boss, talked on record about how the company was growing. All of this seemed to make sense and largely fit the picture that I was getting as a former employee semi-tied into the company through friends and ex-coworkers and through working with them as an external research partner.

The news also made me sad for what we had lost. Gigaom had gone big, had tried to build something new and different, all the while adhering to an editorial ethos that remained a part of its DNA until the very end. Despite its collapse, Gigaom’s story was a worthy one, one worth telling, and because of this I figured I would do just that, hopefully providing the necessary context to help myself and others figure out what happened.

As I reflected further, I found I should not have been completely surprised by what happened. It’s not unlike when someone suddenly passes and signs of ill health that were previously ignored come into sharper focus. And while I don’t have the body in front of me to examine — no one save the company’s officers, VCs and now Silicon Valley Bank, its main creditor, has the benefit of the company’s financials — I think I can connect some dots from my own knowledge of the business and some of the evidence put into the public domain by former employees.

In the press, some had pointed at research as one of the main causes of Gigaom’s demise. I do think research and its cost model played a significant role, but I don’t think it was the only cause. Gigaom died a premature death for many reasons, and if there’s any one overriding cause, I’d point to the massive amount of VC funding and the resulting cost model put in place to scale all of its business units up to meet the high growth expectations that come with venture investment.

But I’m getting ahead of myself. Let’s start with the beginning.

Gigaom And The Tech Media Landscape in 2007–2009

Back in 2007, the tech blogosphere had started to come out of its early wild west days, but it still wasn’t completely corporatized. Sure, AOL had started to roll up companies like Weblogs (owner of Engadget), but they hadn’t yet bought Techcrunch, while independents like VentureBeat, ReadWriteWeb and Gigaom were cranking out good work.

Gigaom in the early days, pre-VC money

Gigaom was punching above its weight, earning lots of respect for an editorial ethos instilled by Om. That ethos — which was woven into the company’s fiber and never went away (part of the reason for the sadness expressed by so many in the tech community over the past week) — meant the company was becoming a highly influential read among all the serious tech players in Silicon Valley.

This was in part because the writers were really good — Om, Katie, Liz, Stacey — but also because it contrasted so sharply with the general direction of tech media. As a nation, we had just begun to embrace our inner Buzzfeed (or one could argue, at that time, our inner HuffPo), as many blogs began to use provocative headlines and chase page views in ways that Gigaom then, and until the end, eschewed.

This avoidance of gimmicks and the seriousness of the journalists who worked for Om gave Gigaom serious street cred in Silicon Valley. At the same time, the company wasn’t widely known outside of the tech insider community, and never did cross over into a mass market tech media brand in the way Techcrunch or Engadget had. This was fine, however. Being an insider brand works, particularly if you can have influence and sway with the companies and folks who are masters of the tech universe. And Gigaom did.

Gigaom decided early to try to build a business that leveraged this premium tech insider brand, and to do so in ways that went beyond simple ad dollars. The first step in this direction was events. And while many in media would say events are a fairly common first step for any fledgling media startup, most would attest that Gigaom’s events were special exactly because Gigaom was such a tech insider brand.

At Gigaom events you rubbed elbows with the who’s who of the tech world, from Reed Hastings of Netflix to Werner Vogels of Amazon to Jack Dorsey of Twitter/Square.

But it went beyond tech star power. Thanks to Surj Patel (Gigaom’s former Events VP) and the editorial team, these events were extremely well done. And unlike many in the events business, Gigaom avoided pay-to-play arrangements, meaning sponsors couldn’t buy a speaking spot. I remember how angry some would get about this policy, but the editorial/business firewall was sacrosanct and many sponsorships were turned down because of it.

All this meant Gigaom’s events were profitable out of the gate. Over time, the company would expand and soon had the industry’s signature cloud computing event in Structure, and later the premier big data event in Structure Big Data.

The Gigaom team at Mobilize 2008

But even as events were successful and helped the company diversify from ad revenue, they were not inherently scalable. A startup could only do so many events and keep a blog running. Events are hard work. So Paul Walborsky, who was brought on as CEO in 2007, started to look towards new business lines that were scalable.

Like paid content.

Project Condor

Paul was brought on by Om and the board to steer the broader business ship and its strategy while Om would be the company’s editorial leader across all of our businesses. Paul is a former Wall Street guy who was adept at seeing new opportunities in information services and media more broadly, so empowered by Om and the board, he started to look around for new business opportunities.

Paul knew ad CPMs were heading down over time and that, if Gigaom were to survive, it would have to do so by getting paid for its content. While some blogs, like Ars Technica, had been successful in creating subscription models featuring ad-free content and long-form extras, Om and Paul felt they should keep all of the blog content and the community conversation (which, at the time, before so much of the conversation around blog media moved “off-site” to Twitter, was mainly the comments section) in front of any paywall.

Paul believed that there was an opportunity for Gigaom to convert readers interested in its various technology categories, like cleantech, mobile and cloud, to deeper reads in the form of research reports. An early business plan was put together, called Project Condor, and a special projects editor, Celeste LeCompte, was assigned to work on what was now officially a skunkworks project.

This was in late 2008 and early 2009, and it was soon after this I was brought on board. I had early conversations with Om going back to 2007, prior to his heart attack, about Gigaom possibly heading into research, but this incarnation envisioned by Paul was much more evolved and, in my mind, disruptive.

Still in beta: early Gigaom research (then Gigaom Pro) site

At this time, the early business plan was based on a premise that research could be somewhat democratized if Gigaom could make it available to individual subscribers at a low price point. Traditional technology market research from the likes of Gartner and Forrester had been expensive and largely unobtainable for anyone without the backing of a corporate budget. A typical research report from one of these companies cost two or three thousand dollars, and a larger research service subscription could easily cost anywhere from twenty five to fifty thousand dollars.

But how would this work? It’s not like Gigaom could create research out of nothing. Someone had to write reports, and Gigaom, while it had just taken a $4.5 million Series C funding round in late 2008, wasn’t about to start hiring expensive industry analysts with six-figure salaries.

Instead, we saw an opportunity to leverage a growing trend in market research: the increasing number of analysts thriving outside the confines of traditional research companies. The arrival of push-button publishing on the web had given analysts and consultants more ways to reach customers than in the past, and many star analysts who started at ‘big research’ had begun to strike out on their own. Others, many of whom had never worked for traditional research firms but had deep domain expertise, saw an opportunity to provide advisory services to companies.

We thought, what if we could provide a platform for some of these independents to reach a wider audience through a research service from Gigaom? We believed that if we could pay these independents to write reports on a freelance basis, they would get the exposure of being part of a ‘virtual analyst network’ at Gigaom and we would get to tap into their expertise without having to pay the high salaries that often come with such knowledge and backgrounds. Win-win.

And so the plan was set. Over a four-month period — from the time I came on in early February 2009 to late May 2009 — we were in stealth mode, recruiting analysts, setting up research projects, and building a website. The core editorial team was just Celeste LeCompte and I, while the project and tech team involved a super capable Jaime Chen as the product lead on the site development side, with assists from WordPress super-ninja Mark Jaquith and Gigaom’s original web guru, Chancey.

Analysts I approached were receptive. They liked the idea of aligning with Gigaom and being part of a fairly new approach to market research, and, of course, they also liked the fact that we would pay them to write the reports.

We also thought at the time we could leverage Gigaom’s own stable of writers by having them contribute pieces on a regular basis. These pieces were called “Long Views”, which would allow the writers on the blog to stretch their legs a bit with bigger word counts and deeper analysis than was typical for blogs at the time. This was important because we knew that individual brands mattered, and we thought readers of Katie Fehrenbacher and Liz Gannes, to name a couple, might be convinced to pay for deeper analysis.

We launched Gigaom Research — then called Gigaom Pro — on May 28, 2009. Om wrote a big post on it, and the story was picked up pretty widely in places like the New York Times. In retrospect it’s amazing how much work we got done in that short amount of time, as we launched with tens of reports and over 20 analysts in the network.

Gigaom Pro mention in New York Times

A part of the launch that got a lot of focus was the price point. We made a decision to launch at what seemed to many a ridiculously low price of $79 per year. Because we were doing something we believed had largely never been done before, we were trying to gain significant conversion of our readership — probably around 2–3 million monthly uniques at the time — to the research product. We thought that if we made the price so low, it would enable the ‘true fans’ of Gigaom to subscribe and show support while getting immense value in the form of research.

Some would argue that by going so low, we were “anchoring” the price at a low level, which would make it difficult to raise over time. Others felt there was a possible perceived-value problem in going so low: putting a price of less than a hundred bucks on an annual subscription would make research buyers think the quality was low, and that the offering wasn’t comparable to what you would find at other research houses.

These are both valid arguments, but they were risks we were willing to take. We knew we wanted to be disruptive, and we knew we could only do that by trying to create a product that had never really been created before.

Scaling With Venture Capital

With the context on the blog, events and research side, it’s worth taking a quick look at the company and its financing as a whole. At the beginning of 2009, the company had $4.5 million in the bank and what would be, by the middle of the year, three lines of business.

The large majority of the employees were still on the editorial side. We had a network of blogs, each manned by an editor (sometimes two), with a small editorial support staff of a couple of people. On the research side, there were two of us, plus a handful of freelance analysts writing reports. On the events side, it was a few folks and help from an external events team.

When you take on financing as a media startup, you’re expected to grow quickly. The investors are hoping for the usual 10x return, which in startup media is extremely hard to do. Some of the early blog exits were not at astronomical prices: $20–$35 million for Weblogs (about 10x revenues) in 2005, and Ars Technica sold for about $25 million to Conde Nast in 2008. Eventually Techcrunch would sell for between $25 and $40 million to AOL.

Techcrunch’s & HuffPo’s exits to AOL were setting valuations for blog networks

When you look at these valuations, it starts to show the difficulty of getting a 10x return on a blog of Gigaom’s audience size. Gigaom had a smaller readership than these sites — in 2009 it was probably 2–3 million monthly uniques — and getting 10x off of nearly $6 million (the company had taken earlier funding rounds of $325 thousand and $1 million) means you’re looking at a $60 million payout. Add in another $2.5 million in late 2010, and now you’re looking at an $80 million plus exit.

That’s a tall order. It also tells you why Paul and the board started looking for new business models (research) to get to that type of payday. Outside of Huffington Post (which sold for over $300 million to AOL), that type of acquisition price would be near the top of blog exit valuations to that point.

Where’d The Money Go?

And now — without the benefit of detailed financials — let’s look at where the money was eventually spent. In short, everywhere.

The editorial team on the blog side grew. According to Mathew Ingram, it was a 22-person editorial staff by the time Gigaom shut its doors. On the research side, the business eventually went beyond two people, and we hired salespeople and some research directors. Sales and marketing grew. On the technology and website side, the team grew. In a recent tweet, Casey Bisson, one of the company’s former lead developers, shouted out five others on site engineering and QA. Throw in a couple of product management types and soon you’re looking at 8–10 people, a development team larger than that of some small venture-funded software startups.

The company also had significant costs locked up in real estate — when it acquired PaidContent in 2011, it assumed the rent on PaidContent’s Manhattan office space. It also had office space in high-rent San Francisco.

In other words, Gigaom had a lot of overhead across all the business units. Events had the least — in part because the company always made significant use of external agencies to help pull them off — but overall the company had grown its staff, had rising fixed costs in the form of real estate, and continued to service interest payments on its growing debt load over these last few years.

One of the areas of staff growth was salespeople for Gigaom Research. In 2010, as individual subscriptions were not hitting our targets, we decided to go after enterprise money. Originally it was a fairly modest effort, with me doing some of the original deals, and eventually we brought on a couple of salespeople.

But in the last few years that grew significantly. I left the company as a full-time employee in late 2012, but after that the sales staff continued to grow. A quick look at LinkedIn, searching for “Gigaom sales”, suggests there were anywhere from 7–10 research salespeople. There were also sales folks for events and ads. That’s a lot of salespeople.

Over time, increasing the sales and marketing mix towards research did make sense. In an interview, Paul said that research made up 60% of the company’s revenue. In the same article, revenue was estimated to be $15 million, so that translates to about a $9 million research business.

Certainly, it’s worth noting that this $9 million is a hard-won $9 million. There are lots of salespeople you have to pay, and over time the cost of paying freelance analysts went up as you put out more research. And no doubt, this type of business is a pretty radical departure from the original vision of Gigaom Research, which was centered around a model of a high volume of individual subscriptions to consumers.

Should Gigaom have continued to invest in research? I’d suggest that, with the large amount of venture capital it had taken and the expectation of an eventual payout for investors, Paul and the board decided it had no choice.

But in reality it did have a choice. We all know they could have decided to build a more modest business, one centered around the blog and events and possibly trying to increase the individual research subscription model. However, I think because the board wanted to see growth, rapid growth, to justify its investments, the decision was made to continue to grow all aspects of the business, including the now corporate-sales centric research model.

The End

Looking at all the information before us, some would say that the movement towards a corporate-sales research business was the cause of Gigaom’s eventual demise. I’d say it certainly contributed, but I’d also say let’s not confuse cause and effect.

What I mean by this is that all the over-investment and high operating expense built into all aspects of the business — not just research — was an effect of the company taking lots of venture capital and venture debt, and of the expectations that go along with that. Gigaom had an editorial staff of 22 for a blog that had 6.5 million monthly visitors. It had 8–10 product and website people. It had research directors, salespeople and others. They experimented with new events like Structure Europe in 2013 (an event that only happened once — a good sign it lost money). Add in freelance analysts, temporary event staff, and other costs and you have an expensive, high-opex business.

All in the name of growing and scaling to hit revenue targets that probably were not reachable. Revenue targets that were, no doubt, chased in the name of hitting a certain multiple to recoup the investment made by the venture capitalists who wanted a 10x return.

Gigaom shut down abruptly on March 9, 2015

The most confusing thing for me initially, as someone who left a few years ago, was the quickness of the demise so soon after an $8 million funding round. That’s a lot of money to burn through in just one year, particularly after the company had survived for over 7 years to that point on a total of $12 million.

But as I’ve thought about it and talked to others over the last few days, things seem a bit clearer. I’ve been told by some that a balloon payment on debt owed to Silicon Valley Bank came due. That may be true, and if it is, it doesn’t change much — it’s just another sign that things caught up with them after years of running at a loss. They were never able to turn things towards a business that ran in the black, or at least keep operating losses low enough that the burn rate and borrowed debt were manageable.

Wrapping It All Up

Gigaom is a company that started going down the venture capital and (presumably) venture debt path early on. By late 2008, before I came on board, they had taken on nearly $6 million in venture funding. They were running at a loss the entire time, as they continued to take new funding up to nearly the very end.

I think the fact that they were heavily capitalized early on forced the company to look towards new monetization models outside of ads — which were experiencing, and continued to experience, declining value per reader — and research was the big bet they made. The company went big by trying to create a disruptive research model, based on a belief that a) Gigaom’s core readership could be converted at a decent rate into a decent subscription base and b) individual subscribers would be open to buying research.

The entire company — across all the divisions (blog, research, events) — was expected to grow significantly over time to the point where revenue would outgrow operating expenses, the burn rate would lower, and the company could eventually turn a profit.

Over the years, operating expenses grew across the blog and research — and across the related support staff in sales, marketing and technology/development — the entire time. We do know this growing cost base, which resulted in higher operating expenses, was supporting higher revenues. We also know the majority of this revenue growth was due to research — this can be deduced from the fact that research’s share of revenue shifted from 0% in 2008 and single digits in 2009 to 60% by 2014.

We can assume the losses were mounting the entire time, and we could speculate that the late-stage round they took in early 2014 was, in large part, going towards debt service. And if debt was indeed a factor, the loan may have been structured such that a balloon payment was just too big to overcome even with the significant final funding round.

Most of us will never know the actual specifics of the capital on hand, where it all went, and why it got to the point of no return so quickly at the end. All I can say is the company was playing a dangerous game the entire time, using a combination of venture capital and debt that forced decisions across the business to keep investing heavily in pursuit of growth, hoping to reach a level of scale where revenue growth would eventually outrun expenses, the debt would be manageable, and there would be an exit.

But neither a manageable burn rate nor an exit ever happened. There’s a good chance these two things are related. I do know — it’s a pretty open secret in the tech media landscape — that Gigaom had talked to various suitors and some deals were almost consummated. I personally believe that the high amount of money invested in the company, combined with the expectations of a venture board, meant there was an expectation of a high return in the form of a high purchase price. It’s clear now, since no one ultimately bought the company, that the gap between the expected return and the value potential acquirers put on the company remained too wide.

And so the investors, the stock-option-holding employees (including myself) and others with an interest in a Gigaom exit got nothing. Money paid to exercise now-worthless stock options is gone, and some of us are just hoping to get some of the money owed to us for work we have done.

Perhaps the biggest mystery — and sadness — in all of this is the decision to simply shut the company down. I know they owe money to a creditor who wants it, but why then did they keep operating business-as-usual for pretty much the entire last year? Why couldn’t the company have gone to a much-reduced staff six months ago, to manage the capital burn and keep the lights on? Why won’t they explore options such as operating under bankruptcy or something else that would let Gigaom continue?

We don’t have these answers, and we may never get them. All of this is a bit sad, and the reasoning is hard to understand. Having watched the outpouring of sadness over the loss of Gigaom over the past week, clearly many saw value in the company and felt that its reputation and credibility for creating good, thoughtful content in a Buzzfeedified media world were something worth keeping.

Apparently not.

I think a lot of the reasons Gigaom was so respected — its dedication to editorial integrity, the deep analysis across the blogs and the research, the intimate high-touch events — are also part of the reason it ultimately couldn’t reach the audience and the scale demanded of it by heavy venture capital. By not chasing page views, by not heading down the path of cheap headlines (Buzzfeed, et al.) and relentless self-promotion of its products within its highest page-view editorial (Business Insider, et al.), the company essentially put a speed governor on its growth. But in the end, it was this conscious decision to chase quality, to be deliberate, to protect the brand and to speak to a core set of readers who wanted good and thoughtful content that made Gigaom so treasured.

It’s too bad the company wasn’t as thoughtful and deliberate about how it managed its money.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/LfjToNOHTqA/gigaom-the-life-and-death-of-a-venture-funded-media-startup-eb3fbdc4e732

Original article

Law Librarians and the Technology-Ready Law Student

Christine M. Stouffer, Director of Library Services at Thompson Hine LLP in Cleveland, has a nice article in the February issue of the AALL Spectrum. It’s called, “Closing the Gap: Teaching ‘Practice-Ready’ Legal Skills,” and talks about the “widening gap between legal education and real-world legal practice skills” and the role that law librarians can play in narrowing that perceived gap.

Stouffer touches on the January 2014 report from the American Bar Association Task Force on the Future of Legal Education. She provides a good review of this report and I would recommend reading this article for that alone. But, and I bet you won’t be surprised by this, I really wanted to focus in on a specific section titled, “Technology: Skills-Based Course Offerings.”

She starts off by noting that,

“… some proactive academic librarians are already involved in improving law student technology skills for entry into the practice of law. This is a trend that has momentum, especially in law schools where a ‘skills’ course is a requirement for graduation, such as UC Irvine and Valparaiso University School of Law. If a law school curriculum already has such a requirement, this is a ripe opportunity for academic law librarians to seize. It may still require some pitching to the curriculum committee, but several law school librarians have had success by stressing the abundance of evidence in surveys, workplaces, and now the ABA in supporting the need to prepare law graduates on a practical and technological level. These effective academics have demonstrated to their curriculum committees that the law librarians are well-suited to teach these skills to law students.”

She continues by identifying some of the many and “rapidly changing technology products and services that now dominate the practice of law” including things like e-filing and e-discovery requirements, “practise solutions” such as case-management tools aimed at enabling collaboration between in-office lawyers and their colleagues working around the globe, software to prepare contracts, patent litigation and other commercial “template products.” “The proliferation of these technologies is staggering in actual legal practice,” Stouffer reports, “which compels the need for exposure during law school.”

She concludes the section by suggesting that many “substantive and skills-based topics lend themselves to carefully structured MOOCs [Massive Open Online Courses].” This is a great idea and would also foster collaborative opportunities for law librarians as well as creating learning resources that can benefit law students everywhere.

Almost exactly a year ago Jeff Schmitt wrote an excellent piece on the many “top schools (and the best-and-brightest faculty)” that have begun to develop law school courses for MOOCs. “The MOOC Revolution: Law Schools” includes detailed descriptions of a number of law related MOOCs along with a nice list of additional MOOCs that are available. He gives you a good idea of what’s out there and how MOOCs work.

Schmitt expresses a few concerns though and does a nice job of comparing what’s currently available via a MOOC and what you’d get in the law school experience. Bottom line? He doesn’t think MOOCs are positioned to “replace law school,” at least not at this point in time. But MOOCs do have some advantages like offering supplementary material that can help students get some exposure to certain legal topics or help fill in any knowledge gaps they might have. MOOCs can also provide opportunities for law schools to promote their “educational brand.” And as he notes in his introduction, one of the really nice things about MOOCs is most of them are free!

Returning to Stouffer’s article I’ll end with her excellent recommendations:

  1. Law firm librarians should collaborate with local (or virtual) academic librarians to form cooperative alliances;
  2. Academic librarians should work with faculty and administrators in their own institutions to integrate real-life research, resources and projects during the entire academic year;
  3. Law school administrators should be encouraged to realize the importance of law library professionals in furthering the goal of infusing practice-ready skills throughout the law school experience.

It’s a great article, but is Stouffer right? Is there a “skills gap” impacting law student success when they enter legal practice? If so, can law schools narrow this gap by tapping the skills of their law librarians?


Original URL: http://www.slaw.ca/2015/03/16/law-librarians-and-the-technology-ready-law-student/

Original article

Incuriosity Will Kill Your Infrastructure

A long while back, the folks working at Boundary at the time coined a phrase that I’ve loved ever since: incuriosity will kill your infrastructure.

The idea it covers has helped me and many other folks since then. It’s a dense phrase, so here’s the idea:

  • running a modern software system can be hard work
  • if you see things that don’t make sense to you, you have to investigate them later, because they’re a sign of something that will mess you up
  • and the obvious counterpart: being actively curious about “fishy” things will lead to a more stable and happy infrastructure.

It’s about getting ahead of the game, not about getting paged at 3am and putting fires out with the fire extinguisher every day of the week.

Paying attention to small niggling things that you don’t quite understand pays off with avoided pages.

An example would be handy right about now

I have a good story involving this that I hit recently:

Some Background

First, a little bit of background about Riak (the data store Yeller uses to store exception data).

Riak’s solution for resolving concurrent writes isn’t locking, or transactions, or CAS, or Last Write Wins like many more traditional databases. Instead, it uses Vector Clocks to detect concurrent writes. Vector Clocks let you know:

“hey, you did a modification that didn’t historically relate to these other operations”

So, Riak, upon detecting concurrent writes, stores both copies of the writes, and then a read after that will return both values. Further writes can say “I descend from these two parent values”, just like a git merge commit. At that point, all concurrent copies that are resolved are cleaned up.

As such, you have to keep track of vector clocks inside your codebase. The typical way to do this is to always read-modify-write every piece of data, typically by supplying a function to the Riak client library. If you don’t do this, you get “sibling explosion”, in which you store many many copies of a particular value, and then destroy the network, riak’s memory use and your client application’s latency by reading in all those thousands of values during each get request.
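To make that concrete, here’s a minimal sketch of a read-modify-write cycle against Riak’s HTTP interface with curl. The bucket and key names are made up, and it assumes a default Riak node listening on localhost:8098; the point is simply that the vector clock returned by the read is handed back with the write, so Riak knows the new value descends from the one you just read.

# read the current value and capture its vector clock from the response headers
VCLOCK=$(curl -si http://localhost:8098/buckets/events/keys/error-123 | grep -i '^x-riak-vclock:' | awk '{print $2}' | tr -d '\r')

# write the modified value back *with* that vector clock; omitting the
# X-Riak-Vclock header on a bucket that allows siblings is exactly what
# piles up thousands of copies of the same key under concurrent writers
curl -s -X PUT http://localhost:8098/buckets/events/keys/error-123 \
  -H 'Content-Type: application/json' \
  -H "X-Riak-Vclock: $VCLOCK" \
  -d '{"seen_count": 42}'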

Guess where this one is going.

Some Meat

A few weeks ago, I shipped a patch to Yeller that caused a sibling explosion:

You can see the p99 on the number of siblings on the client side explode dramatically.

Incuriosity Killed The Infrastructure

The real meat of this saying is that you have to pay attention to weirdness in your infrastructure. Missing the warning signs that things are wrong, or putting investigation off because “I have more pressing things to do”, hurts down the road.

With this sibling explosion, I managed to avoid seeing it in the following cases:

The main dashboard

On the main dashboard for the data ingest of Yeller, I track the “modification” time for each of the main buckets – the p99 time taken for a full read-modify-write cycle. This was chugging along at a happy 0 – no values were hitting it at all.

I looked at that and went “oh, that’s weird, must be a graphite bug”, and continued on.

Distributed JVM Profiling

Courtesy of Riemann, Yeller has a distributed JVM profiler. Whilst investigating another performance issue, I saw that the profiled trace contained a call to my Riak library’s get call, which it shouldn’t contain (because everything should go through the modify call).

Again I said to myself “huh, that’s a little weird, but I guess it might be somewhere unimportant or something”.

Then I got paged

At 3am

Latency on one of Yeller’s key web pages had spiked like mad. Weird.

I dug in, using Yeller’s inbuilt webapp profiler. It was saying that the read time on this bucket was up in the hundreds of milliseconds. That was extremely odd. So I turned to the graphite dashboard for that riak bucket and saw this shitshow as the number of conflicting writes:

I fixed the bug (it was a small coding error), and kicked myself. I should have spotted it way earlier, well before getting paged.

Incuriosity had hurt my infrastructure pretty damn badly.

After a deploy and letting the system run for a while, things returned to normal:

Luckily I still spotted this early enough that things weren’t irrevocably broken (the only customer impact from the whole issue was a slow page load or two, which isn’t the best thing in the world, but it’s not awful either).

Takeaways for your operations

The basic principle that’s super important here is: investigate the things in your systems that don’t make sense to you, before they force you to.

Sometimes going through this only teaches you that your understanding of how your system works is flawed, but that’s still incredibly valuable – you’re learning this up front, rather than at 3am whilst trying to debug something else.

Paying attention to “fishy” things lets you get ahead of the game with your infrastructure – instead of reacting to fires all the time, you can detect symptoms before they affect customers.



Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/AqHN3khtwvo/2015-03-16-incuriosity-killed-the-infrastructure.html

Original article

Gogs, an alternative to Gitlab

tl;dr Gitlab is a great git hosting service, almost as powerful as Github. But is there something out there that’s comparable to Gitlab/Github, yet simpler to manage? I think Gogs does the job.

Introduction

These days, Github has become the preferred platform to host code. With its many great features, ease of use and access, almost all software developers are happily using it.

Also, since the Google Code hosting project is closing down, you can expect more projects to be driven to it.

But what if you’re writing an Android app, or maybe you’re building the next great iOS game, or in general you’re writing some code that you don’t want exposed to the general public?

You could certainly purchase access to private Github repositories, but you’d most certainly rather invest your capital in more pressing matters.

This is where software such as Gitlab, and Gogs, comes in very handy.

They provide a service very similar to Github, with the difference that you can host them on your own servers, even on your own workstation.

Read on for some more insight.

Gitlab

Gitlab is a powerful git service, with features that rival Github itself. It’s a mature project and it’s being continuously updated.


They recently acquired Gitorious (another Github-like service), so you can only assume that the feature set will expand (check the press clip about the acquisition).

Installation has undoubtedly improved since the ‘manual’ days, when it was time-consuming and very error-prone.

Now there’s a Linux deb/rpm package available (called the Omnibus), which handles all dependencies and simplifies the process.

Upgrading is a bit more convoluted, especially if you’re coming from a version that’s prior to the last, but all in all, it’s not that complicated.

Nevertheless, you can feel a lot of stuff is going on behind the scenes. You will be running Sidekiq, Unicorn, Nginx, Ruby (plus all its gems) and then Gitlab itself.

Customizing the install is not that simple and if something should go wrong, there are many moving parts, where you would have to go looking for problems.

Enter the One binary

On the other hand, we have Gogs. A single binary is all you need to run it.

It’s built with Go, so you automatically get cross-platform compatibility.

It runs on Windows, Mac, Linux, ARM, etc.


Installation simply requires unzipping the release archive into a chosen folder. That’s it. Upgrading works the same: just unzip the release archive.

That’s the beauty of Go’s binary deployments: you can target multiple platforms at once.

Gogs has a really low footprint, so it’s easy on system resources (it can run on a Raspberry Pi).

You could run it as is, with the default configuration, or do some minimal tweaking.

The default configuration file is located at conf/app.ini, but the documentation suggests writing your changes to custom/conf/app.ini, so that your customizations are preserved when you upgrade (since conf/app.ini is overwritten).

There are three sensible changes you could consider: the repository root, the database location, and a public key to enable commits over ssh.

[repository]
ROOT = !! this is the location where you want to keep the repositories !!

[database]
PATH = !! this is the location of your database (sqlite3 by default) !!

Note that currently, you need to run an ssh server (openssh will do fine), the same as Gitlab.

Comparison

Let’s compare both products to see how they match up in terms of feature set. I’ll throw in Github, as a reference.

Feature                                        Gogs             Gitlab            Github
Dashboard & File Browser                       yes              yes               yes
Issue Tracking, Milestones & Commit keywords   yes              yes               yes
Organizations support                          yes              yes               yes
Wiki                                           no               yes               yes
Code Review                                    no               yes               yes
Code Snippets                                  no               yes               yes
Web Hooks                                      yes              yes               yes
Git Hooks                                      yes              Enterprise only   Enterprise only
LDAP Group Sync                                no               Enterprise only   Enterprise only
Branded Login Page                             no               Enterprise only   Enterprise only

Language                                       Go               Ruby              Ruby
Platform                                       Cross-platform   Linux             Virtual Machine
License                                        MIT              MIT               Proprietary
Resource Usage                                 Low              Medium/High       Medium/High

Code Review (and pull requests) is arguably the most important missing feature. It’s at the top of the list among Gogs’s GitHub issues, and Gogs’s main developer (Unknwon) is working on it.

But all said, you have a very functional private Git host service.

Running a Gogs docker

I previously described how I ‘dockerized’ my home server environment, so it’s only fitting that I would run gogs as a Docker container.

So let’s do it step by step.

I have an apps folder in my server home directory (/home/kayak/apps) and create a subfolder for each app I deploy as a Docker container.

Download and unzip the latest version (use the archive that corresponds to your platform)

$ cd /home/kayak/apps
$ wget http://gogs.dn.qbox.me/gogs_v0.5.13_linux_amd64.zip
$ unzip gogs_v0.5.13_linux_amd64.zip
$ rm gogs_v0.5.13_linux_amd64.zip

Optional | Customize the configuration

$ cd gogs
$ mkdir -p custom/conf
$ cd custom/conf
$ nano app.ini
[repository]
ROOT = !! this is the location where you want to keep the repositories !!

[database]
PATH = !! this is the location of your database (sqlite3 by default) !!

NOTE: At this point, you could simply run gogs web and you’d have it running normally, just not as a Docker container.

Let’s create our Dockerfile

$ cd /home/kayak/apps/gogs
$ nano Dockerfile
FROM ubuntu:14.04

ENV DEBIAN_FRONTEND noninteractive

RUN sed 's/main$/main universe multiverse/' -i /etc/apt/sources.list && \
	apt-get update && apt-mark hold initscripts && \
	apt-get install -y sudo openssh-server git && \
	apt-get clean

EXPOSE 22 3000

RUN addgroup --gid 501 kayak && adduser --uid 501 --gid 501 --disabled-password --gecos 'kayak' kayak && adduser kayak sudo

WORKDIR /home/kayak
ENV HOME /home/kayak

ENTRYPOINT ["/home/kayak/boot"]

The Dockerfile is based on the latest Ubuntu server LTS version (14.04).

We then install sudo, openssh and git, and expose ports 22 (for ssh) and 3000 (for the gogs web interface).

Additionally, I generally create a user (kayak in this case) with the same uid/gid as my user in my Mac box, to prevent issues with access permissions.
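If you want to mirror your own account, a quick way to find the uid/gid to use in the addgroup/adduser line is to run id on the host (501 is shown here just as an example; it happens to be the default first-user id on a Mac):

$ id -u
501
$ id -g
501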

Finally, the boot shell script is called to get things running.

$ touch boot
$ chmod +x boot
$ nano boot
#!/bin/bash

# make sure the kayak user's ssh directory and authorized_keys file exist with the right permissions
sudo -u kayak -H mkdir -p /home/kayak/.ssh
sudo -u kayak -H touch /home/kayak/.ssh/authorized_keys
chmod 700 /home/kayak/.ssh && chmod 600 /home/kayak/.ssh/authorized_keys

# start openssh server
mkdir /var/run/sshd
/usr/sbin/sshd -D &

exec sudo -u kayak /home/kayak/gogs web

What this does is run the ssh daemon and then run gogs, as kayak user (rather than root, which is the default).

Let’s build the image

$ cd /home/kayak/apps/gogs
$ docker build --rm -t apertoire/gogs .

Once the image is built, we can run it with

$ docker run -d --name gogs \
-v /etc/localtime:/etc/localtime:ro \
-v /home/kayak/apps/gogs:/home/kayak \
-p 62723:22 \
-p 3000:3000 \
apertoire/gogs

You can check on the command line (with docker ps) that it’s running.


Now you can open the web interface, and it will show an install page (on the first run).


Once you have completed the install, you’ll have a functional Gogs service.
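From there you can use it like any other git remote. For example, assuming the server is reachable as myserver and you created a repository called test through the web UI (both names are placeholders), cloning and pushing over the mapped ssh port (62723 on the host, forwarded to the container’s port 22) looks like this:

$ git clone ssh://git@myserver:62723/kayak/test.git

Or, for an existing project:

$ git remote add origin ssh://git@myserver:62723/kayak/test.git
$ git push -u origin master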


Conclusion

Gogs is a lightweight, easy-to-set-up, cross-platform git hosting service, with features that compare favorably to Gitlab/Github.

It’s not as mature as the other two, but it’s still incredibly capable.

It’s also open source, so you can contribute to improve it.

I replaced my Gitlab installation with Gogs a couple of months ago and haven’t looked back.

I’m hosting 42 repositories and have found performance to be extremely good.

I definitely recommend Gogs as your git self-hosting service.

Final Notes

Hope you found the article interesting.

Please leave your comments here or send a tweet.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/l1as5K7r6ns/

Original article
