For professors assigning group work for credit, there is a website that can help allocate the share of credit to be given to each person. It’s called Spliddit (spliddit.org), and was developed by a Carnegie Mellon computer scientist. The idea is not about doing the calculations for you; it’s about using a fair method that […]

Original URL: http://bestpracticeslegaled.albanylawblogs.org/2016/01/21/spliddit-org/  

Original article

Why We Use Om, and Why We’re Excited for Om Next

It’s an exciting time to be a frontend developer. Facebook’s React turned our ideas about rendering UIs on their heads. Om, in particular, has opened up a new way of thinking about how UIs work. The past couple of years have been a rush of new ideas and growth.

But right now, things are getting really good. This year saw the release of Facebook’s GraphQL and Netflix’s Falcor, and hot on their heels comes a project that borrows the best ideas from each of them: Om Next. Om Next is the successor to the current version of Om (known these days as “Om Now”). It keeps the best things about Om, throws out what didn’t work well, and replaces it with a much better approach.

Om Next is currently in alpha, but once it’s released, we plan to use it at CircleCI. I’d like to explain why that’s so exciting for us. But first, we need to look at why we went down this path at all.

Why React?

Since the dawn of time (okay, since the dawn of JavaScript), web developers have struggled with a particular problem: when new data arrives, how do you update the UI? That is, if you’re displaying a list,

  • Artichokes
  • Cabbage
  • Eggplant

and two new items are added—”Broccoli” between the first and the second, and “Dill” between the second and third—how do you update the DOM to reflect that change? We have two options, and each of them is troublesome:

  1. We could insert new list items into the existing DOM, but finding the right place to insert them is error-prone, and each insert will cause the browser to repaint the page, which is slow.

  2. We could throw out the entire list and rebuild it in one go, but re-rendering large amounts of DOM is also slow.

To make client-side web development possible, we have to solve—or at least mitigate—one of these problems. Until recently, most JS frameworks concentrated on the first problem, either by making the DOM easier to navigate, or by tying the elements to controller objects which take responsibility for keeping them up to date. These approaches have certainly mitigated Problem #1, but they haven’t solved it. Often, as in Backbone, they spread out the problem over lots of little components. Each has only a small chunk of DOM to wrestle with, but it still has to wrestle. And we still have lots of little DOM updates repainting the page.

React solves the second problem. Using React, we pretend that re-rendering the entire page is fast and easy. React, for its part, lets us pretend that we’re re-rendering the page, when in fact we’re “rendering” a tree of JavaScript objects, called the “Virtual DOM”: a complete specification of what we’d like the page to look like. React then compares the new Virtual DOM to the previous one to figure out exactly what has to change in the real DOM, and changes only those bits, all at once. The result is fast and easy.

(If you look closely, there’s a trick in there. React only looks like it’s solving the second problem, but in the end it really solves the first: it asks us to pretend to re-render the entire list, so that it can find the right places to insert the new items automatically.)

It’s hard to beat the simplicity of a system like this. In many cases, the view code is simply a function which takes data and returns DOM (or at least, Virtual DOM). It’s stateless and declarative. Until you need a bit of state, that is, which React can also handle. Today, I can’t imagine using anything else.
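To make that concrete, here’s a minimal sketch of the “view is a function of data” idea, written with hiccup-style vectors (the DOM-as-data notation Reagent uses; Om’s om.dom functions play the same role). The function and data names are hypothetical:

```clojure
;; A view as a pure function: grocery data in, a description of DOM out.
;; No mutation, no component state -- re-running it is "re-rendering".
(defn grocery-list-view [items]
  [:ul (for [{:keys [name]} items]
         [:li name])])

(grocery-list-view [{:name "Artichokes"} {:name "Cabbage"}])
;; => [:ul ([:li "Artichokes"] [:li "Cabbage"])]
```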

But at CircleCI we didn’t just run with React, we also switched our codebase to ClojureScript, and wrote our frontend in Om. ClojureScript was an easy sell: our backend was already written in Clojure, and there were already several React wrappers for ClojureScript. But why Om in particular?

Why Om?

Om was, if not the first ClojureScript React library, one of the first. But by the time we began moving to React, in mid-2013, there were already several, including Reagent and Quiescent. What did Om offer us that the others didn’t?

While React is concerned with how data flows into each component on the page, Om is also concerned with how your data is stored. Om is opinionated about the application state in a way that React isn’t. The major selling points for Reagent and Quiescent are that they’re more flexible than Om. But we liked Om’s opinions, and we liked what we got in exchange for agreeing with it.

  1. Application state as a single, immutable data structure. This is both the cost of entry and (depending on your opinions) a benefit itself. The entire application state is stored as a single data structure, in a single atom. Changing the state of the application always involves swap!ing the atom. That means that the application state always changes (atomically) from one consistent state to another.

    Om’s creator, David Nolen, likes to show off how easy this makes it to implement undo: just remember a list of old states, and you can reset! the state atom to any of them at any time. We liked it for another reason, though: we wanted to serialize the application state so users could send our support team a snapshot of their CircleCI interface. We’d pop in the data they sent, the page would deserialize it and reset! it into the state atom, and—presto-change-o—we’d see exactly what the customer saw.
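The undo trick is tiny to implement. A plain-Clojure sketch (the helper names are hypothetical, not Om’s API): since every change goes through swap!, we can remember each old value before replacing it:

```clojure
;; Single-atom app state; history is just a list of old state values.
(def app-state (atom {:count 0}))
(def history (atom '()))

(defn change! [f & args]
  (swap! history conj @app-state)   ; remember the state we're leaving
  (apply swap! app-state f args))   ; make the change atomically

(defn undo! []
  (when-let [prev (first @history)]
    (swap! history rest)
    (reset! app-state prev)))       ; roll back to the remembered value

(change! update :count inc)
(change! update :count inc)
(undo!)
@app-state
;; => {:count 1}
```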

    We do another trick, too: we keep part of the application state tree in the browser’s Local Storage. This is an easy way to keep track of “sticky” things after you close the app, like which repos you’ve collapsed or expanded in the sidebar, or how you prefer to sort your branches. We watch the path [:settings :browser-settings] and sync those values to a key in Local Storage. Then, on page load, we pull the data out of Local Storage and swap! it back into the application state. If we want a value to persist, we just store it in that part of the tree, and it becomes “sticky” automatically.

    Sticky settings stored in the app state tree.

    Sticky settings automatically synched to Local Storage.
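A sketch of that sync (helper names hypothetical; to keep it self-contained we stand in for the browser’s Local Storage with an atom, where the real code would call js/localStorage):

```clojure
(require '[clojure.edn :as edn])

(def local-storage (atom {}))                    ; stand-in for js/localStorage
(def settings-path [:settings :browser-settings])

(defn watch-sticky-settings! [state]
  ;; Mirror the settings subtree into "Local Storage" whenever it changes.
  (add-watch state ::sticky
    (fn [_ _ old new]
      (when (not= (get-in old settings-path) (get-in new settings-path))
        (swap! local-storage assoc :browser-settings
               (pr-str (get-in new settings-path)))))))

(defn restore-sticky-settings! [state]
  ;; On page load, pull saved settings back into the app state.
  (when-let [saved (:browser-settings @local-storage)]
    (swap! state assoc-in settings-path (edn/read-string saved))))

(def app-state (atom {:settings {:browser-settings {}}}))
(watch-sticky-settings! app-state)
(swap! app-state assoc-in (conj settings-path :sort-branches-by) :recency)
;; the value is now "sticky": a fresh state restored via
;; restore-sticky-settings! will contain it again
```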

  2. Cursors. Om needed a way to navigate that giant data structure, and it landed on cursors. Suppose you’ve got that grocery list from above, reflected in your application state.

    (def app-state
      (atom {:grocery-list [{:name "Artichokes"}
                            {:name "Cabbage"}
                            {:name "Eggplant"}]}))

    Suppose the user clicks an Edit button and changes “Artichokes” to “Avocadoes”. Somehow, you need to swap! the single atom that holds the entire page’s state, updating the correct element to the new value. Something like:

    (swap! app-state assoc-in [:grocery-list 0 :name] "Avocadoes")

    Except, the Edit button is drawn as part of the list item component. That list item shouldn’t know that it represents item 0 of its containing list, or that the list is stored at :grocery-list. It also shouldn’t know that the app state is stored in a var called app-state.

    Here’s the fix: instead of passing your component regular data ({:name "Artichokes"}), Om has you pass a cursor. A cursor contains the three pieces of information you need: the data itself ({:name "Artichokes"}), the path to that data ([:grocery-list 0]), and the atom it all lives in (app-state). It magically acts like it’s just the data, so (:name item-cursor) is "Artichokes", but Om’s om/update! function knows how to get the other two parts. Now, all you do is:

    (om/update! item-cursor :name "Avocadoes")

    Om hides the bits your component doesn’t need to care about.
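The mechanics can be sketched in a few lines of plain Clojure (a toy, not Om’s real cursor implementation): a cursor is just the value bundled with its path and its atom:

```clojure
(def app-state
  (atom {:grocery-list [{:name "Artichokes"} {:name "Cabbage"}]}))

;; A "cursor": the data, where it lives in the tree, and the atom holding it.
(defn cursor [state path]
  {:value (get-in @state path) :path path :state state})

;; The analogue of om/update!: the component supplies only key and value;
;; the cursor supplies the path and the atom.
(defn cursor-update! [{:keys [state path]} k v]
  (swap! state assoc-in (conj path k) v))

(def item-cursor (cursor app-state [:grocery-list 0]))
(cursor-update! item-cursor :name "Avocadoes")
(get-in @app-state [:grocery-list 0 :name])
;; => "Avocadoes"
```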

Stretching Om’s Seams

Om (that is, Om “Now”) was a great start, but it hasn’t quite held up to large-scale ClojureScript applications like ours. (And how could it: it was written before they existed.) At CircleCI, we’ve been discovering the places where Om’s model breaks down.

The Conundrum of Cursors: Most data is not a tree.

Om’s cursor-oriented architecture requires you to structure your application state as a tree, specifically a tree that matches the structure of your UI. Your root UI component, which contains the entire app, is given a root cursor to the entire state. Then it passes subtrees of the state (cursors) to its subcomponents. Above, the root component would pass (:grocery-list root-cursor) to some grocery-list component, which would pass each element of that list to some grocery-item component. The structure of the state is the structure of the UI. For simple apps, that works great.

But consider our [:settings :browser-settings] trick from before. What part of the UI renders all the “sticky” settings? None: any part of the UI may need to store a setting or two in that branch to make it “sticky”. Remember, the branch picker needs to store whether each repo’s branch list is collapsed or expanded. If the branch picker was only passed the list in [:dashboard :branch-list], how would it read and write to [:settings :browser-settings]? We hit this problem repeatedly from a lot of angles. As it turns out, UI elements sometimes have cross-cutting concerns. Their data doesn’t always map to a tree.

How did we solve this problem? Most of our components take the entire app state as their data. Parent components don’t pass their children subcursors with just the bits they care about, they pass them the whole enchilada. Then we have a slew of paths defined in vars, which we use to extract the data we want. It’s not ideal. But it’s what we’ve had to do.
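Sketched concretely (paths and data hypothetical), the workaround looks like this:

```clojure
;; Shared path vars: any component holding the whole state can reach any subtree.
(def branch-list-path      [:dashboard :branch-list])
(def browser-settings-path [:settings :browser-settings])

(def app-state
  (atom {:dashboard {:branch-list ["master" "staging"]}
         :settings  {:browser-settings {:collapsed-repos #{}}}}))

;; The branch picker reads its list...
(get-in @app-state branch-list-path)
;; ...and can still write a "sticky" setting in a different branch of the tree.
(swap! app-state update-in (conj browser-settings-path :collapsed-repos)
       conj "circleci/frontend")
```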

The Management of Mutations: Components don’t just change their own data.

When it comes time to change data, Om once again solves the simple version of the problem well, but quickly breaks down. As we saw above, if the grocery item “Artichokes” wants to change its name, that’s easy: it can use om/update! on its cursor. But what if it wants to delete itself? It can’t. That wouldn’t be changing the item, that would be changing the list. The grocery list component could delete the item by updating its cursor to a version with “Artichokes” removed:

(om/update! list-cursor 
  (into [] (remove #(= "Artichokes" (:name %))) list-cursor))

But in the UI, the “Delete” button should really be part of the grocery item component. Just as our data doesn’t map perfectly to our UI, neither do our mutation operations.

The usual solution to this problem in Om is core.async channels. The grocery list component would set up a channel and pass it to each list item. To delete itself, an item component would put a message on that channel. Meanwhile, the list component would run a go block to listen for the message, and when it got it, it would om/update! its cursor.

If that sounds like a complex solution to a common problem, you’re right: it is. But that’s what people do. Even Om’s TodoMVC example resorts to this.
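The pattern, sketched in plain Clojure (this needs the org.clojure/core.async dependency; the channel and component roles are hypothetical):

```clojure
(require '[clojure.core.async :refer [chan go-loop <! >!!]])

(def app-state
  (atom {:grocery-list [{:name "Artichokes"} {:name "Cabbage"}]}))

;; The list component owns the channel and the cursor, so it does the update...
(def delete-ch (chan))

(go-loop []
  (when-let [item-name (<! delete-ch)]
    (swap! app-state update :grocery-list
           (fn [items] (into [] (remove #(= item-name (:name %))) items)))
    (recur)))

;; ...while the item component merely announces that it wants to be deleted.
(>!! delete-ch "Artichokes")
```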

In the CircleCI app, we do the same thing, but we do it on a much larger scale. We have a pair of multimethods, frontend.controllers.controls/control-event and frontend.controllers.controls/post-control-event!, which handle messages like this for the entire app. (The first one transforms the state tree as necessary, and the second one performs any side effects.) There’s a lot of architecture here that we built ourselves. We put in a good deal of effort to avoid Om’s own approach (cursors), because they don’t fit our needs at this scale.

The Delivery of Data: How does the data get into the app state?

Om works really well when all of the data lives on the client. Once it’s backed by a server, things get trickier. For small apps, like a grocery list, you may be able to load all of the data you’ll need when the page loads. But in an app like CircleCI, we can’t load everything you might ever want to see upfront. We load things on demand. When you go to your dashboard, we load the latest builds. When you go to a build page, we fetch that build’s details. As the components on the screen change, we discover we need different information.

Today, our hook for this is the navigation action. Our navigation system uses routing from Secretary, and layers on top of that a multimethod dispatch system very similar to the control-event system we saw above, called navigation-event. When the user navigates to a build page, for instance, we hit post-navigated-to! :build, which fires off any API calls we need to fetch the build’s details. When those API calls return, they swap! the new data into the state tree.

Meanwhile, our components are trying to render a part of the state tree that’s still empty. That’s fine: while it’s empty, they show a loading spinner. When the API system swap!s in the data, Om rerenders the components on the page, displaying the build.

It’s a good system, but it has one major flaw: it’s the build page component which knows what data it needs to display, but it’s the navigation system which knows what data to fetch. Those are miles away from each other in the codebase. If we want the build page to display something new, we have to add it to the UI on the build page and add an API call to the navigation system. If we remove something from the build page, we could easily forget to remove it from the navigation system, and make unnecessary API calls every time a user views a build.

Luckily for us, the future is bright.

Why Om Next?

Om Next is the upcoming revision to Om. It’s really more of a reboot. David Nolen and the Om team have taken the principles behind Om, applied the experience of the last few years, and built something new. It’s currently in alpha, and not production ready, but it should be ready in a couple of months. When it is, we plan to migrate to it. Why bother? I’m glad you asked.

The tree is really a graph.

In Om Next, each component gets to declare exactly what data it needs. It declares a query, using a syntax similar to Datomic’s pull query syntax. Unlike a simple path through a tree, a query like this can navigate a graph. (In fact, Om Next’s queries are analogous to, and partly inspired by, Facebook’s GraphQL.)

For instance, perhaps some component needs to display the start times of the builds previous to the builds that were recently initiated by the current user. You’d never store that information in a tree under the path [:current-user :initiated-builds 0 :previous :start-at]. You wouldn’t nest your actual build records inside other builds’ :previous references like that:

  [{:id 146
    :repo-name "circleci/frontend"
    :start-at #inst "2015-12-17T17:13:59.167-00:00"
    :previous {:id 144
               :repo-name "circleci/frontend"
               :start-at #inst "2015-12-17T17:05:58.144-00:00"
               :previous {:id 141
                          :repo-name "circleci/frontend"
                          :start-at #inst "2015-12-17T17:05:13.512-00:00"
                          :previous ; and so on...
   {:id 145
    :repo-name "circleci/docker"
    :start-at #inst "2015-12-17T17:09:25.961-00:00"
    :previous {:id 143
               :repo-name "circleci/docker"
               :start-at #inst "2015-12-17T17:05:36.797-00:00"
               :previous {:id 138
                          :repo-name "circleci/docker"
                          :start-at #inst "2015-12-17T17:04:51.124-00:00"
                          :previous ; and so on...

That would be silly. You’d store it in some list of builds; say, [:builds 145 :start-at]. Your data would look something like this:

{:current-user {:initiated-builds [[:build/by-id 146]
                                   [:build/by-id 145]]}
 :build/by-id {146 {:id 146
                    :repo-name "circleci/frontend"
                    :start-at #inst "2015-12-17T17:13:59.167-00:00"
                    :previous [:build/by-id 144]}
               145 {:id 145
                    :repo-name "circleci/docker"
                    :start-at #inst "2015-12-17T17:09:25.961-00:00"
                    :previous [:build/by-id 143]}
               144 {:id 144
                    :repo-name "circleci/frontend"
                    :start-at #inst "2015-12-17T17:05:58.144-00:00"
                    :previous [:build/by-id 141]}
               143 {:id 143
                    :repo-name "circleci/docker"
                    :start-at #inst "2015-12-17T17:05:36.797-00:00"
                    :previous [:build/by-id 138]}
               ;; and so on...

You can’t navigate that with a cursor. Once you narrow in on :current-user and its :initiated-builds, you can’t get to build #146: it’s in a different branch of the tree. And once you narrow in on build #146, you can’t back out to find its previous build, #145. It turns out your data isn’t really a tree, it’s a graph.
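What you can do is follow the references by hand: each [:build/by-id n] ident is itself a valid get-in path into the map. A sketch against a trimmed-down version of the data above:

```clojure
(def app-state
  {:current-user {:initiated-builds [[:build/by-id 146]]}
   :build/by-id  {146 {:id 146 :previous [:build/by-id 144]}
                  144 {:id 144 :start-at "2015-12-17T17:05:58.144"}}})

;; An ident like [:build/by-id 146] doubles as a get-in path.
(defn resolve-ident [state ident]
  (get-in state ident))

(defn first-initiated-previous-start [state]
  (let [[ident]  (get-in state [:current-user :initiated-builds])
        build    (resolve-ident state ident)
        previous (resolve-ident state (:previous build))]
    (:start-at previous)))

(first-initiated-previous-start app-state)
;; => "2015-12-17T17:05:58.144"
```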

Queries let you navigate the graph of your data in all sorts of directions. The way Om Next does this is pretty clever: it takes something like the structure above and denormalizes the data you need into the places you expect it. Your original data is a graph, but what your UI sees is a tree, a tree that matches the structure of your UI perfectly. In Om Next, your query might be something like:

[{:current-user [{:initiated-builds [{:previous [:start-at]}]}]}]

And, depending on how you set things up (Om Next is exceptionally flexible), you might end up with a tree like:

  {:current-user
   {:initiated-builds
    [{:previous {:start-at #inst "2015-12-17T17:05:58.144-00:00"}}
     {:previous {:start-at #inst "2015-12-17T17:05:36.797-00:00"}}]}}

Notice that, in the original data, each build references a previous build, but in this response we don’t have infinitely deep recursion. Why? Because we’re only getting what we asked for in the query. We asked to go one level deep, and that’s what we got.

In Om Now, you stored your app state in a tree which had to match the shape of your UI. In Om Next, you store your app state in any shape that makes sense, and let Om (with some help from you) convert that data into a tree that matches your UI on the fly. When the shape of your UI changes, the shape of the query changes with it, so the shape of the data it receives changes automatically. Your UI does not drive the shape of your data. That’s a huge win.

Mutation is a first-class operation.

Since you no longer receive data in the form in which it’s stored, you no longer operate on that data directly the way you used to: by om/transact!ing cursors. Instead, Om Next asks you to define a set of mutation operations which can change your application’s state. These mutations are named, and what they do is defined outside of the components themselves, as part of what Om Next calls the “parser”.

Does that sound familiar? That’s exactly what we’ve already built in CircleCI, in frontend.controllers.controls, only this version doesn’t involve core.async shenanigans and doesn’t require maintaining a big custom architecture no one else has ever seen. Apparently we were on the right track, but it’s so much nicer to have this built into Om itself.
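The shape of the idea can be sketched as a plain multimethod (an illustration of named mutations, not Om Next’s actual parser API):

```clojure
;; Mutations are named; what they do is defined centrally, not in components.
(defmulti mutate (fn [_state mutation-key _params] mutation-key))

(defmethod mutate 'grocery/delete-item
  [state _ {:keys [name]}]
  (swap! state update :grocery-list
         (fn [items] (into [] (remove #(= name (:name %))) items))))

(def app-state
  (atom {:grocery-list [{:name "Artichokes"} {:name "Cabbage"}]}))

;; A component requests the mutation by name; it never touches the data shape.
(mutate app-state 'grocery/delete-item {:name "Artichokes"})
```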

Your components know what they need.

Remember our problem with getting data from the server? We had to hook into navigation events and then guess what data the components we were about to display would need. No more. In Om Next, our components have queries! We can ask them what they need. Rather than tying our API calls to navigation events, we tie our API calls to parts of our app state (like [:current-user :initiated-builds]). If we try to show a component that needs to know the current user’s initiated builds, that triggers an API call that asks the server for the data. If we stop using that component, we stop making that request. Automatically.
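A sketch of the inversion (plain data, not Om Next’s real defui syntax; the component and helper names are hypothetical): each component carries its query, so the app can ask the components on screen what data to fetch:

```clojure
;; Components declare their data needs as queries.
(def build-list-component
  {:query [{:current-user [{:initiated-builds [:id :start-at]}]}]})

(def settings-component
  {:query [{:settings [:browser-settings]}]})

;; The app derives which top-level data the current screen needs --
;; and therefore which API calls to make -- from the queries alone.
(defn needed-roots [components]
  (set (mapcat (comp keys first :query) components)))

(needed-roots [build-list-component settings-component])
;; => #{:current-user :settings}
```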

And because the mapping between data and API is centralized in the parser, we can batch our requests into fewer API calls, improving performance. We can even change which APIs deliver which data without touching the UI components at all!

So that’s what all the fuss is about

Om Next is exciting for us for a number of reasons. It vindicates some major decisions we made early on, it takes responsibility for a lot of architecture we’ve had to custom-build, and it will dramatically simplify the way we write most of our frontend application. Of course, the hard part now is waiting for it to be ready.

In the meantime, we’ll be working out the best way to gradually migrate our app from Om Now to Om Next, driving out bugs and hopefully improving the migration path for everyone. Like I said: it’s an exciting time to be a frontend developer!

If you’re excited by all this and want to try playing with it yourself, I heartily recommend Tony Kay’s om-tutorial. There are also several more focused tutorials available on the Om wiki. (All of the above are works in progress, since Om Next is still in flux, but they’re still worth checking out.)

Want to start shipping better code, faster? Sign up with CircleCI for free!

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/14aIerAhMI0/  


The Weather On Android Just Got A Whole Lot Better

If you are an Android user and want to know whether it’ll rain or snow tomorrow, just searching for ‘weather’ on Google has always given you a quick and easy way to find out. But while Google would happily show you basic weather info, this was never the most exciting of experiences. Starting today, however, you’ll see a far more graphical and in-depth weather experience… Read More

Original URL: http://feedproxy.google.com/~r/Techcrunch/~3/no7VtkCvJEg/  


Backdoor Account Found On Devices Used By White House, US Military

An anonymous reader writes: A hidden backdoor account was discovered embedded in the firmware of devices deployed at the White House and in various US Military strategic centers, more precisely in AMX conference room equipment. The first account was named Black Widow, and after security researchers reported its presence to AMX, the company’s employees simply renamed it to Batman, thinking nobody would notice. AMX did remove the backdoor after three months. In its firmware’s official release notes, AMX claimed that the two accounts were only used for debugging, just like Fortinet claimed that its FortiOS SSH backdoor was used only internally by a management protocol.



Original URL: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/CDfjB3ziTQM/backdoor-account-found-on-devices-used-by-white-house-us-military  


Docker Acquires Unikernel Systems as It Looks Beyond Containers

Docker today announced the acquisition of Unikernel Systems, a Cambridge, UK-based startup that aims to bring unikernels to the masses (or at least the masses of developers).

Docker plans to integrate support for unikernels into its own tools and services as it’s starting to look at technologies beyond containers to help developers build even more efficient microservices architectures. The price of the acquisition was not disclosed.

The basic idea behind unikernels is to strip down the operating system to the absolute minimum so it can run a very specific application. Nothing more, nothing less. This means you would compile the necessary libraries to run an application right into the kernel of the operating system, for example.


The result of this is a very small and fast machine that has fewer security issues than traditional operating systems (because you strip out so much from the operating system, the attack surface becomes very small, too).

Because of this, unikernels are great for applications where security and efficiency are paramount (think secure government systems, real-time trading platforms and IoT applications).

So why is Docker interested in all of this? Docker founder and CTO Solomon Hykes acknowledged that this is likely the “most obscure” of Docker’s acquisitions, but he also told me that he sees it as the company’s most exciting one to date.

The 13-person Unikernel Systems team largely comprises developers who previously worked on the Xen hypervisor. Unikernel Systems is a major contributor to the overall unikernel ecosystem and its open source components. Hykes tells me Docker will continue to be very active in this community.

With this acquisition, Docker is bringing a lot of deep technical knowledge into the fold. “Expect the Docker platform to be much more aggressive in solving problems lower in the stack,” Hykes said. “[This acquisition] gives us a lot more firepower to solve these problems.”

But that’s only partly what this acquisition is about. While you probably only think “containers” when you hear about Docker, the company now seems to think about Docker as an ecosystem that isn’t just about containers. In this view, Docker is mostly about moving the microservices movement forward, and if you look at it through this lens, then unikernels are a logical next step for Docker.

With containers, developers “got a taste of small,” Hykes told me. In his view, unikernels are “the next step in shrinking the payload from VMs to containers to unikernels.”

He does not, however, believe that one has to inevitably replace the other. Using unikernels means you have to make some tradeoffs — mostly around compatibility and tooling. Docker plans to integrate unikernel support into its own tools in the near future. “What nobody wants is three completely separate sets of tools,” he noted. “One for VMs, one for containers and one for unikernels.”

If you paid attention at DockerCon Europe last year (you did, right?), then all of this may be a little less of a surprise. At the event, Docker actually showed a brief demo of how its tools could be used to manage unikernel deployments. Anil Madhavapeddy, the CTO of Unikernel Systems and the project lead of MirageOS (an open source library operating system for building unikernels), ran that on-stage demo.

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/NHzmohQyQM4/  


Unikernel Systems Joins Docker

I’m happy to announce today that Unikernel Systems is part of Docker!

Unikernels compile your source code into a custom operating system that includes only the functionality required by the application logic. That makes them small, fast, and improves efficiency. Unikernel Systems was formed last year to build tools that allow developers to take advantage of a growing number of unikernel projects.

At Docker, we’re excited to have Unikernel Systems as part of the team. Unikernels are an important part of the future of the container ecosystem. Comprised of pioneers from the Xen project, the open-source virtualization platform that fuels the majority of workloads on public clouds, and developers with experience in modern day application-centric programming languages, the Unikernel Systems team brings to Docker vast knowledge and a rich heritage in developing next-generation infrastructure technologies. The Unikernel Systems team will continue to support the growing unikernel community while working closely with the rest of Docker to make sure unikernels integrate well with Docker tools.

This video by Anil Madhavapeddy explains a bit more about unikernels.

And check out this video of Anil presenting at DockerCon EU on running unikernels with Docker.

To find out more about unikernels, and contribute to existing projects, check out the project page on Unikernel.org.

Unikernel Systems’ Amir Chaudhry and Richard Mortier will present at our upcoming Docker Online Meetup on Wednesday, January 27th at 9 am PST / 6 pm CET.

Register now to learn more about unikernels!

Learn More about Docker

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/45GTY4gOFoU/  


Patreon Gains $30M Series B Funding to Support Growth

Hiring the right developers is an issue for many startups in Silicon Valley, but Patreon CEO Jack Conte tells TechCrunch that the speed at which he needs to hire is a major issue for his fast-growing subscription-based artist funding platform.

“We need to bring in so many people so fast. We need to keep up with hiring and keep up with making all of the things,” Conte said in a recent phone interview.

Patreon just closed on a fresh round of $30 million in Series B funding, led by Thrive Capital, to help hire more folks to make those “things.”

Conte and Patreon co-founder Sam Yam started the site in 2013 as a way to support artists in their pursuit of a decent living while doing what they love. The startup has since grown to nearly 50 people mostly working out of a SOMA warehouse space in downtown San Francisco.

Bigger players such as YouTube are now capitalizing on a similar idea. YouTube Red, the newish subscription service offering a revenue split with content rights holders, might attract Patreon’s core users – creative types hoping to make a living on their talent.

However, Conte believes YouTube is a complementary service to what Patreon provides, one that only strengthens the potential for future growth on his platform.

“It’s a very different product,” Conte said of YouTube. “The idea of paying a subscription instead of ads is just very different from paying to support an artist.”

There are now more than 17,000 artists within 15 categories on Patreon, with photography, animation, and crafts growing more than 80 percent in 2015.

Conte plans to use the new funds to hire more engineers and grow the product team. He’ll also put some of it toward the mobile app, new creator tools, and getting a handle on a now-bustling office.

Thrive is a new and strategic investor in this round, along with Allen and Company. Other participants include Charles River Ventures, Index Ventures, Accomplice and Freestyle Capital.

The Series B round puts the total now raised for Patreon at $47.1 million.

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/vnqM0xdqn6Q/  



