Unsupervised Learning with Even Less Supervision Using Bayesian Optimization

By: Ian Dewancker, Research Engineer

In this post on integrating SigOpt with machine learning frameworks, we will show you how to use SigOpt and XGBoost to efficiently optimize an unsupervised learning algorithm’s hyperparameters to increase performance on a classification task.

As we previously discussed, fully supervised learning algorithms require each data point to have an associated class or output. In practice, however, it is often the case that relatively few labels are available at training time, and labels are costly or time-consuming to acquire. For example, it might be a very slow and expensive process for a group of experts to manually investigate and classify thousands of credit card transaction records as fraudulent or legitimate. A better strategy might be to study the large collection of transaction data without labels, building a representation that better captures the variations in the transaction data automatically.

Unsupervised Learning

Unsupervised learning algorithms are designed with the hope of capturing some useful latent structure in data. These techniques can often enable dramatic gains in performance on subsequent supervised learning tasks, without requiring more labels from experts. In this post we will use an unsupervised method on an image recognition task posed by researchers at Stanford [1], where we try to recognize house numbers from images collected using Google Street View (SVHN) [5]. This is a more challenging problem than MNIST (another popular digit recognition data set) as the appearance of each house number varies quite a bit and the images are often cluttered with neighboring digits:


Figure 1: 32×32 cropped samples from the classification task of the SVHN dataset. Each sample is assigned only a single digit label (0 to 9) corresponding to the center digit. (Sermanet [6])

In this example, we assume access to a large collection of unlabelled images $X_u$, where the correct answer is not known, and a relatively small amount of labelled data $(X_s, y)$ for which the true digit in each image is known (often requiring a non-trivial amount of time and money to collect). Our hope is to find a suitable unsupervised model, built using our large collection of unlabelled images, that transforms images into a more useful representation for our classification task.

Unsupervised and supervised learning algorithms are typically governed by small sets of hyperparameters $(\lambda_u, \lambda_s)$ that control algorithm behavior. In our example pipeline below, $X_u$ is used to build the unsupervised model $f_u$, which is then used to transform the labelled data $(X_s, y)$ before the supervised model $f_s$ is trained. Our task is to efficiently search for good hyperparameter configurations $(\lambda_u, \lambda_s)$ for both the unsupervised and supervised algorithms. SigOpt minimizes the classification error $E(\lambda_u, \lambda_s)$ by sequentially generating suggestions for the hyperparameters of the model $(\lambda_u, \lambda_s)$. For each suggested hyperparameter configuration, a new unsupervised data representation is formed and fed into the supervised model. The observed classification error is reported and the process repeats, converging on the set of hyperparameters that minimizes the classification error.


Figure 2: Process for coupled unsupervised and supervised model tuning
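Concretely, each iteration of this loop evaluates one joint configuration end to end. Below is a minimal sketch of such an evaluation, assuming a simplified setup with k-means as the unsupervised model and XGBoost's scikit-learn wrapper as the supervised one; suggest_configuration and report_observation are hypothetical stand-ins for SigOpt's suggest/observe cycle, not its actual client methods:

import xgboost as xgb
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score

def evaluate(lambda_u, lambda_s, X_u, X_s, y):
    f_u = KMeans(n_clusters=lambda_u).fit(X_u)        # unsupervised model built on unlabelled data
    X_rep = f_u.transform(X_s)                        # labelled data in the learned representation
    f_s = xgb.XGBClassifier(learning_rate=lambda_s)   # supervised model
    return 1.0 - cross_val_score(f_s, X_rep, y, cv=5).mean()   # classification error E

evaluation_budget = 90   # evaluations used for the combined model in this experiment
for _ in range(evaluation_budget):
    lam_u, lam_s = suggest_configuration()            # hypothetical: next SigOpt suggestion
    report_observation(evaluate(lam_u, lam_s, X_u, X_s, y))    # hypothetical: report error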

Data scientists use their domain knowledge to select appropriate unsupervised and supervised models, but the task of selecting the best parameters for these models is often daunting and non-intuitive. Even simple unsupervised models—such as the whitening strategy we discuss below—often introduce tunable parameters, leaving potentially good models on the table simply because a winning configuration was never found.

SigOpt offers Bayesian optimization as a service, capable of efficiently searching through the joint variations $(\lambda_u, \lambda_s)$ of both the supervised and unsupervised aspects of machine learning systems (Figure 2). This allows experts to unlock the power of unsupervised strategies with the assurance that each model is reaching its full potential automatically. The gains achieved by methods like SigOpt are additive with feature engineering, allowing for better results and faster iteration with less trial and error.

Show Me The Code

The source code for this experiment and setup script are available here. SigOpt can quickly identify the optimal configuration of these complicated models, much faster than traditional methods such as grid search and random search, especially when more than 2 or 3 hyperparameters are at play. Finding optima more quickly can also drastically save on computational resource costs (e.g. AWS instances) while still achieving comparable or better model performance.

Unsupervised Model

We start with the initial features describing the data: raw pixel intensities for each image. The goal of the unsupervised model is to transform the data from its original representation to a new (more useful) learned representation without using labeled data. Specifically, you can think of this unsupervised model as a function $f : \mathbb{R}^N \rightarrow \mathbb{R}^J$, where $N$ is the number of features in our original representation and $J$ is the number of features in the learned representation. In practice, expanded representations (sometimes referred to as a feature map) where $J$ is much larger than $N$ often work well for improving performance on classification tasks [2].

Image Transform Parameters ($s$, $w$, $K$)

A simple but surprisingly effective transformation for small images was proposed by Coates et al. [1], in which image patches are transformed into distances to $K$ learned centroids (average patches) found using the k-means algorithm, and then pooled together to form the final feature representation, as outlined in the figure below:


Figure 3: Feature extraction using a w-by-w receptive field and stride s. The image is covered with w-by-w patches spaced s pixels apart, each of which is mapped to a K-dimensional feature vector to form a new image representation. These vectors are then pooled over the 4 quadrants of the image to form the classifier feature vector. (Coates [1])

In this example we are working with the 32×32 (n=32) converted gray-scale (d=1) images of the SVHN dataset. We allow SigOpt to vary the stride length ($s$) and patch width ($w$) parameters. The figure above illustrates a pooling strategy that considers the quadrants in the 2×2 grid of the transformed image representation, summing them to get the final transformed vector. We used the resolution suggested in [1] and kept $\text{pool}_r$ fixed at 2. $f(x)$ represents a $K$-dimensional vector that encodes the distances to the $K$ learned centroids, and $f_i(x)$ refers to the distance of instance $x$ to centroid $i$. In this experiment, $K$ is also a tunable parameter. The final feature representation of each image has $J = K \cdot \text{pool}_r^2$ features.
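For concreteness, here is a minimal numpy sketch of the patch extraction and quadrant pooling steps; extract_patches and quadrant_pool are illustrative helpers, not the experiment's actual code:

import numpy

def extract_patches(img, w, s):
    # slide a w-by-w window over the n-by-n grayscale image with stride s;
    # each patch is flattened into one row of the output matrix
    n = img.shape[0]
    return numpy.array([img[r:r + w, c:c + w].ravel()
                        for r in range(0, n - w + 1, s)
                        for c in range(0, n - w + 1, s)])

def quadrant_pool(Z, grid):
    # Z: one K-dimensional centroid-distance vector per patch location,
    # laid out on a grid-by-grid spatial grid; sum the vectors in each quadrant
    K = Z.shape[1]
    fmap = Z.reshape(grid, grid, K)
    h = grid // 2
    quads = [fmap[:h, :h], fmap[:h, h:], fmap[h:, :h], fmap[h:, h:]]
    return numpy.concatenate([q.sum(axis=(0, 1)) for q in quads])  # J = 4 * K features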

Whitening Transform Parameter ($\epsilon_{\text{zca}}$)

Before generating the image patch centroids and any subsequent patch comparisons to these centroids, we apply a whitening transform to each patch. When dealing with image data, whitening is a common preprocessing transform which removes the correlation between all pairs of individual pixels [3]. Intuitively, it can be thought of as a transformation that highlights contrast in images. It has been shown to be helpful in image recognition tasks, and may also be useful for other feature data. The figure below shows several example image patches before and after the whitening transform is applied.


Figure 4: Comparison of image patches before and after whitening (Stansbury [7])

The whitening transformation we use is known as ZCA whitening [4]. This transform is achieved by cleverly applying the eigendecomposition of the covariance matrix estimate to a mean-adjusted version of the data matrix, so that the expected covariance of the data matrix becomes the identity. A regularization term $\epsilon_{\text{zca}}$ is added to the diagonal eigenvalue matrix, and $\epsilon_{\text{zca}}$ is exposed as a tunable parameter to SigOpt.

# ZCA whitening of data matrix X (n x m: n samples, m features)
import numpy
from sklearn.covariance import LedoitWolf

cov = LedoitWolf().fit(X)                        # regularized covariance estimate
D, U = numpy.linalg.eigh(cov.covariance_)        # eigenvalues D, eigenvectors U
V = numpy.diag(1.0 / numpy.sqrt(D + eps_zca))    # inverse square root of regularized eigenvalues
Wh = numpy.dot(numpy.dot(U, V), U.T)             # ZCA whitening transform matrix
mu = numpy.mean(X, axis=0)
X_whitened = numpy.dot(X - mu, Wh)               # mean-adjust, then whiten

Centroid Distance Sparsity Parameter ($\text{sparse}_p$)

Each whitened patch in the image is transformed by considering the distances to the learned $K$ centroids. To control the sparsity of the representation, we keep only distances that are below a certain percentile, $\text{sparse}_p$, of the pairwise distances between the current patch and the centroids. Intuitively this acts as a threshold which allows only the “close” centroids to be active in our representation.

# compute distances between whitened patches and all K learned centroids
# (k_means is a fitted sklearn KMeans; transform returns patch-to-centroid distances)
Z = k_means.transform(img_ptchs)
# per-patch threshold: the sparse_p-th percentile of that patch's centroid distances
tau = numpy.percentile(Z, sparse_p, axis=1, keepdims=True)
# distances above the threshold are zeroed out; closer centroids get larger activations
Z = numpy.maximum(0, tau - Z)

The figure below illustrates the idea with a simplified example. A whitened image patch (in the upper right) is compared against the 4 learned centroids after k-means clustering. Here, let’s imagine we have set the percentile threshold to 50, so only the distances in the lower half of all centroid distances persist in the final representation; the others are zeroed out.


Figure 5: Sparsity transform; distances from a patch to centroids above the 50th percentile are set to 0

While the convolutional aspects of this unsupervised model are tailored to image data, the general approach of transforming feature data into a representation that reflects distances to learned archetypes seems suitable for other data sets and feature spaces [8].

Supervised Model

With the learned representation of our data, we now seek to maximize performance on our classification task using a smaller labelled dataset. While random forests are an excellent and simple classification tool, better performance can typically be achieved with carefully tuned ensembles of boosted classification trees.

Gradient Boosting Parameters ($\gamma$, $\theta$, $M$)

We consider the popular library XGBoost as our gradient boosting implementation. Gradient boosting is a generic boosting algorithm that incrementally builds an additive model of base learners, which are themselves simpler classification or regression models. It works by building a new model at each iteration that best reconstructs the gradient of the loss function with respect to the previous ensemble model. In this way it can be seen as a form of functional gradient descent. In the pseudocode below we outline building an ensemble of regression trees, but the same method can be used with a classification loss function $L$.


Algorithm 1: Pseudocode for supervised gradient boosting using regression trees as base learners
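To make the idea concrete, here is a minimal sketch of this procedure for squared-error loss, where the negative gradient at each step is simply the residual vector; $M$, $\gamma$, and the tree settings $\theta$ are discussed just below:

import numpy
from sklearn.tree import DecisionTreeRegressor

def gradient_boost(X, y, M, gamma, theta):
    # start from the best constant model, then add M trees,
    # each fit to the residuals (negative gradient) of the ensemble so far
    F = numpy.full(len(y), numpy.mean(y))
    trees = []
    for _ in range(M):
        residuals = y - F                              # negative gradient of squared error
        tree = DecisionTreeRegressor(**theta).fit(X, residuals)
        F = F + gamma * tree.predict(X)                # learning rate scales each contribution
        trees.append(tree)
    return trees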

Important parameters governing the gradient boosting algorithm include $M$, the number of base learner models in the ensemble, and $\gamma$, the learning rate, which controls the relative contribution of each new base learner in the final additive model. Each base learner is also governed by its own set of parameters $\theta$. Here we consider classification trees as our base learners, governed by a familiar set of parameters managing tree growth and regularization (e.g., max depth, sub_sample). We expose these parameters to SigOpt to optimize simultaneously with the parameters governing the unsupervised transformation discussed previously.
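In XGBoost's interface these map roughly onto the following arguments (the values shown are illustrative placeholders, not the tuned configuration from this experiment; note that the text's $\gamma$ is XGBoost's learning rate eta, not XGBoost's own gamma parameter):

import xgboost as xgb

# X_train, y_train: the (transformed) labelled training data
params = {
    'objective': 'multi:softmax',   # 10-class digit classification
    'num_class': 10,
    'eta': 0.1,                     # learning rate (the text's gamma)
    'max_depth': 6,                 # tree-growth parameter (part of theta)
    'subsample': 0.8,               # row subsampling (part of theta)
}
dtrain = xgb.DMatrix(X_train, label=y_train)
booster = xgb.train(params, dtrain, num_boost_round=200)   # M = 200 base learners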

Classification Performance

To compare model performance, we use accuracy: the fraction of images in the test set that are correctly classified. For example, a model that correctly recognizes the house numbers in 91% of the images would have an accuracy score of 0.91.

We compare the ability of SigOpt to find the best hyperparameter configuration against the industry-standard method of random search, which usually outperforms grid search and manual search (Bergstra [9]), and against a baseline of an untuned model.

Because the underlying methods used are inherently stochastic, we performed 10 independent hyperparameter optimizations using both SigOpt and random search for both the purely supervised and combined models. Hyperparameter optimization was performed on the accuracy estimate from an 80/20 cross-validation fold of the training data (73k examples). The ‘extra’ set associated with the SVHN dataset (530k examples) was used to simulate the unlabelled data $X_u$ in the unsupervised parts of this example.

For the combined model, 90 sequential configuration evaluations (~50 CPU hrs) were used for both SigOpt and random search. For the purely supervised model, 40 sequential configuration evaluations (~8 CPU hrs) were used for both SigOpt and random search. In practice, SigOpt is usually able to find good hyperparameter configurations with a number of evaluations equal to 10 times the number of parameters being tuned (9 for the combined model, 4 for the purely supervised model). The same parameters and domains were used for XGBoost in both settings. As a baseline, we also report the hold-out accuracy of an untuned scikit-learn random forest using the raw pixel intensity features.

After hyperparameter optimization was completed for each method, we compared accuracy using a completely held out dataset (the SVHN test set, 26k examples) with the best configuration found in the tuning phase. The best hyperparameter configuration for each method was run 10 times on the hold-out dataset, and the mean of these runs is reported in the table below. SigOpt outperforms random search with a p-value of 0.0008 using the unpaired Mann-Whitney U test.

Method                                   Hold out ACC
SigOpt (XGBoost + Unsup. Feats)          0.8601 (+49.2%)
Random Search (XGBoost + Unsup. Feats)   0.8190
SigOpt (XGBoost + Raw Feats)             0.7483
Random Search (XGBoost + Raw Feats)      0.7386
No Tuning (sklearn RF + Raw Feats)       0.5756

Table 1: Comparison of model accuracy on held out (test) dataset after different tuning strategies
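The significance test itself is a one-liner with scipy; acc_sigopt and acc_random below are hypothetical placeholders for the 10 hold-out accuracy measurements of each method:

from scipy.stats import mannwhitneyu

# acc_sigopt, acc_random: lists of the 10 hold-out accuracies per method
stat, p_value = mannwhitneyu(acc_sigopt, acc_random, alternative='greater')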

The chart below tracks the optimization paths of the SigOpt and random search strategies when tuning the combined model (Unsup Feats) and only the supervised model (Raw Feats). We plot the interquartile range of the best seen cross-validated accuracy score on the training set at each objective evaluation during the optimization. As mentioned above, 90 evaluations were used in the optimization of the combined model and 40 in the purely supervised setting. SigOpt outperforms random search in both settings on this training data (p-value 0.005 using the same Mann-Whitney U test as before).


Figure 6: Optimization traces of CV accuracy using SigOpt and random search

Closing Remarks

Unsupervised learning algorithms can be a powerful tool for boosting the performance of your supervised models when labelling is an expensive or slow process. Tuning automatically brings each model to its full potential, and SigOpt was built to help with this non-intuitive task. As this example demonstrates, careful parameter tuning can enable engineering and data science teams to better leverage their unlabelled data and build more predictive data products in less time and at lower cost. Sign up for a free evaluation today and get the most from your models!

Additional Reading

SigOpt effectively optimizes machine learning models across a variety of datasets and algorithms. See our other examples:

References

[1]: Adam Coates, Honglak Lee, Andrew Y. Ng. An Analysis of Single-Layer Networks in Unsupervised Feature Learning. International Conference on Artificial Intelligence and Statistics (AISTATS). 2011. [PDF]

[2]: Yoshua Bengio. Deep Learning of Representations for Unsupervised and Transfer Learning. JMLR Workshop Proceedings: Unsupervised and Transfer Learning. 2012. [PDF]

[3]: Alex Krizhevsky, Geoffrey Hinton. Learning Multiple Layers of Features from Tiny Images. 2009. [PDF]

[4]: Adam Coates, Andrew Y. Ng. Learning Feature Representations with K-means. 2012. [PDF]

[5]: Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, Andrew Y. Ng. Reading Digits in Natural Images with Unsupervised Feature Learning. NIPS Workshop on Deep Learning and Unsupervised Feature Learning. 2011. [PDF]

[6]: Pierre Sermanet, Soumith Chintala, Yann LeCun. Convolutional Neural Networks Applied to House Numbers Digit Classification. International Conference on Pattern Recognition (ICPR). 2012. [PDF]

[7]: Dustin Stansbury. The Statistical Whitening Transform. The Clever Machine. 2014. [LINK]

[8]: Sander Dieleman, Benjamin Schrauwen. Multiscale Approaches to Music and Audio Feature Learning. International Society for Music Information Retrieval Conference (ISMIR). 2013. [PDF]

[9]: James Bergstra, Yoshua Bengio. Random Search for Hyper-Parameter Optimization. Journal of Machine Learning Research (JMLR). 2012. [PDF]


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/iitFGCw96Wc/sigopt-for-ml-unsupervised-learning-with-even


Legal Technology Panels at SXSW 2016

Several panels related to legal technology are being presented at the SXSW 2016 Conference in Austin, Texas, USA, 11-20 March 2016:

Thanks to friends at New York Legal Hackers for part of this list.

If you know of other legal-technology-related panels being presented at SXSW 2016, please feel free to tell us about them in the comments to this post.



Original URL: https://legalinformatics.wordpress.com/2016/03/11/legal-technology-panels-at-sxsw-2016/


Using Google’s Python Client Library to Authorise Your Desktop App with OAuth2


In a previous post I mentioned how I’m going to stop using Blogger’s built in post editor due to the horrendous HTML it produces. Well, I had no luck finding a desktop blogging client that worked well. The existing blogging clients either don’t work on linux or development was stopped some ten years ago.

As such, I am now developing my own desktop blogging client in Python. You can view the project on GitHub.

One of the things that was a bit of a pain to figure out was authenticating the desktop client with Google’s API using OAuth 2.0. Personally I don’t think it’s very well explained on Google’s website and find the site uncomfortable to navigate. So, for the convenience of all of you that want to connect to and authorise a desktop application with Google’s API, here’s how to do it.

GETTING THE CLIENT LIBRARY

First off, we need to download the Python client library. I’m going to assume that everyone reading this blog is using pip; if you aren’t… Start using it. If you are one of the elite using Python 3 (did I ruffle a few feathers?), then lucky you, you should already have it installed. If you don’t have it, Google is your friend.
In our terminal we execute the following command:

$ pip install --upgrade google-api-python-client

…And that’s it, well done.

CREATING A NEW GOOGLE APIS CONSOLE PROJECT AND DOWNLOADING YOUR CLIENT SECRET

To use Google’s API, we need to have a google account (you have one, right?) for accessing the Google Developers Console.
We then navigate to the Developer Console’s projects page and create a new project for our application by clicking the ‘Create project’ button and filling in the form that pops up.

Enter your project’s name and hit create.

Then we get redirected to our project’s dashboard. On this dashboard there is a large blue box saying ‘Use Google APIs’ which we click.
Click this to be taken to the Google APIs page.
We then get taken to a page which displays all of the APIs available to us; there are lots. Select the API that you are planning on using; I will be using Blogger as an example.
Once we’ve selected the API we will be using, we will again be redirected to another page. On this page there will be a button that says ‘Enable’; clicking this lets us use the selected API.
We are then presented with a warning box that prompts us to create credentials, which is exactly what we will do.
Click the ‘Go to Credentials’ button.
We then get taken to a new page with a few options for us to fill in. We select the version of the API we want, and for ‘Where will you be calling the API from?’ select ‘Other UI (e.g. Windows, CLI tool)’. Also, we will be accessing user data, so select the ‘User data’ option and click the ‘What credentials do I need?’ button.
Fill in the appropriate details and hit the blue button.
Then go through the next two steps, ‘Create an OAuth 2.0 client ID’ and ‘Set up the OAuth 2.0 consent screen’, and input the information that applies to us. Download the credential information if you like and click the done button.
We get taken to a page listing credentials. Next to the credential we just created there is a download button; press that to download the ‘client secret’, which we will need later, and move it to the root directory of your project.
Download the client secret by clicking the circled button.
That’s all we need to do with the Google Developer’s Console. Next, onto the code.

USING GOOGLE’S PYTHON CLIENT LIBRARY TO AUTHORISE YOUR APPLICATION

The code consists of four steps:
  1. Getting an authorisation code
  2. Exchanging the authorisation code for credentials
  3. Creating an httplib2.Http object and authorising it using the credentials
  4. Creating an API service object to make calls to the API

THE CODE

import webbrowser

import httplib2
import apiclient.discovery
import oauth2client.client
import oauth2client.file


def get_credentials():
    """Gets google api credentials, or generates new credentials
    if they don't exist or are invalid."""
    scope = 'https://www.googleapis.com/auth/blogger'

    flow = oauth2client.client.flow_from_clientsecrets(
            'client_secret.json', scope,
            redirect_uri='urn:ietf:wg:oauth:2.0:oob')

    storage = oauth2client.file.Storage('credentials.dat')
    credentials = storage.get()

    if not credentials or credentials.invalid:
        auth_uri = flow.step1_get_authorize_url()
        webbrowser.open(auth_uri)

        auth_code = input('Enter the auth code: ')
        credentials = flow.step2_exchange(auth_code)

        storage.put(credentials)

    return credentials

def get_service():
    """Returns an authorised blogger api service."""
    credentials = get_credentials()
    http = httplib2.Http()
    http = credentials.authorize(http)
    service = apiclient.discovery.build('blogger', 'v3', http=http)

    return service


WHAT’S GOING ON

scope = 'https://www.googleapis.com/auth/blogger'

flow = oauth2client.client.flow_from_clientsecrets(
    'client_secret.json', scope,
    redirect_uri='urn:ietf:wg:oauth:2.0:oob')
First, in get_credentials() we create a flow object from the client_secret.json file that we downloaded earlier. We need to specify the scope and the redirect_uri.

The scope declares the API and the level of access that we will be using, you can find a list of scopes here. The redirect_uri is how the response will be sent to our application, for more information on redirect uris read this.

storage = oauth2client.file.Storage('credentials.dat')
credentials = storage.get()

This code allows us to store the credentials so we don’t have to re-authorise the client every time. It creates a Storage object and loads credentials.dat, which contains our credentials. If it doesn’t exist it gets created. It then gets the credentials from the storage object.

if not credentials or credentials.invalid:
    auth_uri = flow.step1_get_authorize_url()
    webbrowser.open(auth_uri)

Then it checks whether the credentials either don’t exist or are for some reason invalid. If this check passes it means that new credentials are required, so it generates an authorisation url and opens it in the system’s default browser.

auth_code = input('Enter the auth code: ')
credentials = flow.step2_exchange(auth_code)

storage.put(credentials)

On the page that is opened in the default browser, the user is presented with a code and then prompted to input it into the application. We then ask the user to enter the authorisation code that they were given and use it to generate credentials. Obviously there are better and more user-friendly ways to handle the inputting of the code. The credentials are then stored for later.

credentials = get_credentials()
http = httplib2.Http()
http = credentials.authorize(http)

Next, in get_service() we get our credentials using the get_credentials() function we made earlier and use them to authorise an httplib2.Http object, which the client library will use to issue HTTP requests.

service = apiclient.discovery.build('blogger', 'v3', http=http)

Finally, we create the API service object, which we can use to interact with the API, and we are finished.
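For example, with the authorised service object, a call to the Blogger API looks like this (the blog ID below is a placeholder):

service = get_service()
posts = service.posts().list(blogId='YOUR_BLOG_ID').execute()  # placeholder blog ID
for post in posts.get('items', []):
    print(post['title'])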

Now your application will be able to authorise itself, connect to your api of choice and interact with it via Google’s client library. It took me a while to navigate Google’s documentation and a little bit of trial and error to get this working properly, but hopefully this post will allow you to skip over that and get right into the actual building of your application.

If you liked this post or found it helpful, please share it so other people can find it too. Also, don’t forget to subscribe to my posts feed so you don’t miss anything.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/zLV5EczRVaU/google-python-library-oauth2.html


Hard Tech is Back

First of all, congrats to Kyle, Dan, and the rest of the Cruise team. You all have made amazing progress and we look forward to seeing more in the future.

A popular criticism of Silicon Valley, usually levied by people not building anything at all themselves, is that no one is working on or funding “hard technology”. While we disagree with this premise—many of the most important companies start out looking trivial—we want to be clear that we’re actively looking to fund more hard tech companies, and would love to see more get started.

At YC, we started funding these sorts of companies in earnest in 2014, to widespread commentary that this was a silly waste of time. Cruise, which we funded that winter, is getting acquired by GM. From the Summer 2014 batch, 3 of the 4 companies who have raised the most money since graduating YC are “hard tech” companies.

We expect many more big wins. The YC model works much better for these sorts of companies than most people, including ourselves, thought.

So, if you’re thinking about starting one, we’d like to talk. And we think we can help. (You’ll probably find a lot of other people willing to help too, although unfortunately you’ll still face major fundraising challenges. But in many ways, it’s easier to start a hard company than an easy company—more people want to join the mission.)

Leave the Medium thought pieces about when the stock market is going to crash and the effect it’s going to have on the fundraising environment to other people—it’s boring, and history will forget those people anyway. There has never been a better time to take a long-term view and use technology to solve major problems, and we’ve never needed the solutions more than we do right now.

Different YC partners have different interests, but I’m particularly excited about AI (both general AI and narrow AI applied to specific industries, which seems like the most obvious win in all of startups right now), biotech, and energy.

We hope to hear from you.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/CeB-DIKlm8I/hard-tech-is-back


Dell releases new XPS 13 Developer Edition, launches Linux-based Precision laptops worldwide

Intel Skylake Dell XPS 13 Developer Edition

On the laptop side, Dell may be best known for its Windows devices, but, as some of you may already know, it also offers some killer Linux-based alternatives for prosumers. It all started out nearly four years ago with Project Sputnik, which led to the release of the first-gen XPS 13 Developer Edition, a Ubuntu-flavored version of the popular ultrabook, in late-2012.

Fast forward to today and Project Sputnik is more than just a one device effort, as Dell has expanded the reach of the program to also include some of its professional-grade laptops. Now, the company steps it up a notch by introducing the Intel Skylake refresh of XPS 13 Developer Edition, and making the Ubuntu-toting Precision laptops available worldwide.

What are the highlights of the new XPS 13 Developer Edition? Well, Project Sputnik lead Barton George says that it can be had with sixth-generation Core i7 processors, with a Core i5 option also on the cards, solid state drives with up to 1 TB of storage, up to 16 GB of RAM, an InfinityEdge display (in full HD and QHD+ versions), Ubuntu 14.04 LTS (Long Term Support) and all the “necessary hardware drivers, tools and utilities” one might need.

If you are not familiar with what LTS actually means, it is a branch of Ubuntu which Canonical supports for five years from its release. In contrast, the standard version of the operating system is guaranteed to receive updates for at least nine months. LTS is, therefore, a better option for Project Sputnik devices, because such laptops are aimed at professionals who seek reliability over cutting-edge software features.

George notes that Ubuntu 16.04 LTS (codenamed Xenial Xerus) will make its public debut in April, but there is no “date for when factory installation will become available” although support is planned. Those who wish to upgrade are advised to follow Canonical’s instructions, which are available here. Among the changes that Ubuntu 16.04 LTS brings are Linux 4.4, Python 3.5 and Golang 1.6.

When I first talked about Project Sputnik, I noted that Dell made a popular choice by opting for Ubuntu. This distribution is still among the most popular, currently ranking third on DistroWatch.com, behind Mint and Debian.

The new XPS 13 Developer Edition can now be purchased in the US, with Canadian and European availability “being ready for launch as we speak”, according to George. Prices start at $1,549 for a sixth-generation Core i7-6560U version with 8 GB of RAM, a 256 GB SSD and the QHD+ InfinityEdge display.

Regarding the worldwide availability of Ubuntu-based Precision laptops, Dell says that we are looking at Precision 5510, Precision 3510, Precision 7510 and Precision 7710 workstations. These devices can all be customized depending on the customer’s needs, but only the first two are available as of right now; the other two models will be offered “within a week”, according to George.

George also says that customers will see a number of over-the-air patches for these systems, which were not available early enough to be included in the shipping software.


Original URL: http://feeds.betanews.com/~r/bn/~3/vPYYTnxfbzU/


Amazon eyes up education, plans a free platform for learning materials

Back in 2013, Amazon acquired (and continued to operate) online math instruction company TenMarks to gain a foothold in the online education space. Now it looks like Amazon is taking those learnings to the next level. The e-commerce giant plans to launch a free platform for schools and other educators to upload, manage and share educational materials. Signs indicate that the platform…


Original URL: http://feedproxy.google.com/~r/Techcrunch/~3/FyglWVl6NPk/

