Until now, Seldon’s main focus has been to provide a production-grade recommendation engine – that is, suggesting articles, videos, and products to people based on behavioural and contextual data.
In version 0.93, a general prediction endpoint is now available to developers. This major new feature allows easy integration of classification and regression machine learning models into the Seldon platform for runtime scoring. In this initial release, we provide the ability to load and score Vowpal Wabbit classification models inside the Seldon server.
You can update models in production with no downtime. We have also created a simple microservice REST API to make it straightforward to integrate existing machine learning toolkits. As an example, we show how to integrate Vowpal Wabbit running a model in daemon mode.
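To make the microservice integration pattern concrete, here is a minimal runnable sketch of a scoring microservice and client. The `/predict` path and the JSON request/response shapes are illustrative assumptions for this example, not Seldon’s actual API, and the "model" is a toy that sums feature values – a real integration would forward the features to a toolkit such as a Vowpal Wabbit daemon.

```python
# Sketch of a prediction microservice contract, run locally.
# ASSUMPTIONS: the /predict path and JSON payload shape are hypothetical,
# chosen only to illustrate the pattern; they are not Seldon's real API.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen


class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        # Toy "model": score is the sum of the feature values.
        score = sum(payload.get("features", {}).values())
        body = json.dumps({"prediction": score}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet


def start_server():
    """Start the scoring service on a free local port, in a daemon thread."""
    server = HTTPServer(("127.0.0.1", 0), PredictHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server


def predict(port, features):
    """Client side: POST a feature map and return the scored prediction."""
    req = Request(
        f"http://127.0.0.1:{port}/predict",
        data=json.dumps({"features": features}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.loads(resp.read())["prediction"]


if __name__ == "__main__":
    server = start_server()
    print(predict(server.server_port, {"f1": 0.5, "f2": 1.5}))  # prints 2.0
    server.shutdown()
```

Because the server holds no client state, a new model process can be brought up on another port and traffic switched over, which is the essence of updating models with no downtime.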
We plan to provide further examples of integrating toolkits via the microservice REST API as well as further extend the Seldon server itself to load and store a range of popular models.
Seldon AWS AMI Private Access Program
Our Seldon AWS AMI private access program continues to grow as more users choose to get up and running quickly using the AMI. To participate, register for access.
Users who have registered and received access to the Seldon AWS AMI will automatically be granted access to new AMI releases as they become available. Check the Seldon user group for details of the latest launch URLs.
Seldon VM 0.93 release
We are pleased to announce a new and exciting version of the Seldon virtual machine. This release makes the Seldon Platform more accessible and easier to get up and running with.
Both of these virtual machines now run Ubuntu, allowing developers to extend and customize the system as necessary.
The size of the Vagrant download has been significantly reduced by pulling additional content from Docker Hub.
Seldon source code is available on GitHub under an Apache 2.0 license – please watch, star and fork the project to stay updated between releases: https://github.com/SeldonIO/seldon-server