Onit acquires legal startup McCarthyFinch to inject AI into legal workflows

Onit, a Houston-based workflow software company with a legal focus, announced this week that it has acquired 2018 TechCrunch Disrupt Battlefield alum McCarthyFinch. Onit intends to use the startup’s AI expertise to beef up its legal workflow software offerings.
The companies did not share the purchase price.
After evaluating a number of companies in the space, Onit settled on McCarthyFinch, which gives it the artificial intelligence component its legal workflow software had been lacking. “We evaluated about a dozen companies in the AI space and dug in deep on six of them. McCarthyFinch stood out from the pack. They had the strongest technology and the strongest team,” Eric M. Elfman, CEO and co-founder of Onit, told TechCrunch.
The company intends to inject that AI into its existing Aptitude workflow platform. “Part of what really got me excited about McCarthyFinch was the very first conversation I had with their CEO, Nick Whitehouse.


Original URL: http://feedproxy.google.com/~r/Techcrunch/~3/n_Sq2t4A5EE/

Original article

Majority of Alexa Now Running on Faster, More Cost-Effective Amazon EC2 Inf1 Instances

Today, we are announcing that the Amazon Alexa team has migrated the vast majority of their GPU-based machine learning inference workloads to Amazon Elastic Compute Cloud (EC2) Inf1 instances, powered by AWS Inferentia. This resulted in 25% lower end-to-end latency, and 30% lower cost compared to GPU-based instances for Alexa’s text-to-speech workloads. The lower latency allows Alexa engineers to innovate with more complex algorithms and to improve the overall Alexa experience for our customers.
AWS built AWS Inferentia chips from the ground up to provide the lowest-cost machine learning (ML) inference in the cloud. They power the Inf1 instances that we launched at AWS re:Invent 2019. Inf1 instances provide up to 30% higher throughput and up to 45% lower cost per inference compared to GPU-based G4 instances, which were, before Inf1, the lowest-cost instances in the cloud for ML inference.
Alexa is Amazon’s cloud-based voice service that powers Amazon Echo devices


Original URL: http://feedproxy.google.com/~r/AmazonWebServicesBlog/~3/yxWTmcGMxo8/

Original article

Amazon Transcribe Now Supports Automatic Language Identification

In 2017, we launched Amazon Transcribe, an automatic speech recognition service that makes it easy for developers to add a speech-to-text capability to their applications. Since then, we added support for more languages, enabling customers globally to transcribe audio recordings in 31 languages, including 6 in real-time.
A popular use case for Amazon Transcribe is transcribing customer calls. This allows companies to analyze the transcribed text using natural language processing techniques to detect sentiment or to identify the most common call causes. If you operate in a country with multiple official languages or across multiple regions, your audio files can contain different languages. In that case, files have to be tagged manually with the appropriate language before transcription can take place. This typically involves setting up teams of multilingual speakers, which creates additional costs and delays in processing audio files.
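With automatic language identification, the manual tagging step goes away: the transcription job is simply started without a language code. The following is a minimal sketch of such a request; the job name, bucket, and candidate languages are illustrative placeholders, and the live boto3 call is shown commented out.

```python
# Sketch of a transcription request that lets Amazon Transcribe detect the
# language itself (IdentifyLanguage) instead of requiring a manual tag.

def build_transcription_request(job_name, media_uri):
    """Build parameters for Transcribe's StartTranscriptionJob API.

    With IdentifyLanguage set, no LanguageCode is passed. LanguageOptions
    optionally narrows the candidate set to speed up identification.
    """
    return {
        "TranscriptionJobName": job_name,
        "Media": {"MediaFileUri": media_uri},
        "MediaFormat": "mp3",
        "IdentifyLanguage": True,
        "LanguageOptions": ["en-US", "es-US", "fr-FR"],  # optional hint
    }

params = build_transcription_request(
    "support-call-0042", "s3://my-bucket/calls/support-call-0042.mp3"
)
# With boto3 installed and AWS credentials configured, the job would be
# started as: boto3.client("transcribe").start_transcription_job(**params)
```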
The media and entertainment industry often uses Amazon Transcribe to convert media content


Original URL: http://feedproxy.google.com/~r/AmazonWebServicesBlog/~3/8C4zZN0Ntvs/

Original article

Amazon ECS Now Supports EC2 Inf1 Instances

As machine learning and deep learning models become more sophisticated, hardware acceleration is increasingly required to deliver fast predictions at high throughput. Today, we’re very happy to announce that AWS customers can now use the Amazon EC2 Inf1 instances on Amazon ECS, for high performance and the lowest prediction cost in the cloud. For a few weeks now, these instances have also been available on Amazon Elastic Kubernetes Service.
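To make this concrete, here is an illustrative fragment of an ECS container definition that exposes an Inferentia device to an inference container running on an Inf1 host. The image name, memory size, and device path are assumptions for the sketch; consult the ECS documentation for the exact schema your cluster requires.

```python
# Hypothetical container definition for an ECS task on an Inf1 instance.
# The Inferentia device is passed through to the container so the Neuron
# runtime inside it can use the accelerator.

container_definition = {
    "name": "inference",
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-model:latest",
    "memory": 8192,
    "linuxParameters": {
        # Expose the first Inferentia device to the container.
        "devices": [
            {
                "hostPath": "/dev/neuron0",
                "containerPath": "/dev/neuron0",
                "permissions": ["read", "write"],
            }
        ]
    },
}
```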
A primer on EC2 Inf1 instances
Inf1 instances were launched at AWS re:Invent 2019. They are powered by AWS Inferentia, a custom chip built from the ground up by AWS to accelerate machine learning inference workloads.
Inf1 instances are available in multiple sizes, with 1, 4, or 16 AWS Inferentia chips, with up to 100 Gbps network bandwidth and up to 19 Gbps EBS bandwidth. An AWS Inferentia chip contains four NeuronCores. Each one implements a high-performance systolic array matrix multiply engine,


Original URL: http://feedproxy.google.com/~r/AmazonWebServicesBlog/~3/tabsX62Nap8/

Original article

Amazon launches new Alexa developer tools

Amazon today announced a slew of new features for developers who want to write Alexa skills. In total, the team released 31 new features at its Alexa Live event. Unsurprisingly, some of these are relatively minor but a few significantly change the Alexa experience for the over 700,000 developers who have built skills for the platform so far.
“This year, given all our momentum, we really wanted to pay attention to what developers truly required to take us to the next level of what engaging [with Alexa] really means,” Nedim Fresko, the company’s VP of Alexa Devices & Developer Technologies, told me.
Maybe it’s no surprise then that one of the highlights of this release is the beta launch of Alexa Conversations, which the company first demonstrated at its re:Mars summit last year. The overall idea here is, as the name implies, to make it easier for users to have a natural


Original URL: http://feedproxy.google.com/~r/Techcrunch/~3/374y-Nm8960/

Original article

Find Your Most Expensive Lines of Code – Amazon CodeGuru Is Now Generally Available

Bringing new applications into production, maintaining their code base as they grow and evolve, and responding to operational issues at the same time is a challenging task. For this reason, you can find many ideas on how to structure your teams, which methodologies to apply, and how to safely automate your software delivery pipeline.
At re:Invent last year, we introduced in preview Amazon CodeGuru, a developer tool powered by machine learning that helps you improve your applications and troubleshoot issues with automated code reviews and performance recommendations based on runtime data. During the last few months, many improvements have been launched, including a more cost-effective pricing model, support for Bitbucket repositories, and the ability to start the profiling agent using a command line switch, so that you no longer need to modify the code of your application, or add dependencies, to run the agent.
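For the Java profiling agent, that command line switch looks roughly like the following; the agent jar path, version, and profiling group name are placeholders for illustration.

```shell
# Start the CodeGuru Profiler agent without modifying application code:
# the -javaagent switch attaches it at JVM startup.
java -javaagent:codeguru-profiler-java-agent-standalone.jar="profilingGroupName:MyProfilingGroup" \
     -jar my-application.jar
```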

You can use CodeGuru in two ways:
CodeGuru Reviewer uses


Original URL: http://feedproxy.google.com/~r/AmazonWebServicesBlog/~3/Y06GFelvVHo/

Original article

Reinventing Enterprise Search – Amazon Kendra is Now Generally Available

At the end of 2019, we launched a preview version of Amazon Kendra, a highly accurate and easy to use enterprise search service powered by machine learning. Today, I’m very happy to announce that Amazon Kendra is now generally available.
For all its amazing achievements over the past decades, Information Technology has yet to solve a problem that all of us face every day: quickly and easily finding the information we need. Whether we’re looking for the latest version of the company travel policy, or asking a more technical question like “what’s the tensile strength of epoxy adhesives?”, we never seem to be able to get the correct answer right away. Sometimes, we never get it at all!
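Kendra is queried with a natural-language question rather than a list of keywords. A minimal sketch of such a request follows; the index ID and question are placeholders, and the live boto3 call is shown commented out.

```python
# Sketch of a natural-language query against an Amazon Kendra index.

def build_kendra_query(index_id, question):
    """Build parameters for Kendra's Query API, which accepts a
    natural-language question directly."""
    return {
        "IndexId": index_id,
        "QueryText": question,
    }

query = build_kendra_query(
    "my-index-id", "what's the tensile strength of epoxy adhesives?"
)
# With boto3 installed and AWS credentials configured:
# response = boto3.client("kendra").query(**query)
# Matching documents and answers would then be in response["ResultItems"].
```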
Not only are these issues frustrating for users, they’re also responsible for major productivity losses. According to an IDC study, the cost of inefficient search is $5,700 per employee per year: for a


Original URL: http://feedproxy.google.com/~r/AmazonWebServicesBlog/~3/e1d-SeJXSqQ/

Original article

Announcing TorchServe, An Open Source Model Server for PyTorch

PyTorch is one of the most popular open source libraries for deep learning. Developers and researchers particularly enjoy the flexibility it gives them in building and training models. Yet, this is only half the story, and deploying and managing models in production is often the most difficult part of the machine learning process: building bespoke prediction APIs, scaling them, securing them, etc.
One way to simplify the model deployment process is to use a model server, i.e. an off-the-shelf web application specially designed to serve machine learning predictions in production. Model servers make it easy to load one or several models, automatically creating a prediction API backed by a scalable web server. They’re also able to run preprocessing and postprocessing code on prediction requests. Last but not least, model servers also provide production-critical features like logging, monitoring, and security. Popular model servers include TensorFlow Serving and the Multi Model Server.
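In TorchServe's case, that workflow boils down to three steps, sketched below with illustrative model and file names; the exact archiver flags depend on how the model was saved, so treat this as a shape rather than a recipe.

```shell
# 1. Package a trained model and its handler into a .mar archive:
torch-model-archiver --model-name densenet161 --version 1.0 \
    --serialized-file densenet161.pth --handler image_classifier

# 2. Start the server, pointing it at the model store:
torchserve --start --model-store model_store --models densenet161.mar

# 3. Request a prediction over the automatically created REST API:
curl http://localhost:8080/predictions/densenet161 -T kitten.jpg
```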


Original URL: http://feedproxy.google.com/~r/AmazonWebServicesBlog/~3/fHUbrJNG-nA/

Original article

Integrate your COVID-19 crisis communication chatbot with Slack

In times of crisis, chatbots can help people quickly find the answers that they need to critical questions. In the case of a pandemic like COVID-19, people might be searching for information about the disease’s progression or where to get tested. In this tutorial, I show you how to integrate a crisis communication chatbot with Slack to make it faster for users to get answers to their COVID-related questions.
This tutorial gives you step-by-step instructions for how you can get your COVID Crisis Communication Assistant up and running with Slack.

Learning objectives
In this tutorial, you will:
Learn how to build a Slack application
Integrate your Slack app with Watson Assistant
Build a Call for Code COVID Crisis Communications Slack-enabled Chatbot solution
Prerequisites
An IBM Cloud account
Create a Watson Assistant COVID-19 Crisis Communication Chatbot
Set up a Slack workspace with administrative rights
Estimated time
It should take you approximately 15 minutes to complete this tutorial.
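Whatever backend you wire the Slack app to, Slack's Events API first sends a one-time "url_verification" payload whose challenge token must be echoed back before ordinary message events start flowing. The sketch below shows that routing logic in isolation; the Watson Assistant call is stubbed out as a placeholder function.

```python
# Minimal sketch of the event routing a Slack Events API integration needs.
# The Watson Assistant call is stubbed; in the real integration it would
# send the user's text to the assistant's message API.

def answer_from_assistant(text):
    # Placeholder for a call to Watson Assistant.
    return f"(assistant reply to: {text})"

def handle_slack_event(payload):
    """Return the response body for an incoming Slack Events API payload."""
    if payload.get("type") == "url_verification":
        # Slack's one-time handshake: echo the challenge token back.
        return {"challenge": payload["challenge"]}
    event = payload.get("event", {})
    if event.get("type") == "message" and not event.get("bot_id"):
        # Ignore bot messages to avoid reply loops; answer user messages.
        return {"text": answer_from_assistant(event.get("text", ""))}
    return {}

handshake = handle_slack_event(
    {"type": "url_verification", "challenge": "abc123"}
)
reply = handle_slack_event(
    {"event": {"type": "message", "text": "Where can I get tested?"}}
)
```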


Original URL: https://developer.ibm.com/tutorials/create-crisis-communication-chatbot-integrate-slack/

Original article
