
Tensor Comprehensions

Today, Facebook AI Research (FAIR) is announcing the release of Tensor Comprehensions, a C++ library and mathematical language that helps bridge the gap between researchers, who communicate in terms of mathematical operations, and engineers focused on the practical needs of running large-scale models on various hardware backends. Its main differentiating feature is a unique take on Just-In-Time compilation: it produces the high-performance code that the machine learning community needs, automatically and on demand.
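As a rough illustration (not taken from the announcement itself), the sketch below shows how a layer written in the Tensor Comprehensions mathematical language might be compiled and run through the project's early PyTorch/Python bindings. The `matmul` definition and the `tc.define(...)` call follow the project's documented examples, but the exact binding API has changed across releases, so treat the signatures here as assumptions.

```python
# Minimal sketch, assuming the early tensor_comprehensions Python binding API;
# exact signatures may differ across releases.
import torch
import tensor_comprehensions as tc

# A layer expressed in the Tensor Comprehensions language: index variables that
# appear only on the right-hand side (here `k`) are reduced over, and `+=!`
# means "initialize the output, then sum-reduce".
LANG = """
def matmul(float(M, K) A, float(K, N) B) -> (C) {
    C(m, n) +=! A(m, k) * B(k, n)
}
"""

# JIT-compile the definition on demand and run it on CUDA tensors.
matmul = tc.define(LANG, name="matmul")  # assumed API shape
A = torch.randn(128, 64).cuda()
B = torch.randn(64, 256).cuda()
C = matmul(A, B)                          # compiled and executed on demand
print(C.shape)                            # expected: torch.Size([128, 256])
```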
Order-of-magnitude productivity gains
The typical workflow for creating new high-performance machine learning (ML) layers can span days or weeks of engineering work through a two-phase process:
A researcher writes a new layer at a numpy-level abstraction, chaining existing operations in a deep learning library like PyTorch, and tests it in small-scale experiments. The performance of the code implementing the validated idea then needs to be accelerated by an order of magnitude to run large-scale experiments.