Sound United Enters Agreement to Acquire Onkyo Home Audio

In a major move that’s sure to reverberate through the entire AV industry, Sound United has entered a “term sheet” agreement to acquire Onkyo Home Audio. If the deal goes through, it will bring the Onkyo, Integra, Pioneer, and Pioneer Elite brands under the same roof as Denon, Polk Audio, Marantz, Definitive Technology, HEOS, Classé, and Boston Acoustics.
This preliminary agreement sets the stage for negotiations to close the deal. If it closes, current Sound United CEO Kevin Duffy will be the CEO of the combined operation.
“We are thrilled by the opportunity to add the venerable Onkyo and Pioneer brands to our portfolio. Sound United is one of the leading dedicated providers of premium audio/video products, and we believe the combined businesses will bring unrivaled innovation and sound performance to our consumers and channel partners,” said Kevin Duffy. “Upon completion of the transaction, we will work tirelessly with the consumer audio division …”


Original URL: https://www.avsforum.com/sound-united-enters-agreement-acquire-onkyo-home-audio/

Original article

New – Amazon S3 Batch Operations

AWS customers routinely store millions or billions of objects in individual Amazon Simple Storage Service (S3) buckets, taking advantage of S3’s scale, durability, low cost, security, and storage options. These customers store images, videos, log files, backups, and other mission-critical data, and use S3 as a crucial part of their data storage strategy.
Batch Operations
Today, I would like to tell you about Amazon S3 Batch Operations. You can use this new feature to easily process hundreds, millions, or billions of S3 objects in a simple and straightforward fashion. You can copy objects to another bucket, set tags or access control lists (ACLs), initiate a restore from Glacier, or invoke an AWS Lambda function on each one.
This feature builds on S3’s existing support for inventory reports (read my S3 Storage Management Update post to learn more), and can use the reports or CSV files to drive your batch operations.
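To make the shape of a batch job concrete, here is a minimal sketch of the request a copy job takes, written as a pure Python builder in the style of the boto3 S3Control `create_job` call. The account ID, role ARN, bucket ARNs, and manifest details below are all placeholders, not values from the post:

```python
def build_copy_job_request(account_id, role_arn, manifest_arn, manifest_etag,
                           target_bucket_arn, report_bucket_arn):
    """Assemble request parameters for an S3 Batch Operations copy job.

    All ARNs and IDs here are placeholders; substitute your own resources.
    """
    return {
        "AccountId": account_id,
        "ConfirmationRequired": True,
        "RoleArn": role_arn,          # role S3 assumes to act on your objects
        "Priority": 10,
        # The operation to run against every object listed in the manifest.
        "Operation": {
            "S3PutObjectCopy": {"TargetResource": target_bucket_arn}
        },
        # The manifest is an S3 inventory report or a CSV of bucket,key rows.
        "Manifest": {
            "Spec": {
                "Format": "S3BatchOperations_CSV_20180820",
                "Fields": ["Bucket", "Key"],
            },
            "Location": {"ObjectArn": manifest_arn, "ETag": manifest_etag},
        },
        # Write a completion report covering only the tasks that failed.
        "Report": {
            "Bucket": report_bucket_arn,
            "Format": "Report_CSV_20180820",
            "Enabled": True,
            "Scope": "FailedTasksOnly",
            "Prefix": "batch-reports",
        },
    }
```

Passing this dict to `boto3.client("s3control").create_job(**request)` would submit the job; actually running it requires valid credentials, real ARNs, and a role that S3 Batch Operations is allowed to assume.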


Original URL: http://feedproxy.google.com/~r/AmazonWebServicesBlog/~3/UKVPV3jqKZU/

Original article

In the Works – EC2 Instances (G4) with NVIDIA T4 GPUs

I’ve written about the power and value of GPUs in the past, and I have written posts to launch many generations of GPU-equipped EC2 instances including the CG1, G2, G3, P2, P3, and P3dn instance types.
Today I would like to give you a sneak peek at our newest GPU-equipped instance, the G4. Designed for machine learning training & inferencing, video transcoding, and other demanding applications, G4 instances will be available in multiple sizes and also in bare metal form. We are still fine-tuning the specs, but you can look forward to:
AWS-custom Intel CPUs (4 to 96 vCPUs)
1 to 8 NVIDIA T4 Tensor Core GPUs
Up to 384 GiB of memory
Up to 1.8 TB of fast, local NVMe storage
Up to 100 Gbps networking
The brand-new NVIDIA T4 GPUs feature 320 Turing Tensor cores, 2,560 CUDA cores, and 16 GB of memory. …


Original URL: http://feedproxy.google.com/~r/AmazonWebServicesBlog/~3/mFACd0DizMo/

Original article

Now Available – Five New Amazon EC2 Bare Metal Instances: M5, M5d, R5, R5d, and z1d

Today we are launching the five new EC2 bare metal instances that I promised you a few months ago. Your operating system runs on the underlying hardware and has direct access to the processor and other hardware. The instances are powered by AWS-custom Intel® Xeon® Scalable Processor (Skylake) processors that deliver sustained all-core Turbo performance.
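Launching a bare metal instance works like launching any other size: only the instance type (the `.metal` suffix) differs. As a minimal sketch, here is a pure Python builder for the parameters an EC2 `RunInstances` call would take; the AMI ID, key pair, and subnet ID are placeholders, not values from the post:

```python
def build_run_instances_request(ami_id, key_name, subnet_id,
                                instance_type="m5.metal"):
    """Request parameters for launching a bare metal EC2 instance.

    ami_id, key_name, and subnet_id are placeholders for your own resources.
    Bare metal types use the same RunInstances API as virtualized sizes;
    only the instance type string changes.
    """
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,  # e.g. "m5.metal", "r5d.metal", "z1d.metal"
        "MinCount": 1,
        "MaxCount": 1,
        "KeyName": key_name,
        "SubnetId": subnet_id,
    }
```

Handing this dict to `boto3.client("ec2").run_instances(**request)` would launch the instance, given valid credentials and real resource IDs.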
Here are the specs:

| Instance Name | Sustained All-Core Turbo | Logical Processors | Memory | Local Storage | EBS-Optimized Bandwidth | Network Bandwidth |
| --- | --- | --- | --- | --- | --- | --- |
| m5.metal | Up to 3.1 GHz | 96 | 384 GiB | – | 14 Gbps | 25 Gbps |
| m5d.metal | Up to 3.1 GHz | 96 | 384 GiB | 4 x 900 GB NVMe SSD | 14 Gbps | 25 Gbps |
| r5.metal | Up to 3.1 GHz | 96 | 768 GiB | – | 14 Gbps | 25 Gbps |
| r5d.metal | Up to 3.1 GHz | … | … | … | … | … |


Original URL: http://feedproxy.google.com/~r/AmazonWebServicesBlog/~3/X9WlJUUFb7o/

Original article

New for AWS Lambda – Use Any Programming Language and Share Common Components

I remember the excitement when AWS Lambda was announced in 2014! Four years on, customers are using Lambda functions for many different use cases. For example, iRobot is using AWS Lambda to provide compute services for their Roomba robotic vacuum cleaners, Fannie Mae to run Monte Carlo simulations for millions of mortgages, and Bustle to serve billions of requests for their digital content.
Today, we are introducing two new features that are going to make serverless development even easier:
Lambda Layers, a way to centrally manage code and data that is shared across multiple functions.
Lambda Runtime API, a simple interface to use any programming language, or a specific language version, for developing your functions.
These two features can be used together: runtimes can be shared as layers so that developers can pick them up and use their favorite programming language when authoring Lambda functions.
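The Runtime API is a simple HTTP interface: a custom runtime long-polls for the next invocation, runs the handler, and posts the result back. A real custom runtime ships as a `bootstrap` executable (packaged with the function or in a layer), but the loop can be sketched in Python; the handler below and the `localhost` host used in testing are illustrative, while the endpoint paths and the `AWS_LAMBDA_RUNTIME_API` variable come from the documented interface:

```python
import json
import os
import urllib.request

RUNTIME_VERSION = "2018-06-01"


def next_invocation_url(api_host):
    """URL a custom runtime polls to receive the next event."""
    return f"http://{api_host}/{RUNTIME_VERSION}/runtime/invocation/next"


def response_url(api_host, request_id):
    """URL a custom runtime posts its result to for a given invocation."""
    return f"http://{api_host}/{RUNTIME_VERSION}/runtime/invocation/{request_id}/response"


def run_loop(handler, api_host=None):
    """Minimal custom-runtime event loop (only meaningful inside Lambda).

    Lambda sets AWS_LAMBDA_RUNTIME_API to the host:port of the Runtime API.
    """
    api_host = api_host or os.environ["AWS_LAMBDA_RUNTIME_API"]
    while True:
        # Long-poll for the next invocation; this blocks until an event arrives.
        with urllib.request.urlopen(next_invocation_url(api_host)) as resp:
            request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
            event = json.loads(resp.read())
        result = json.dumps(handler(event)).encode()
        # Report this invocation's result back to the service.
        urllib.request.urlopen(urllib.request.Request(
            response_url(api_host, request_id), data=result, method="POST"))
```

Because the loop is plain HTTP, any language that can make HTTP requests can implement it, which is exactly what makes "use any programming language" possible.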
Let’s see in more detail how they work.
Lambda Layers
When building serverless applications, it …


Original URL: http://feedproxy.google.com/~r/AmazonWebServicesBlog/~3/ZLIGrPGMDnI/

Original article

New – AWS Transfer for SFTP – Fully Managed SFTP Service for Amazon S3

Many organizations use SFTP (Secure File Transfer Protocol) as part of long-established data processing and partner integration workflows. While it would be easy to dismiss these systems as “legacy,” the reality is that they serve a useful purpose and will continue to do so for quite some time. We want to help our customers to move these workflows to the cloud in a smooth, non-disruptive way.
AWS Transfer for SFTP
Today we are launching AWS Transfer for SFTP, a fully-managed, highly-available SFTP service. You simply create a server, set up user accounts, and associate the server with one or more Amazon Simple Storage Service (S3) buckets. You have fine-grained control over user identity, permissions, and keys. You can create users within Transfer for SFTP, or you can make use of an existing identity provider. You can also use IAM policies to control the level of access granted to each user.
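For the service-managed identity option, adding a user boils down to one `CreateUser` call per user after the server exists. Here is a pure Python sketch of those parameters in the style of the boto3 Transfer client; the server ID, role ARN, bucket name, and key material are placeholders, not values from the post:

```python
def build_sftp_user_request(server_id, user_name, role_arn, bucket, public_key):
    """Parameters for a Transfer for SFTP CreateUser call (service-managed users).

    server_id, role_arn, and bucket are placeholders for your own resources.
    """
    return {
        "ServerId": server_id,
        "UserName": user_name,
        "Role": role_arn,                           # IAM role granting S3 access
        "HomeDirectory": f"/{bucket}/{user_name}",  # where the user lands on login
        "SshPublicKeyBody": public_key,             # key-based authentication
    }
```

In practice you would first create the server (e.g. `boto3.client("transfer").create_server(IdentityProviderType="SERVICE_MANAGED")`), then pass a dict like this to `create_user(**request)` for each account.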


Original URL: http://feedproxy.google.com/~r/AmazonWebServicesBlog/~3/3-x-XlQjosM/

Original article

New AWS Resource Access Manager – Cross-Account Resource Sharing

As I have discussed in the past, our customers use multiple AWS accounts for many different reasons. Some of them use accounts to create administrative and billing boundaries; others use them to control the blast radius around any mistakes that they make.
Even though all of this isolation is a net positive for our customers, it turns out that certain types of sharing can be useful and beneficial. For example, many customers want to create resources centrally and share them across accounts in order to reduce management overhead and operational costs.
AWS Resource Access Manager
The new AWS Resource Access Manager (RAM) facilitates resource sharing between AWS accounts. It makes it easy to share resources within your AWS Organization and can be used from the Console, CLI, or through a set of APIs. We are launching with support for Route 53 Resolver Rules (announced yesterday in Shaun’s excellent post) and …
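A share is one API object tying resources to principals. As a minimal sketch, here is a pure Python builder for the parameters a RAM `CreateResourceShare` call takes (the RAM API uses lowerCamelCase parameter names); the share name, resolver-rule ARN, and account ID below are placeholders:

```python
def build_resource_share_request(name, resource_arns, principals):
    """Parameters for a RAM CreateResourceShare call.

    resource_arns and principals are placeholders; principals may be
    account IDs or an organization ARN.
    """
    return {
        "name": name,
        "resourceArns": resource_arns,     # e.g. Route 53 Resolver rule ARNs
        "principals": principals,          # who to share with
        "allowExternalPrincipals": False,  # keep sharing inside your Organization
    }
```

Passing this dict to `boto3.client("ram").create_resource_share(**request)` would create the share; the receiving accounts then see the shared resources without owning them.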


Original URL: http://feedproxy.google.com/~r/AmazonWebServicesBlog/~3/MenLlrtoO18/

Original article

New – Train Custom Document Classifiers with Amazon Comprehend

Amazon Comprehend gives you the power to process natural-language text at scale (read my introductory post, Amazon Comprehend – Continuously Trained Natural Language Processing, to learn more). After launching in late 2017 with support for English and Spanish, we have added customer-driven features including Asynchronous Batch Operations, Syntax Analysis, support for additional languages (French, German, Italian, and Portuguese), and availability in more regions.
Using automatic machine learning (AutoML), Comprehend lets you create custom Natural Language Processing (NLP) models using data that you already have, without the need to learn the ins and outs of ML. Based on your data set and use case, it automatically selects the right algorithm, tunes its parameters, and builds and tests the resulting model.
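Kicking off that AutoML training reduces to a single `CreateDocumentClassifier` call pointed at your tagged data in S3. Here is a pure Python sketch of the parameters in the style of the boto3 Comprehend client; the classifier name, role ARN, and S3 URI are placeholders, not values from the post:

```python
def build_classifier_request(name, role_arn, training_s3_uri):
    """Parameters for a Comprehend CreateDocumentClassifier call.

    role_arn and training_s3_uri are placeholders; the training file is a
    CSV with one "label,document text" row per tagged document.
    """
    return {
        "DocumentClassifierName": name,
        "DataAccessRoleArn": role_arn,  # role Comprehend assumes to read the data
        "InputDataConfig": {"S3Uri": training_s3_uri},
        "LanguageCode": "en",
    }
```

Passing this dict to `boto3.client("comprehend").create_document_classifier(**request)` would start training; Comprehend handles algorithm selection, tuning, and evaluation from there.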
If you already have a collection of tagged documents—support tickets, call center conversations (via Amazon Transcribe), forum posts, and so forth—you can use them as a starting point. In this context, tagged simply …


Original URL: http://feedproxy.google.com/~r/AmazonWebServicesBlog/~3/gNJLej3dlEA/

Original article

New – Redis 5.0 Compatibility for Amazon ElastiCache

Earlier this year we announced Redis 4.0 compatibility for Amazon ElastiCache. In that post, Randall explained how ElastiCache for Redis clusters can scale to terabytes of memory and millions of reads and writes per second! Other recent improvements to Amazon ElastiCache for Redis include:
Read Replica Scaling – Support for adding read replica nodes to, or removing them from, a Redis Cluster, along with a reduction of up to 40% in cluster creation time.
PCI DSS Compliance – Certification as Payment Card Industry Data Security Standard (PCI DSS) compliant. This allows you to use ElastiCache for Redis (engine versions 4.0.10 and higher) to build low-latency, high-throughput applications that process sensitive payment card data.
FedRAMP Authorized and Available in AWS GovCloud (US) – United States government customers and their partners can use ElastiCache for Redis to process and store their FedRAMP systems and data for mission-critical, high-impact workloads in the AWS GovCloud (US)


Original URL: http://feedproxy.google.com/~r/AmazonWebServicesBlog/~3/hRs7GSEzw_4/

Original article
