Build Your Own Professional-Grade Audio Amp on the Sort-Of Cheap

Photo: Randi Klett
Years ago I decided to see how little I needed to spend to build a high-end, audiophile-quality class-D amplifier. The answer, then, was US $523.43. I built a worthy little amp, and the article I wrote about it for IEEE Spectrum still attracts page views and even sporadic emails from people asking where they can get the parts.
Sorry, folks: the main components are long gone. So I’ve been steering people to excellent class-D amplifier kits from Class D Audio, DIY Class D, and Ghent Audio instead. But a couple of months ago I got the itch to see how much better I could do now, almost a decade later, with the same challenge. Part of my motivation was the annual Best Stereo Amps lists from gadget-review website The Master Switch. The lists are dominated by amps costing more than $1,000 (nine of them cost…


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/2AqSCVfj960/build-your-own-professionalgrade-audio-amp-on-the-sort-of-cheap

Original article

Explore, Transcribe and Tag at Crowd.loc.gov!

This is a guest post by Lauren Algee, senior innovation specialist with the Library’s Digital Innovation Lab.
What yet-unwritten stories lie within the pages of Clara Barton’s diaries, the writings of civil rights pioneer Mary Church Terrell or letters written by constituents, friends and colleagues to Abraham Lincoln? With the launch of crowd.loc.gov, the Library of Congress is harnessing the power of the public to make these and many other collection items accessible to everyone.
Crowd.loc.gov invites the public to volunteer to transcribe (type out) and tag with keywords digitized images of text materials from the Library’s collections. Volunteers will journey through history first-hand and help the Library while gaining new skills, like learning how to analyze primary sources or read cursive.
Finalized transcripts will be made available on the Library’s website, improving access to handwritten and typed documents that computers cannot accurately transcribe without human intervention. The enhanced access will occur…


Original URL: https://blogs.loc.gov/loc/2018/10/explore-transcribe-and-tag-at-crowd-loc-gov/

Original article

Fedora 29 Released

ekimd writes: Fedora 29 was released today. Among the new features is the ability to install multiple versions of packages, such as Node.js, in parallel. Fedora 29 also supports ZRAM (formerly called compcache) for ARMv7 and v8. In addition to making more efficient use of RAM, ZRAM increases the lifespan of microSD cards on the Raspberry Pi and other SBCs. “Additionally, UEFI for ARMv7 is now supported in Fedora 29, which also benefits Raspberry Pi users,” reports TechRepublic. “Fedora already supported UEFI on 64-bit ARM devices.”

Read more of this story at Slashdot.


Original URL: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/CTK4X4LGReA/fedora-29-released

Original article

Red Hat Enterprise Linux 7.6 Released

Etcetera writes: Fresh on the heels of the IBM purchase announcement, Red Hat released RHEL 7.6 today. The business press release is here and the full release notes are here. It’s been a busy week for Red Hat, as Fedora 29 was also released earlier this morning. No doubt CentOS and the various other rebuilds will begin their build cycles shortly. The release offers improved security, such as support for the Trusted Platform Module (TPM) 2.0 specification for security authentication. It also provides enhanced support for the open-source nftables firewall technology.

“TPM 2.0 support has been added incrementally over recent releases of Red Hat Enterprise Linux 7, as the technology has matured,” Steve Almy, principal product manager for Red Hat Enterprise Linux at Red Hat, told eWEEK. “The TPM 2.0 integration in 7.6 provides an additional level of security by tying the hands-off decryption to server hardware, in addition to the network-bound disk encryption…


Original URL: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/1zoREHDSA_4/red-hat-enterprise-linux-76-released

Original article

A Tour of the Top Algorithms for Machine Learning Newbies

In machine learning, there’s something called the “No Free Lunch” theorem. In a nutshell, it states that no one algorithm works best for every problem, and it’s especially relevant for supervised learning (i.e., predictive modeling). For example, you can’t say that neural networks are always better than decision trees, or vice versa. There are many factors at play, such as the size and structure of your dataset. As a result, you should try many different algorithms for your problem, while using a hold-out “test set” of data to evaluate performance and select the winner. Of course, the algorithms you try must be appropriate for your problem, which is where picking the right machine learning task comes in. As an analogy, if you need to clean your house, you might use a vacuum, a broom, or a mop, but you wouldn’t bust out a shovel and start digging.

The Big Principle

However, there is a common principle…
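
As a concrete sketch of that advice, here is a minimal Python example using scikit-learn. The dataset, the candidate algorithms, and the 25 percent hold-out split are arbitrary choices for illustration, not anything the article prescribes:

# A minimal sketch of the "no free lunch" workflow: train several
# candidate algorithms and pick the winner on a hold-out test set.
# Dataset, models, and split ratio are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)

# Hold out 25% of the data purely for final evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

candidates = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "logistic regression": LogisticRegression(max_iter=5000),
    "neural network": MLPClassifier(max_iter=2000, random_state=0),
}

# No algorithm wins on every dataset, so measure each on the hold-out set.
scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    scores[name] = model.score(X_test, y_test)
    print(f"{name}: {scores[name]:.3f}")

print("winner:", max(scores, key=scores.get))

The point is not which model happens to win here, but that the winner is chosen on data the models never saw during training.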


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/FVcWH2jLBtE/a-tour-of-the-top-10-algorithms-for-machine-learning-newbies-dde4edffae11

Original article

Why Jupyter is data scientists’ computational notebook of choice

Perched atop the Cerro Pachón ridge in the Chilean Andes is a building site that will eventually become the Large Synoptic Survey Telescope (LSST). When it comes online in 2022, the telescope will generate terabytes of data each night as it surveys the southern skies automatically. And to crunch those data, astronomers will use a familiar and increasingly popular tool: the Jupyter notebook. Jupyter is a free, open-source, interactive web tool known as a computational notebook, which researchers can use to combine software code, computational output, explanatory text and multimedia resources in a single document. Computational notebooks have been around for decades, but Jupyter in particular has exploded in popularity over the past couple of years. This rapid uptake has been aided by an enthusiastic community of user–developers and a redesigned architecture that allows the notebook to speak dozens of programming languages — a fact reflected in its name, which…
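
As a rough illustration of what combining code, output and text in one document means, here is a hypothetical two-cell notebook session flattened into an ordinary Python script; the data and the plot are invented for the example, and in a real notebook each cell’s output would render inline beneath its cell, alongside explanatory Markdown text:

# A hypothetical two-cell Jupyter session, flattened into one script.
# In a real notebook, each cell's output (numbers, tables, plots)
# renders inline beneath the cell, next to explanatory Markdown text.
import numpy as np
import matplotlib.pyplot as plt

# In [1]: load some data and compute a summary statistic.
data = np.random.default_rng(0).normal(size=1000)
print(data.mean())  # a notebook would show this as Out[1], inline

# In [2]: plot the same data; the figure embeds in the document.
plt.hist(data, bins=30)
plt.title("Sample distribution")
plt.show()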


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/LaXXzw6qa2Q/d41586-018-07196-1

Original article

Introducing AdaNet: Fast and Flexible AutoML with Learning Guarantees

Posted by Charles Weill, Software Engineer, Google AI, NYC

Ensemble learning, the art of combining different machine learning (ML) model predictions, is widely used with neural networks to achieve state-of-the-art performance, benefitting from a rich history and theoretical guarantees that enabled success at challenges such as the Netflix Prize and various Kaggle competitions. However, ensembles aren’t used much in practice due to long training times, and selecting the ML model candidates requires its own domain expertise. But as computational power and specialized deep-learning hardware such as TPUs become more readily available, machine learning models will grow larger and ensembles will become more prominent. Now, imagine a tool that automatically searches over neural architectures, and learns to combine the best ones into a high-quality model. Today, we’re excited to share AdaNet, a lightweight TensorFlow-based framework for automatically learning high-quality models with minimal expert intervention. AdaNet builds on our recent reinforcement learning…
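
The excerpt does not show AdaNet’s own API, so as a stand-in, here is a minimal Python sketch of the core idea it automates: ensembling, i.e., combining the predictions of different models. The dataset and the two candidate models are arbitrary illustrative choices, and this is explicitly not AdaNet code:

# A minimal sketch of ensemble learning, the idea AdaNet automates:
# combining predictions from different models. This is NOT the AdaNet
# API; the dataset and candidate models are illustrative assumptions.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
net = MLPClassifier(max_iter=2000, random_state=0).fit(X_train, y_train)

# Ensemble by averaging the two models' predicted class probabilities.
avg_proba = (tree.predict_proba(X_test) + net.predict_proba(X_test)) / 2
ensemble_acc = (avg_proba.argmax(axis=1) == y_test).mean()

print("tree alone:", tree.score(X_test, y_test))
print("net alone: ", net.score(X_test, y_test))
print("ensemble:  ", ensemble_acc)

Averaging predicted probabilities is the simplest possible combination rule; AdaNet’s contribution, per the post, is learning automatically which candidate networks to combine.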


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/SGdlpChU7xA/introducing-adanet-fast-and-flexible.html

Original article

The Linux Kernel Is Now VLA (Variable-Length Array) Free

With the in-development Linux 4.20 kernel, the code base is now effectively VLA-free. Variable-length arrays (VLAs) can be convenient and are part of the C99 standard, but they can have unintended consequences.
VLAs allow an array’s length to be determined at run time rather than at compile time. The Linux kernel has long relied upon VLAs in different parts of the kernel (including within structures), but for months now (and years, if counting the kernel Clang’ing efforts) there has been a push to remove all use of variable-length arrays from the kernel. The problems with them are:
- Using variable-length arrays can add some minor run-time overhead to the code, because the size of the array must be determined at run time.
- VLAs within structures are not supported by the LLVM Clang compiler, which supports only C99-style VLAs; they are thus an issue for those wanting to build the kernel with a compiler other than GCC.
- Arguably most…


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/58c6B0H3q9c/scan.php

Original article
