Fedora 29 Released

ekimd writes: Fedora 29 was released today. Among the new features is the ability to install multiple versions of packages such as Node.js in parallel. Fedora 29 also supports ZRAM (formerly called compcache) for ARMv7 and v8. In addition to making more efficient use of RAM, ZRAM also increases the lifespan of microSD cards on the Raspberry Pi as well as other SBCs. “Additionally, UEFI for ARMv7 is now supported in Fedora 29, which also benefits Raspberry Pi users,” reports TechRepublic. “Fedora already supported UEFI on 64-bit ARM devices.”



Original URL: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/CTK4X4LGReA/fedora-29-released

Original article

Red Hat Enterprise Linux 7.6 Released

Etcetera writes: Fresh on the heels of the IBM purchase announcement, Red Hat released RHEL 7.6 today. The business press release and the full release notes are available from Red Hat. It’s been a busy week for Red Hat, as Fedora 29 was also released earlier this morning. No doubt CentOS and the various other rebuilds will begin their build cycles shortly. The release offers improved security, such as support for the Trusted Platform Module (TPM) 2.0 specification for security authentication. It also provides enhanced support for the open-source nftables firewall technology.

“TPM 2.0 support has been added incrementally over recent releases of Red Hat Enterprise Linux 7, as the technology has matured,” Steve Almy, principal product manager, Red Hat Enterprise Linux at Red Hat, told eWEEK. “The TPM 2.0 integration in 7.6 provides an additional level of security by tying the hands-off decryption to server hardware in addition to the network bound disk encryption


Original URL: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/1zoREHDSA_4/red-hat-enterprise-linux-76-released

Original article

A Tour of the Top Algorithms for Machine Learning Newbies

In machine learning, there’s something called the “No Free Lunch” theorem. In a nutshell, it states that no one algorithm works best for every problem, and it’s especially relevant for supervised learning (i.e. predictive modeling). For example, you can’t say that neural networks are always better than decision trees or vice versa. There are many factors at play, such as the size and structure of your dataset. As a result, you should try many different algorithms for your problem, while using a hold-out “test set” of data to evaluate performance and select the winner. Of course, the algorithms you try must be appropriate for your problem, which is where picking the right machine learning task comes in. As an analogy, if you need to clean your house, you might use a vacuum, a broom, or a mop, but you wouldn’t bust out a shovel and start digging.

The Big Principle

However, there is a common principle
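
To make the article’s advice concrete (train several candidate algorithms and compare them on a held-out test set), here is a minimal sketch using scikit-learn; the synthetic dataset, the particular models, and the 25% split are illustrative assumptions rather than anything from the article.

# Minimal sketch: compare candidate algorithms on a hold-out test set.
# Assumes scikit-learn is installed; data and model choices are illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification data standing in for a real problem.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold out a test set that no candidate sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

candidates = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "neural_network": MLPClassifier(max_iter=1000, random_state=0),
}

# Fit each candidate on the training data, score it on the held-out data,
# and pick whichever generalizes best.
scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    scores[name] = accuracy_score(y_test, model.predict(X_test))

for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.3f}")

No single entry is expected to win on every dataset, which is exactly the “No Free Lunch” point: the held-out comparison, not the algorithm’s reputation, decides.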


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/FVcWH2jLBtE/a-tour-of-the-top-10-algorithms-for-machine-learning-newbies-dde4edffae11

Original article

Why Jupyter is data scientists’ computational notebook of choice

Perched atop the Cerro Pachón ridge in the Chilean Andes is a building site that will eventually become the Large Synoptic Survey Telescope (LSST). When it comes online in 2022, the telescope will generate terabytes of data each night as it surveys the southern skies automatically. And to crunch those data, astronomers will use a familiar and increasingly popular tool: the Jupyter notebook. Jupyter is a free, open-source, interactive web tool known as a computational notebook, which researchers can use to combine software code, computational output, explanatory text and multimedia resources in a single document. Computational notebooks have been around for decades, but Jupyter in particular has exploded in popularity over the past couple of years. This rapid uptake has been aided by an enthusiastic community of user–developers and a redesigned architecture that allows the notebook to speak dozens of programming languages — a fact reflected in its name, which


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/LaXXzw6qa2Q/d41586-018-07196-1

Original article

Introducing AdaNet: Fast and Flexible AutoML with Learning Guarantees

Posted by Charles Weill, Software Engineer, Google AI, NYC

Ensemble learning, the art of combining different machine learning (ML) model predictions, is widely used with neural networks to achieve state-of-the-art performance, benefitting from a rich history and theoretical guarantees to enable success at challenges such as the Netflix Prize and various Kaggle competitions. However, ensembles aren’t used much in practice due to long training times, and selecting the ML model candidates requires its own domain expertise. But as computational power and specialized deep learning hardware such as TPUs become more readily available, machine learning models will grow larger and ensembles will become more prominent. Now, imagine a tool that automatically searches over neural architectures, and learns to combine the best ones into a high-quality model. Today, we’re excited to share AdaNet, a lightweight TensorFlow-based framework for automatically learning high-quality models with minimal expert intervention. AdaNet builds on our recent reinforcement learning
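
To illustrate the ensemble idea the post is built around (combining the predictions of several models usually beats relying on any single one), here is a minimal sketch using scikit-learn’s VotingClassifier. This is not the AdaNet API, and the synthetic dataset and base models are illustrative assumptions; AdaNet itself searches over and combines neural subnetworks in TensorFlow.

# Minimal sketch of ensemble learning via soft voting (probability averaging).
# Illustrates the general idea only; this is not the AdaNet API.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base models playing the role of ensemble members (AdaNet would instead
# grow and combine candidate neural subnetworks).
members = [
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("logreg", LogisticRegression(max_iter=1000)),
    ("mlp", MLPClassifier(max_iter=1000, random_state=0)),
]

# Soft voting averages the members' predicted class probabilities.
ensemble = VotingClassifier(estimators=members, voting="soft")
ensemble.fit(X_train, y_train)
print("ensemble accuracy:", round(ensemble.score(X_test, y_test), 3))

# For comparison, each member on its own.
for name, model in members:
    model.fit(X_train, y_train)
    print(f"{name} alone:", round(model.score(X_test, y_test), 3))

The long training times the post mentions come from the fact that every member must be trained; frameworks like AdaNet try to automate which candidates are worth adding to the ensemble.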


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/SGdlpChU7xA/introducing-adanet-fast-and-flexible.html

Original article

The Linux Kernel Is Now VLA (Variable-Length Array) Free

The in-development Linux 4.20 kernel is now effectively VLA-free. Variable-length arrays (VLAs) are part of the C99 standard and can be convenient, but they can have unintended consequences.
VLAs allow array lengths to be determined at run-time rather than at compile time. The Linux kernel has long relied upon VLAs in different parts of the kernel, including within structures, but an effort has been underway for months now (and years, if counting the kernel Clang efforts) to remove the usage of variable-length arrays from the kernel. The problems with them are:
– Using variable-length arrays can add some minor run-time overhead to the code, since the size of the array must be determined at run-time.
– VLAs within structures are not supported by the LLVM Clang compiler (Clang supports only standard C99-style VLAs), which is an issue for those wanting to build the kernel with compilers other than GCC.
– Arguably most


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/58c6B0H3q9c/scan.php

Original article

How to Install and Use logrotate to Manage Log Files in Ubuntu 18.04 LTS

Log files are important for Linux system security. The logrotate tool is designed to simplify the administration of log files on a Linux system; it allows automatic rotation, compression, removal, and mailing of log files. In this tutorial, I will explain how to use logrotate to manage logs on an Ubuntu 18.04 server.


Original URL: https://www.howtoforge.com/tutorial/ubuntu-logrotate/

Original article

Atlassian sells Jitsi, an open-source videoconferencing tool it acquired in 2015, to 8×8

After announcing earlier this year that it planned to shut down HipChat and Stride and sell the IP of both to Slack, today enterprise software company Atlassian made another move related to its retreat from enterprise chat. It is selling Jitsi, a popular open-source chat and videoconferencing tool, to 8×8, a provider of cloud-based business phone and internal communications services. 8×8 says it plans to integrate Jitsi with its current conferencing solutions, specifically a product called 8×8 Meetings, and to keep it open source.
Terms of this latest sale to 8×8 have not been disclosed. Both the tech and the engineering team working on Jitsi, led by Emil Ivov, are coming with the acquisition.
Atlassian originally acquired Jitsi and its owner BlueJimp for an undisclosed sum in 2015 with the intention of adding video communications to HipChat, and later Stride (which launched in 2017).
But now those two products are headed for the graveyard


Original URL: http://feedproxy.google.com/~r/Techcrunch/~3/9LHqsnVLvhU/

Original article

Georgia State and publishers continue legal battle over fair use of course materials

When three publishers sued Georgia State University 10 years ago for sharing excerpts of textbooks with students at no charge, librarians and faculty members took notice.

The lawsuit was a big deal for universities offering “e-reserves” to students — free downloadable course materials that often included scanned pages from print textbooks.

GSU (and many other higher ed institutions) believed that this use of publisher content was within the bounds of “fair use” — a much debated tenet of copyright law. Oxford University Press, Cambridge University Press and Sage Publications disagreed. The publishers argued that this use of copyrighted materials without a license constituted infringement.

Then the courts went to work. In 2012, U.S. District Judge Orinda Evans sided largely against the publishers. The court ruled that 43 of 48 alleged cases of infringement were fair use — a judgment heralded as a victory for higher education institutions and libraries.

But in 2014, the U.S.


Original URL: https://www.insidehighered.com/news/2018/10/30/georgia-state-and-publishers-continue-legal-battle-over-fair-use-course-materials

Original article
