Lawsuit accuses PACER of overcharging for document downloads, misusing budget surplus

For digital document fans, the government’s PACER legal document retrieval system has long been a bone of contention. It charges ten cents per page for document retrieval, which adds up quickly for documents that run to dozens of pages; you essentially have to pay to download a document just to read enough of it to know whether it’s the one you actually want. (The fee is capped at $3 per single document, but the charges can still add up to significant sums.) As I noted a couple of years ago, I ran up $41 in PACER charges just while covering a story for TeleRead.
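To make the fee schedule concrete, here is a minimal sketch in Python of how the per-page charge and the per-document cap quoted above combine; the function name and the example page counts are mine, not PACER’s.

```python
def pacer_charge(pages: int, per_page: float = 0.10, cap: float = 3.00) -> float:
    """Cost of retrieving one document: ten cents per page, capped at $3.00."""
    return min(pages * per_page, cap)

print(pacer_charge(12))    # a 12-page motion: $1.20
print(pacer_charge(200))   # a 200-page filing still bills at the $3.00 cap
# A $41 bill therefore represents hundreds of billed pages spread across
# many separate documents, each charged (and capped) individually.
```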

Ars Technica has a story about a class action lawsuit brought by three nonprofits against PACER, claiming that the proceeds of all those document payments are being misappropriated. PACER is authorized by law to charge fees necessary “to reimburse expenses in providing these services,” but the suit holds that millions of dollars in PACER proceeds have been used to pay for other projects instead. As it turns out, PACER was bringing in much more money than it needed.

“Rather than reduce the fees to cover only the costs incurred, the AO instead decided to use the extra revenue to subsidize other information-technology-related projects—a mission creep that only grew worse over time,” the suit (PDF) claims. Citing government records, the suit says that by the end of 2006 the judiciary’s information-technology fund had accumulated a surplus of $150 million, of which $32 million came from PACER fees. When the fee was raised to 10 cents a page in 2012, PACER’s income rose to $145 million, “much of which was earmarked for other purposes such as courtroom technology, websites for jurors, and bankruptcy notification systems,” according to the suit.

PACER also declined to give journalists a four-month fee exemption they had requested in order to analyze how effective certain legal software was. All told, PACER comes off as fairly greedy, given how cheap digital documents are to store and retrieve.

That’s why the late Aaron Swartz helped develop an alternative document retrieval system called RECAP, which automatically uploads pages its users download from PACER to its own servers so that anyone can read them for free thereafter. It saves a lot of people a lot of money, but the archive grows only one document at a time, as paying PACER users retrieve them.
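The mechanics behind that model are easy to sketch. What follows is a hypothetical client flow, not RECAP’s actual API: the archive URL and both helper functions are placeholders invented purely for illustration.

```python
import urllib.error
import urllib.request

# Placeholder mirror URL, invented for this sketch; the real RECAP archive
# lives elsewhere and has its own interface.
FREE_ARCHIVE = "https://free-archive.example.org/documents/"

def purchase_from_pacer(doc_id: str) -> bytes:
    """Placeholder for a paid PACER download (ten cents a page, $3 cap)."""
    raise NotImplementedError("requires a PACER account")

def upload_to_archive(doc_id: str, pdf: bytes) -> None:
    """Placeholder: contribute the purchased copy back to the free mirror."""

def fetch_document(doc_id: str) -> bytes:
    """Check the free, crowd-sourced mirror first; pay PACER only on a miss,
    then share the purchased copy so the next reader gets it for free."""
    try:
        with urllib.request.urlopen(FREE_ARCHIVE + doc_id) as resp:
            return resp.read()          # someone already paid for this one
    except urllib.error.URLError:
        pdf = purchase_from_pacer(doc_id)
        upload_to_archive(doc_id, pdf)  # free for everyone from now on
        return pdf
```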

If PACER is making so much money that it’s able to run at a surplus, it seems that its retrieval fees should be cut back to something more reasonable. There’s no reason citizens should need to pay inflated rates to retrieve documents that are legally in the public domain.



Original URL: http://www.teleread.com/lawsuit-accuses-pacer-of-overcharging-for-document-downloads-misusing-budget-surplus/

Original article

The Perfect Server – Ubuntu 16.04 (Xenial Xerus) with Apache, PHP, MySQL, PureFTPD, BIND, Postfix, Dovecot and ISPConfig 3.1

This tutorial shows how to install an Ubuntu 16.04 (Xenial Xerus) server with Apache2, BIND and Dovecot, and then how to install the ISPConfig 3.1 control panel on top of it. ISPConfig 3 is a web hosting control panel that lets you configure the following services through a web browser: Apache or nginx web server, Postfix mail server, Courier or Dovecot IMAP/POP3 server, MySQL, BIND or MyDNS nameserver, PureFTPd, SpamAssassin, ClamAV, and many more. This setup covers Apache (instead of nginx), BIND (instead of MyDNS), and Dovecot (instead of Courier).
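As a rough sketch of the package set that stack implies (the package names are the stock Ubuntu 16.04 ones as best I recall, not the tutorial’s own commands, and the linked guide configures far more than a bare install does):

```python
import subprocess

# The services named above, mapped to what I believe are their stock
# Ubuntu 16.04 package names; ISPConfig itself is installed separately
# per the tutorial, and each of these services still needs configuring.
PACKAGES = [
    "apache2", "php", "mysql-server",             # web server, PHP, database
    "postfix", "dovecot-imapd", "dovecot-pop3d",  # mail: SMTP plus IMAP/POP3
    "bind9",                                      # DNS nameserver
    "pure-ftpd",                                  # FTP
    "spamassassin", "clamav",                     # spam and malware filtering
]

subprocess.run(["apt-get", "update"], check=True)
subprocess.run(["apt-get", "install", "-y", *PACKAGES], check=True)
```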


Original URL: https://www.howtoforge.com/tutorial/perfect-server-ubuntu-16.04-with-apache-php-myqsl-pureftpd-bind-postfix-doveot-and-ispconfig/

Original article

Facebook bug hunter finds a backdoor left by hackers on corporate server

When Orange Tsai set out to participate in Facebook’s bug bounty program in February, he successfully managed to gain access to one of Facebook’s corporate servers. But once in, he realized that malicious hackers had beaten him to it.

Tsai, a consultant with Taiwanese penetration testing outfit Devcore, had started by mapping Facebook’s online properties, which extend beyond user-facing services like facebook.com or instagram.com.

One server that caught his attention was files.fb.com, which hosted a secure file transfer application made by enterprise software vendor Accellion and was presumably used by Facebook employees for file sharing and collaboration.



Original URL: http://www.computerworld.com/article/3060623/security/facebook-bug-hunter-finds-a-backdoor-left-by-hackers-on-corporate-server.html#tk.rss_all

Original article

The average size of Web pages is now the average size of a Doom install


In July 2015 I suggested that the average web page weight would equal that of the Doom install image in about seven months’ time.

In about 7 months average web page size will be same as Doom install image.

Well done us! Onwards & upwards! pic.twitter.com/xtSAtZjPGl

— ronan cremin (@xbs) July 30, 2015

Well, we’ve made it, albeit a bit later than expected. This is where we are today:

[Chart: average web page size over time, revisited]

Recall that Doom is a first-person shooter that ships with an advanced 3D rendering engine and multiple levels, each made up of maps, sprites and sound effects. By comparison, the web of 2016 struggles to deliver a single page of content in the same size. If that doesn’t give you pause, you’re missing something. So where does this leave us?

Overall trends in page size

First, the good news. The remarkably straight line that charted the web’s creeping bloat over the years to 2015 looks to have softened ever so slightly—the slope is now slightly lower than the historical average. The bad news is that we’re still adding weight almost as fast as before.

Top performers vs. the rest

While the overall average page size is increasing inexorably, it’s worth looking at the weight of the top ten websites separately. The following chart shows the progression of the global average page weight vs. the top ten websites.

[Chart: Alexa top ten page weight vs. all sites]

There are two points of note here.

  1. The top ten sites are significantly lighter than the rest (worth noting if you want to be a top website).
  2. While the web in general is steadily getting heavier, the top sites have turned the corner.

Readers will point out that some of the top sites are search engines and thus have an easier job keeping things light, but even so the second point stands: they are getting lighter, not heavier. Note also that the top ten list includes some relatively rich sites, such as Amazon.

Note: the average page size figure cited here obscures a lot of important detail. Ilya Grigorik points out that web page weights are not normally distributed, so the intuitive sense of an average value is misleading in this case (as it often is). As Ilya says:

Let’s start with the obvious: the transfer size is not normally distributed, and there is no meaningful “central value” and talking about the mean is meaningless, if not deceiving.

He is of course correct: in this case the average figure is heavily influenced by very large outliers or, to put it another way, the desktop web in particular has a long and heavy tail (the mobile web is less tail-heavy). Sometimes the median (the middle value of a series) gives a better sense of what people actually experience, but ultimately cumulative distribution functions are the only way to describe the reality accurately.
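A quick simulation makes the point. The distribution below is synthetic (a lognormal stand-in for real transfer sizes, not HTTP Archive data), but it shows how a heavy right tail drags the mean well above what most visitors actually experience:

```python
import numpy as np

# Synthetic page weights in KB: right-skewed, like real transfer sizes.
rng = np.random.default_rng(42)
weights_kb = rng.lognormal(mean=7.2, sigma=0.9, size=100_000)

print(f"mean:   {weights_kb.mean():7.0f} KB")      # pulled up by huge outliers
print(f"median: {np.median(weights_kb):7.0f} KB")  # closer to a 'typical' page
for p in (75, 90, 95, 99):                         # the CDF view, in brief
    print(f"p{p}:    {np.percentile(weights_kb, p):7.0f} KB")
```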

Outlook for the web

2015 was clearly a crisis year for the web. Multiple events raised awareness of just how bad performance had gotten: ad blockers arrived on iOS; Facebook and then Google announced schemes to help make the web faster; web page bloat even made the pages of the New York Times. Vox Media (the company behind The Verge) declared performance bankruptcy.

While ads are still the web’s pantomime villain, the reality is that we haven’t been paying enough attention to performance. As the web went through its awkward teenage years we let creeping featuritis take hold, and eventually clutter simply got the better of us. New JavaScript gallery module? Sure, why not? Oooh, that new web font would look nice here, and why not add another analytics tool while we’re in there? Should I bother resizing this 6,000-pixel image? Nah, let the browser do it; works for me.

Technology adoption sometimes follows a pattern of early experimentation → overuse → sensible long-term level. In the 1990s, when desktop publishing technology suddenly became accessible at a much lower price point thanks to PCs and cheap DTP software, people went crazy: the more colours and fonts you could cram onto a page, the better. It took many years for this trend to peter out and settle down to a point where creativity and user experience found their balance.

It’s clear that awareness of web performance amongst leading practitioners is now heightened sufficiently for change. But as you can see from the graphs above, this awareness hasn’t yet trickled down to everyone else; the top ten websites are leading indicators, and the rest will hopefully follow later. Some changes are already well underway: WordPress’s support for responsive images in version 4.4 alone will make a big difference to the global average page size, since WordPress powers about 26% of websites, and Drupal 8 has done the same by adding a responsive images module to core.

There will be a virtuous-circle effect here: as key sites improve their performance, the slower ones will stand out even more. AMP and Facebook Instant Articles will serve to remind people just how fast the web can be. Google is sending ever-stronger signals that performance will affect rankings, and may even add a “slow” label to sites. There is nothing like the harsh light of search engine rankings to effect change; even creeping bloat may have met its match.

Far from being doomed, 2015 was the best year for the web in a long time, and 2016 looks set to continue that momentum. But let’s not waste the web bloat crisis.

EDITED 20/04/2016 to add comment about average page weight


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/tb9sKRcSRhU/the-web-is-doom

Original article

Ubuntu 16.04’s new Snap format is a security risk

[Photo: Matthew Garrett. Caption: “As long as Ubuntu desktop still uses X11, the Snap format provides you with very little meaningful security.” Image: ZDNet]

The new Snap app package format is a headline feature of the new Ubuntu 16.04, touted by Canonical as a secure way of developing software that makes it impossible for an app to steal your data.

“The security mechanisms in Snap packages allow us to open up the platform for much faster iteration across all our flavours as Snap applications are isolated from the rest of the system,” Olli Ries, head of Canonical’s Ubuntu client platform products and releases, wrote earlier this month.

“Users can install a Snap without having to worry whether it will have an impact on their other apps or their system,” he continued.

But that claim is only half true, according to Matthew Garrett, a well-known Linux kernel developer who works on security at CoreOS.

He contends that using Snap packages on Ubuntu mobile does offer genuine security improvements, but on the desktop that claim is “horribly, awfully misleading”.

“Any Snap package you install is completely capable of copying all your private data to wherever it wants with very little difficulty,” wrote Garrett.

To prove his point, he built a proof-of-concept attack package in the Snap format that first shows an “adorable” teddy bear and then logs keystrokes from Firefox. The PoC only injects a harmless command, but it could be tweaked to include a cURL session that steals private SSH keys.

Garrett says the key reason Snap offers so little security on the Ubuntu desktop is that the desktop still uses the X11 window system.

“X has no real concept of different levels of application trust. Any application can register to receive keystrokes from any other application. Any application can inject fake key events into the input stream. An application that is otherwise confined by strong security policies can simply type into another window,” he wrote.

“An application that has no access to any of your private data can wait until your session is idle, open an unconfined terminal and then use cURL to send your data to a remote site.

“As long as Ubuntu desktop still uses X11, the Snap format provides you with very little meaningful security.”
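This shared-trust model is easy to see from any ordinary X11 client. The following is a benign sketch, not Garrett’s PoC: it only reads the titles of every other application’s windows (roughly what `xwininfo -tree -root` shows), and it assumes the third-party python-xlib package and a running X session.

```python
from Xlib import display  # third-party: python-xlib

d = display.Display()      # connect to whatever $DISPLAY points at
root = d.screen().root     # root window of the default screen

def walk(window, depth=0):
    """Recursively print the title of every window on the display."""
    try:
        name = window.get_wm_name()
    except Exception:
        name = None
    if name:
        print("  " * depth + str(name))
    for child in window.query_tree().children:
        walk(child, depth + 1)

walk(root)  # any client can do this; X11 draws no trust boundary between them
```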


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/G7igf9x-sXk/

Original article

Windows Ink, Cortana improvements and more arrive in the latest Windows 10 build out now

It’s going to be a good Friday for those testing the latest releases of the Windows 10 operating system, as Microsoft is today rolling out a new build of its PC and mobile OS which will allow users to try the newly announced Windows Ink experience for the first time. Windows Ink, announced at the Build 2016 event last month, offers improved pen support for Windows 10 PCs, including…


Original URL: http://feedproxy.google.com/~r/Techcrunch/~3/P6IiYQtKNkI/

Original article

Core Windows Utility Can Be Used To Bypass Whitelisting

Reader msm1267 writes: A core Windows command-line utility, Regsvr32, used to register DLLs in the Windows Registry, can be abused to run remote code from the Internet, bypassing whitelisting protections such as Microsoft’s AppLocker. A researcher who requested anonymity found and recently privately disclosed the issue to Microsoft. It’s unknown whether Microsoft will patch this issue with a security bulletin or in a future release. Regsvr32, also known as Microsoft Register Server, is a Microsoft-signed binary that ships by default on Windows. The researcher’s proof-of-concept allows him to download and run JavaScript or VBScript from a URL provided via the command line. “There’s really no patch for this; it’s not an exploit. It’s just using the tool in an unorthodox manner. It’s a bypass, an evasion tactic,” the researcher said. The Register reports: “It’s built-in remote code execution without admin rights and which bypasses Windows whitelisting. I’d say it’s pretty bad,” said Alex Ionescu, a Windows and ARM kernel guru. The trick — Smith didn’t want to call it an exploit — is neat because it does not touch the Registry, does not need administrator rights, can be wrapped up in an encrypted HTTP session, and should leave no trace on disk as it’s a pure to-memory download. No patch exists for this, although regsvr32 can be firewalled off from the Internet. Microsoft was not available for immediate comment.





Original URL: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/Wy7EHStPZVQ/core-windows-utility-can-be-used-to-bypass-whitelisting

Original article
