How to Install Piwik with Nginx on Ubuntu 15.10

Piwik is the leading open source web analytics application, developed as an alternative to Google Analytics. In this tutorial, I will show you how to install Piwik on Ubuntu 15.10 with the Nginx web server and PHP 5.6 in php-fpm mode. We will use MariaDB as the database system.


Original URL: https://www.howtoforge.com/tutorial/how-to-install-piwik-with-nginx-on-ubuntu-15-10/

Original article

Now Available: Improved Training Course for AWS Developers

My colleague Mike Stroh is part of our sales training team. He wrote the guest post below to introduce you to our newest AWS training courses.


Jeff;

We routinely tweak our 3-day AWS technical training courses to keep pace with AWS platform updates, incorporate learner feedback, and reflect the latest best practices.

Today I want to tell you about some exciting enhancements to Developing on AWS. Whether you’re moving applications to AWS or developing specifically for the cloud, this course can show you how to use the AWS SDK to create secure, scalable cloud applications that tap the full power of the platform.

What’s New
We’ve made a number of updates to the course—most stem directly from the experiences and suggestions of developers who took previous versions of the course. Here are some highlights of what’s new:

  • Additional Programming Language Support – The course’s 8 practice labs now support Java, .NET, Python, and JavaScript (for Node.js and the browser), plus the Windows and Linux operating systems.
  • Balance of Concepts and Code – The updated course expands coverage of key concepts, best practices, and troubleshooting tips for AWS services to help students build a mental model before diving into code. Students then use an AWS SDK to develop apps that apply these concepts in hands-on labs.
  • AWS SDK Labs – Practice labs are designed to emphasize the AWS SDK, reflecting how developers actually work and create solutions. Lab environments now include EC2 instances preloaded with all required programming language SDKs, developer tools, and IDEs. Students can simply log in and start learning!
  • Relevant to More Developers – The additional programming language support helps make the course more useful to both startup and enterprise developers.
  • Expanded Coverage of Developer-Oriented AWS Services – The updated course puts more focus on the AWS services relevant to application development. So there’s expanded coverage of Amazon DynamoDB, plus new content on AWS Lambda, Amazon Cognito, Amazon Kinesis Streams, Amazon ElastiCache, AWS CloudFormation, and others.

Here’s a map that will help you understand how the course flows from topic to topic:

How to Enroll
For full course details, look over the Developing on AWS syllabus, then find a class near you. To see more AWS technical courses, visit AWS Training & Certification.

Mike Stroh, Content & Community Manager


Original URL: http://feedproxy.google.com/~r/AmazonWebServicesBlog/~3/cY_-jOVcUK8/

Original article

GitHub: Update on 1/28 service outage

On Thursday, January 28, 2016 at 00:23 UTC, we experienced a severe service outage that impacted GitHub.com. We know that any disruption in our service can impact your development workflow, and we are truly sorry for the outage. While our engineers are investigating the full scope of the incident, I wanted to quickly share an update on the situation with you.

A brief power disruption at our primary data center caused a cascading failure that impacted several services critical to GitHub.com’s operation. While we worked to recover service, GitHub.com was unavailable for two hours and six minutes. Service was fully restored at 02:29 UTC. Last night we completed the final procedure to fully restore our power infrastructure.

Millions of people and businesses depend on GitHub. We know that our community deeply feels the effects of our site going down. We’re actively taking measures to improve our resilience and response time, and will share details from these investigations.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/g3VvfjZjYkU/2101-update-on-1-28-service-outage

Original article

IBM Closes Weather Co. Purchase, Names David Kenny New Head Of Watson Platform

[Photo caption: The Weather Company meteorologist Jess Parker (left) works with IBM Watson developer Chris Ackerson (right) to install a personal weather station on the roof of IBM Watson headquarters at Astor Place in New York City. More than 180,000 people around the world connect personal weather stations, like the one pictured, to Weather Underground's worldwide personal weather station network to share live, localized weather data with people around the globe. IBM today announced that it has closed the acquisition of The Weather Company's B2B, mobile, and cloud-based web properties: weather.com, Weather Underground, The Weather Company brand, and WSI, its global business-to-business brand. (Jon Simon/Feature Photo Service for IBM)]

IBM is taking another step to expand its Watson AI business and build its presence in areas like IoT: today the company announced that its acquisition of the Weather Company — the giant weather media and data group — has now officially closed. IBM is not disclosing the value of the deal: it was originally reported to be in the region of $2 billion, but sources close to IBM tell us…


Original URL: http://feedproxy.google.com/~r/Techcrunch/~3/b1_b2giKkqU/

Original article

Berkman Center releases tool to combat ‘link rot’

This week, the Berkman Center for Internet and Society at Harvard University announced the release of Amber, a free software tool for websites and blogs that preserves content and prevents broken links. When installed on a blog or website, Amber can take a snapshot of the content of every linked page, ensuring that even if those pages are interfered with or blocked, the original content will be available.

“The Web’s decentralization is one of its strongest features,” said Jonathan Zittrain, Faculty Chair of the Berkman Center and George Bemis Professor of International Law at Harvard Law School. “But it also means that attempting to follow a link might not work for any number of reasons. Amber harnesses the distributed resources of the Web to safeguard it. By allowing a form of mutual assistance among Web sites, we can together ensure that information placed online can remain there, even amidst denial of service attacks or broad-based attempts at censorship.”

The release of Amber builds on an earlier proposal from Zittrain and Sir Tim Berners-Lee for a “mutual aid treaty for the Internet” that would enable operators of websites to easily bolster the robustness of the entire web. It also aims to mitigate risks associated with increasing centralization of online content. Increasingly fewer entities host information online, creating choke points that can restrict access to web content. Amber addresses this by enabling the storage of snapshots via multiple archiving services, such as the Internet Archive’s Wayback Machine and Perma.cc.

Amber is useful for any organization or individual that has an interest in preserving the content to which their website links. In addition to news outlets, fact-checking organizations, journalists, researchers, and independent bloggers, human rights curators and political activists could also benefit from using Amber to preserve web links. The launch is the result of a multi-year research effort funded by the U.S. Agency for International Development and the Department of State.

“We hope supporters of free expression may use Amber to rebroadcast web content in a manner that aids against targeted censorship of the original web source,” said Geneve Campbell, Amber’s technical project manager. “The more routes we provide to information, the more all people can freely share that information, even in the face of filtering or blockages.”

Amber is one of a suite of initiatives of the Berkman Center focused on preserving access to information. Other projects include Internet Monitor, which aims to evaluate, describe, and summarize the means, mechanisms, and extent of Internet content controls and Internet activity around the world; Lumen, an independent research project collecting and analyzing requests for removal of online content; and Herdict, a tool that collects and disseminates real-time, crowdsourced information about Internet filtering, denial of service attacks, and other blockages. It also extends the mission of Perma.cc, a project of the Library Innovation Lab at the Harvard Law School Library. Perma.cc is a service that helps scholars, courts and others create web citation links that will never break.

Amber is now available for sites that run on WordPress.org or Drupal. Find out more and download the plugin at amberlink.org.


Original URL: http://today.law.harvard.edu/berkman-center-releases-tool-to-combat-link-rot/

Original article

Show HN: Ship – A fast, native issue tracker for software projects

Nick Sivo and I are releasing Ship, an issue tracker for software projects.

We started with the premise that it should be a native app, in recognition of how frequently it ought to be referenced when used properly. We then decided to design around the features that we are uniquely strong at.

Ship’s name comes from this story about the release of the Macintosh, which I love because it’s both inspirational and practical at the same time. With such an ambitious name, we had to press through and actually, you know, ship something. Here it is: https://www.realartists.com.

Preconceptions and Assumptions

This first version of Ship is built for people like us. Specifically, we want to track hundreds or thousands of issues and pull up something we filed two months ago as quickly and easily as something we filed two hours ago.

We want to reduce friction when filing all of those issues, but we also want some structure, grouping issues by classification into components with specific milestones. Having some structure makes issues easier to find and lets us visualize our progress and determine just how much we have to go.

For example:

[Image: milestone progress chart]

We want to be able to quickly answer all sorts of questions with easily built queries. Here are some of my favorites:

Not my problem (open issues that used to be assigned to me but no longer are):

[Image: 'Not my problem' query results]

Enhance! (open enhancements assigned to me – got to stay on top of the little things):

[Image: 'Enhance' query results]

Changes built by the buildbot since build 250:

[Image: query results for changes since build 250]

We also want a scriptable API to store relevant information from our other tools. For example, this is how our buildbot tags issues that are fixed in each of our builds:

# shippy is Ship's Python API client; commits, locate_fixed_problems, and the
# parsed command-line args are defined elsewhere in our buildbot script.
import shippy

api = shippy.Api(token=args.apitoken)

# Collect the identifiers of every issue these commits claim to fix.
identifiers = set()
for commit in commits:
    identifiers.update(locate_fixed_problems(commit.message))

# Tag each fixed issue with the build that contains the fix.
for id in identifiers:
    api.problem_keyword_set(id, "Built in %s" % (args.bot), args.build)

Technical Notes

We decided to build Ship with proper bidirectional offline support from day zero, enabling some unique and powerful features. From a technical standpoint, this sets Ship apart from online-only issue tracking systems. Because of that, I thought I’d highlight some interesting tidbits about how we implemented our offline support. I’ll start with the database schema and queries, explain how offline edits are handled, and cover file attachments and historical snapshots.

At a high level, data in Ship is represented as a log. Everything you do is a log entry. Every attribute of an issue you file is just a log entry, and every issue is itself just the sequence of those entries. Here’s what it looks like in JSON:

{
  "identifier": "39d11394-8d44-4662-b418-75d4bb0c1543",
  "problemIdentifier": 501,
  "authorIdentifier": "27a1cb73-36cb-4573-8edf-be0adc9b9f54",
  "creationDate": "2016-01-22T21:04:46.23Z",
  "sequence": 35092,
  "batchIdentifier": "0f0bb8bf-a670-47d0-a796-ca550f88ba71",
  "type": "state",
  "stateIdentifier": "93a067fe-011f-4cca-9fa2-00aa18d38d85"
},
{
  "identifier": "7862b8f6-d038-4e31-ae23-2e3bfc8e162c",
  "problemIdentifier": 501,
  "authorIdentifier": "27a1cb73-36cb-4573-8edf-be0adc9b9f54",
  "creationDate": "2016-01-22T21:04:46.23Z",
  "sequence": 35093,
  "batchIdentifier": "0f0bb8bf-a670-47d0-a796-ca550f88ba71",
  "type": "title",
  "title": "Weird animation when bringing up Keywords popover"
}

Every time a client connects to the server, it informs the server of its latest log entry sequence number. The server then compresses and sends all of the entries that have been created since the client’s last connection. Further, the connection remains open, and new changes are streamed to all connected clients.
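As a rough illustration of that catch-up step, here is a minimal in-memory sketch with hypothetical names (not Ship’s actual wire protocol): the server only needs to filter its log by the client’s last known sequence number.

# Minimal sketch of sequence-based catch-up sync; names are hypothetical.
class SyncServer:
    def __init__(self):
        self.log = []  # log entries, each carrying a monotonically increasing sequence

    def append(self, entry):
        entry["sequence"] = len(self.log) + 1
        self.log.append(entry)

    def entries_since(self, last_sequence):
        # Everything the client has not seen yet; the real server would also
        # compress this payload and keep streaming new entries afterwards.
        return [e for e in self.log if e["sequence"] > last_sequence]

server = SyncServer()
server.append({"problemIdentifier": 501, "type": "title", "title": "Weird animation"})
server.append({"problemIdentifier": 501, "type": "state", "stateIdentifier": "93a067fe"})

client_last_sequence = 1
for entry in server.entries_since(client_last_sequence):
    print(entry)  # the client applies each entry to its local mirror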

On both the client and the server, the log is stored in a single database table (one row per entry). Flattened representations of the issues are stored in an additional table, where the data is computed by rolling up the log on a per issue basis (one row per issue). This flattened representation enables fast and efficient querying.
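A rollup like that can be as simple as replaying each issue’s log in sequence order and letting later entries overwrite earlier attributes. Here is a minimal sketch under that assumption, using the field names from the JSON above (Ship’s real rollup surely handles many more entry types).

from collections import defaultdict

def roll_up(log_entries):
    # Flatten a stream of log entries into one dict per issue.
    issues = defaultdict(dict)
    for entry in sorted(log_entries, key=lambda e: e["sequence"]):
        issue = issues[entry["problemIdentifier"]]
        if entry["type"] == "title":
            issue["title"] = entry["title"]
        elif entry["type"] == "state":
            issue["state"] = entry["stateIdentifier"]
        # further entry types (assignee, priority, comments, ...) would be handled here
        issue["lastModified"] = entry["creationDate"]
    return issues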

Database Schema and Queries

One interesting aspect of the client / server design is that both the client and the server use essentially the same core database schema and contain the same data. This means that, at a high level, a single query makes sense to run against both the client database and the server database. While initially only the client supported querying, as we built our REST and Python APIs we found ourselves wanting to run both ad hoc and user-saved queries against the server.

The clients use NSPredicate to perform queries against their Core Data stores. This is done for two reasons: first, the Mac client uses a heavily customized NSPredicateEditor to good effect to let people build queries interactively; and second, how else are you going to query Core Data?

The server side is an ASP.NET app backed by SQL Server. We wrote a C# library that can parse NSPredicate-formatted queries and convert them into LINQ expression trees that we can run with Entity Framework on the server. I’m guessing the set of people who have a bunch of NSPredicate-defined queries that they also want to run in .NET land is probably just Nick and me, but it’s still a neat technical achievement.

NSPredicate also gives us a lingua franca for queries. A lot of issue trackers invent their own query language, and I feel that’s a mistake: almost nobody except the people who work on the tracker is going to remember an issue-tracker-specific query language. NSPredicate, unlike SQL, is easily composed and decomposed to and from a visual query builder, which is what most users will want to use; and if you do drop down and write NSPredicates directly, as our API allows you to do, at least some set of users are already fluent with the query language.

Offline Edits

What I’ve described so far should give you the picture of how read support in offline mode works in the Ship client. The client maintains its own complete mirror of the server’s database, and the server live syncs changes down to it whenever the client is online.

But what if you want to file a new issue or edit an existing issue while offline? In these cases, the client updates its local database with the corresponding pending log entries. When you go back online, the client attempts to sync those pending changes back to the server, and most of the time succeeds.

However, in some cases, your changes might conflict with changes made by somebody else. When that happens, we fork the issue (just like git) and show a really nice merge conflicts UI that allows you to resolve the conflicts. Because we know exactly where in the log pending changes were applied, we can ensure we only conflict in the case that two people make logically simultaneous and mutually exclusive edits (e.g. you change the priority from 1 to 2 and I change it from 1 to 3 without knowing about your change). Our merge support is probably overkill since a lot of popular tools choose the data loss route in this case, but data loss bugs me so much that we had to do the right thing.
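Put differently, a pending offline edit only conflicts when the server log already holds a newer entry that touches the same attribute of the same issue. A tiny sketch of that check might look like this (hypothetical structure, including a base_sequence field recording where the pending edit was applied; Ship’s real merge logic is certainly richer):

def conflicts(pending_entry, server_log):
    # A conflict means somebody else changed the same attribute of the same
    # issue after the point in the log that the pending edit was based on.
    for entry in server_log:
        if (entry["problemIdentifier"] == pending_entry["problemIdentifier"]
                and entry["type"] == pending_entry["type"]
                and entry["sequence"] > pending_entry["base_sequence"]):
            return True
    return False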

File Attachments

You can add files of arbitrary type and size inline in Ship. Storing and syncing complete bug databases for organizations on client machines for offline use is not actually that crazy. Even tens of thousands of issues don’t require that much storage if you ignore attachments.

Whereas clients have full databases in terms of their ability to create, view, and query issues offline, we can’t reasonably store all of the attachments offline. I mean, just counting Rob Ford gifs alone, I probably add more attachments than Nick would want to store on his computer. Instead, we optimistically download tiny attachments, caching them with a reasonably long lifespan. Bigger attachments are lazily downloaded and cached with a short lifespan, or when possible for supported media files, streamed on demand with no local caching.

On the server side, the attachments are stored in Azure blob storage. The process of uploading an attachment is basically: memory-map the file to be uploaded, then send checksummed and optionally deflated 4MB chunks up to the server until the file is fully uploaded. Since we’re operating only on relatively small chunks at a time, if we get interrupted due to network conditions or the laptop going to sleep or whatever, we can just pick up where we left off at the next opportunity.
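A stripped-down version of that upload loop might look like the following sketch. The 4MB chunk size matches the description above, but the checksum algorithm and the upload_chunk callback are stand-ins for whatever the real client and server use.

import hashlib
import mmap

CHUNK_SIZE = 4 * 1024 * 1024  # 4MB chunks, as described above

def upload_attachment(path, upload_chunk, start_offset=0):
    # upload_chunk(offset, data, checksum) is a placeholder for the real server call.
    # start_offset lets an interrupted upload resume where it left off.
    with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        offset = start_offset
        while offset < len(mm):
            chunk = mm[offset:offset + CHUNK_SIZE]
            checksum = hashlib.sha256(chunk).hexdigest()  # checksum choice is an assumption
            upload_chunk(offset, chunk, checksum)
            offset += len(chunk)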

We also use a kind of neat approach for compressing the chunks to upload. We try to use zlib to compress each chunk as we upload, and if we get an improvement, great. If we fail to improve the chunk size by some factor, we mark it as a failure and move on to the next chunk. If we get a certain number of failures (4), we assume the data we’re uploading is already compressed and stop trying to compress any more. This heuristic works well and allows us to really quickly upload large log files and the like, which are highly compressible, while not wasting CPU cycles in zlib on, say, mp4 screencasts.
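Here is roughly how that give-up heuristic could be expressed. This is only a sketch: the "some factor" above is unspecified, so the 0.9 ratio below is an assumed threshold; only the limit of four failures comes from the description.

import zlib

def compress_chunks(chunks, min_ratio=0.9, max_failures=4):
    # Yield (data, was_compressed) pairs. Once enough chunks have failed to
    # shrink meaningfully, assume the source is already compressed (e.g. an
    # mp4 screencast) and stop spending CPU on zlib.
    failures = 0
    for chunk in chunks:
        if failures >= max_failures:
            yield chunk, False
            continue
        compressed = zlib.compress(chunk)
        if len(compressed) <= len(chunk) * min_ratio:
            yield compressed, True
        else:
            failures += 1
            yield chunk, False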

One last cool thing we do is let other connected clients observe the upload progress of attachments over the web socket. So suppose Nick is attaching a large log file to an issue; from my computer I can actually see that the attachment is on its way, and I can ask to download it as it becomes available.

Historical Snapshots

Ship has a flexible progress charting system built into it. You can use it to see your incoming and outgoing issue rates and compare progress across milestones, teams, and team members. You can see if you’re speeding up or slowing down, getting buried or digging out.

To support these charting features, as well as a few other querying features in Ship, it is possible to query for issues not just as they are in the present, but as they were at a specific point in time or range of times in the past.

It turns out that this is actually fairly straightforward with the log format used by Ship. By simply taking subsequences of the sequence of log entries for a given issue at the committed save points (i.e. points where somebody saved an issue), we can produce all of the logical historical states of the issue. We just save those rolled up historical states into a table in the database with some validity date ranges attached and we’re ready to query against it for charting or various questions you might want to ask (e.g. show me all open issues that were ever assigned to me, but now are re-assigned to somebody else).
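As a sketch of that idea, and assuming (as in the JSON above) that entries sharing a batchIdentifier make up one save point, replaying successively longer prefixes of an issue’s log yields each of its historical states:

from itertools import groupby

def historical_states(issue_log):
    # Replay one issue's log a save point (batch) at a time, yielding the
    # flattened state of the issue as it looked after each save. Assumes a
    # batch's entries are contiguous in sequence order, as in the JSON above.
    entries = sorted(issue_log, key=lambda e: e["sequence"])
    state = {}
    for batch_id, batch in groupby(entries, key=lambda e: e["batchIdentifier"]):
        for entry in batch:
            if entry["type"] == "title":
                state["title"] = entry["title"]
            elif entry["type"] == "state":
                state["state"] = entry["stateIdentifier"]
        yield batch_id, dict(state)  # snapshot after this save point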


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/tFvoh597-f0/ship-it.html

Original article

Defending yourself from Amazon.com

The email address on file for an Amazon.com account should never be used anywhere else for anything.

There are multiple reports, spanning years, of attackers abusing the Amazon chat system to scam customer support representatives into divulging your personal information. All the scammers need is a victim’s name and email address.

[Screenshot: starting a chat with Amazon – without logging in first]

For years now, Amazon has known about this security hole in their procedures and done nothing about it.



Original URL: http://www.computerworld.com/article/3027244/security/defending-yourself-from-amazon-com.html#tk.rss_all

Original article

18F: Introducing the CSS coding style guide

January 11, 2016

Tagged: css / frontend / programming / style guide

by Marco Segreto and Jeremia Kimelman

18F is releasing our CSS coding style guide, which specifies our best practices and rules for writing consistent, maintainable CSS code. It was built with extensive user research to ensure we accurately understood the problems our developers were facing and to match them up with conventions in the public frontend community.

[Screenshot: CSS from the style guide]

What’s in the guide

The guide uses two approaches to improving CSS code at 18F: a written guide and a linter that automatically checks code for compliance with our guidelines.

The guide lays out rules and recommendations on writing consistent and maintainable CSS. The first part of the guide goes over 18F’s recommended CSS frameworks and CSS processing languages: Sass and Bourbon Neat. These were chosen because they’re broad enough to allow us to apply our standards in different situations, and they have widespread use in the current front end community.

We also have guidelines on whitespace, sorting order, naming conventions and general formatting. Supplying these rules allows different codebases to all remain consistent so 18F developers don’t have to adjust to new ways of writing CSS when they join different teams, which happens often due to the large number of simultaneous projects happening at 18F.

While many of the rules in this guide are meant to standardize CSS code across our projects, we also wanted to include best practices from the front end community. We researched numerous open source guides and resources to develop suggestions for CSS architecture, file structure, how and when to use certain language features, and CSS specificity. Since CSS as a language has very few constraints, having a set of guidelines can help less advanced developers write CSS in a more sustainable way.

While 18F developers are diligent about reading guides and following best practices, everyone can use a little help sometimes. So we made a linting tool that checks a codebase to ensure it conforms to all the rules in the guide. The linter can be used either on GitHub or locally to check code on a developer’s computer. If the linter finds any discrepancies between the code and our guide, it will issue a warning and the developer can choose how to proceed. Automated testing like this helps ensure we’re shipping the best quality code we can, while also freeing up time so developers can spend more time coding and less time testing.

Getting started

To get started using the style guide for your Sass code, we recommend setting up the linting tools and running them on your project using these instructions. The linter can either be set up to run locally or run on each pull request through the free Hound CI service. Either option works; it depends on what is best for your team.

Coming up

We’d like to continually improve this project as we go forward. Future updates may include:

  • Ensure we have linting tools that work with pure CSS rather than just Sass.
  • Update our linter so when it gives warnings they’ll tie back to the coding style guide. This could mean setting up identifiers for each rule in a similar way to how pylint does it.
  • Track metrics on the success of the style guide by monitoring how many 18F project teams are conforming to it.
  • Reach out to other federal development teams to expand or modify the guide so it can work for all federal teams.

If you have any feedback on the style guide, please open an issue at the 18F frontend repository on GitHub.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/0XpCBmMputs/

Original article

Amber: How TeleRead and other sites can now bring back dead Web links, Lazarus fashion

amberyoutubeDon’t you hate it when Web links vanish?

But what if a visitor to TeleRead or another blog could perform a Lazarus act and bring back the dead ones?

Wish no more. We’ve just added the Amber plugin for WordPress and Drupal, from the Berkman Center for Internet and Society at Harvard. Check out the related video.

From now on, if TeleRead links to an external site and the link no longer works, you can see the page just the same. The “hover” option to see the page will show up after two seconds.

At least that’s our hope. Let’s see if Amber gets along ok with our other plug-ins. The fall-back pages will be stored at the Internet Archive, although we could also have chosen our own server. We’re talking Web pages here. But what if copyright law allowed similar technology for preservation of books, especially networked ones?

Sorry, but this service is only “from now on.” It won’t work with already-vanished links. What’s more, Amber will not preserve pages from sites that opt out. And the preserved pages may not be the most recent versions.

Needless to say, I’m highly in favor of anything that mitigates “link rot.” TeleRead goes back to the 1990s and is the world’s oldest site devoted to general-interest news and views on e-books. We’ve outlasted many and perhaps most of the sites we’ve linked to.

We’re still working on these matters internally, by the way. If you do get our 404 page because you couldn’t find a post you were looking for, you’ll see a reminder to use the search box in the upper right. The desired page may still be on our site—just not at the same Web address.

The post Amber: How TeleRead and other sites can now bring back dead Web links, Lazarus fashion appeared first on TeleRead.


Original URL: http://www.teleread.com/e-reading-tips-apps-and-gadgets/amber-teleread-sites-can-now-bring-back-dead-web-links-lazarus-fashion/

Original article
