How to annotate the web for the world, your group, or your students

Using proper annotation can help you add context to content. Andy Wolber looks at three Chrome web extensions that let you add another layer of information to a web page.


Original URL: http://techrepublic.com.feedsportal.com/c/35463/f/670841/s/4e4dc7d5/sc/28/l/0L0Stechrepublic0N0Carticle0Chow0Eto0Eannotate0Ethe0Eweb0Efor0Ethe0Eworld0Eyour0Egroup0Eor0Eyour0Estudents0C0Tftag0FRSS56d97e7/story01.htm

Original article

The Linux Foundation announces free ‘Intro to Cloud Infrastructure Technologies’ edX course


One of the most important things any human can do is learn. Keeping your mind sharp and active is very healthy. While formal education, such as college, is a great investment in yourself, it is understandably too expensive for many. While I cherish my degree, I don’t love my monthly student loan payment.

Luckily for current learners, massive open online courses are becoming increasingly popular. While these MOOCs may not carry the same panache as a university diploma, they can enable you to learn new things for free. Today, the Linux Foundation announces the totally free ‘Intro to Cloud Infrastructure Technologies’ course. This will be offered through the reputable edX.org.

“Understanding cloud technologies tops the list of most important skills for any developer, sysadmin or emerging DevOps professional. LFS151, an Introduction to Cloud Infrastructure Technologies, will provide a primer on cloud computing and the use of open source software to maximize development and operations. It will cover next-generation cloud technologies like Docker, CoreOS, Kubernetes and OpenStack; it will provide an overview of software-defined storage and networking solutions; and a review of DevOps and continuous integration best practices”, says The Linux Foundation.

Anant Agarwal, edX CEO and MIT Professor explains, “The Linux Foundation’s Intro to Linux is among our most popular courses of all time. It’s clear Linux and open source software are key to a fruitful future in tech. As we see with edX itself, cloud technologies have become a part of daily life. We’re excited to see learners from around the world take advantage of this unique educational opportunity”.

While the course is totally free, you can upgrade to a verified certificate for $99. This is not necessary, but it can look better if you intend to put the course on a resume or LinkedIn. Speaking of the latter, edX even offers a way to easily import a verified certificate into your LinkedIn profile.

Even though the course is free, the instructors — Chip Childers and Neependra Khare — are actually very impressive people. Childers is a current member of The Apache Software Foundation and is a Cloud Foundry Foundation VP. Khare is an expert on Docker, having written the book ‘Docker Cookbook’.

If you are ready to sign up for this free course, you can do so here. Will you take advantage? Tell me in the comments.

Photo Credit: Syda Productions /Shutterstock


Original URL: http://feeds.betanews.com/~r/bn/~3/I2WukKNx8TY/

Original article

Show HN: Podcat – Imdb for podcasts

‘IMDB’ for Podcasts

See everyone who’s been on your favorite podcast! Check out podcasts mentioning Bobby Lee, Glynn Washington, or David Choe.

Discover

Find new podcasts through the guests they share.

Share

Send your friend a link to a specific time in an episode. They can play it right in the browser!


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/Eve1ZhIHg6k/

Original article

More code review tools

Effective code review catches bugs before they’re deployed, improves code consistency, and helps educate new developers. We’re adding new features to make code review on GitHub faster and more flexible.

Find what you’re looking for, faster

Pull requests with many changes sometimes require review from several people with different areas of expertise. If you’re a Ruby expert, for example, you might want to focus just on the Ruby code and ignore any changes made to HTML and CSS files. You can use the new files list to search by extensions like .rb, .html, etc. You can also filter by filename if you know what you’re looking for.

Find files you’re looking for, faster

More flexible review

Not all teams review code the same way. The most popular style on GitHub is reviewing all changes in a pull request at once, making the pull request the unit of change. Some teams choose to use a commit-by-commit workflow where each commit is treated as the unit of change and is isolated for review.

Page through commits easily

For teams using this style, you’ll now have access to the new commits list in the review bar which can help you quickly find the commit you want to review. We’ve also added pagination and new keyboard shortcuts to make navigating through commits in a pull request even easier. Use the ? key when viewing a pull request to view the list of keyboard shortcuts.

Use previous and next buttons to navigate through commits

View comments with more context

Every day, developers have interesting conversations and debates about code during the review process. These conversations can help new and future developers gain context quickly and better understand how and why a codebase has evolved over time. To ensure these conversations and the accompanying diffs are never lost, we’ve made it easier to get deeper context on any line comment, even if it has been outdated by newer changes.

Use the “view outdated diff” button for more context

Pick up where you left off

While code review is essential for high-quality code, it can be a long and tiring process, especially for projects with many incoming pull requests. It’s often most helpful to view just the new changes that have occurred after you have reviewed a pull request, so we’ve added a timeline indicator to help you get to those changes, faster.

New changes are now marked in the timeline

Today’s changes are all about making code review on GitHub faster and more flexible for you and your teams. Check out the documentation for more information on working with pull requests. As always, get in touch with us for any questions or feedback. We’d love to hear how we can make code review even better.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/lzWj5P1rkMI/2123-more-code-review-tools

Original article

The Complete Guide to HTTP/2 with HAProxy and Nginx

TL;DR

Skip to the Configuration section if you want to skip the installation process and are only interested in the configuration details.

Why should I care about HTTP/2?

There are already many articles out there about HTTP/2 and its benefits – and I encourage you to read them. I’ll focus on the points that are most important from my point of view.

Key benefits of HTTP/2:

  • It is binary (not textual like HTTP/1.1) and uses header compression. No more worries about header and cookie size.
  • It is fully multiplexed and can use one connection for parallelism. Your site performs much better if it includes plenty of resources (fonts, CSS, JS, image files), because they are all loaded over a single TCP connection in a non-blocking manner. Domain sharding and asset concatenation become anti-patterns. In short: your website loads much faster.
  • It allows the server to push responses proactively into client caches (no support for that feature in Nginx yet).
  • It uses the new ALPN extension, which allows for faster encrypted connections: the encryption protocol is determined during the initial connection.

Can I use it today?

Yes, you can and you should. As you can see on the Can I Use service, all modern browsers now support HTTP/2, including IE11 and Edge. The only exceptions are in the mobile world, with Opera Mini and the Android Browser not supporting it.

Moreover, the configuration described below ensures that clients that do not support HTTP/2 will fall back to HTTP/1.1. This is very important: your website should remain accessible to older browsers and search engine bots.

Setup

I’ll use CentOS 7 for my setup, but you can easily adjust all code snippets for other Linux distributions.

You will need:

  1. A site running over SSL. You can use a dummy certificate if you do not have one (SIMPLE).
  2. Nginx 1.9.5 or newer (SIMPLE).
  3. HAProxy 1.6 or newer with OpenSSL 1.0.2 (TRICKY).
  4. Good HAProxy and Nginx config (SIMPLE).
  5. Some way to determine whether you are using HTTP/2. The HTTP/2 and SPDY indicator extension is a good one for the Chrome browser.

The OpenSSL part is a bit more tricky only because most Linux distributions come with OpenSSL 1.0.1 (or older), which doesn’t support ALPN (Application Layer Protocol Negotiation). The ALPN extension allows the application layer to negotiate which protocol will be used in the connection, and it is essential if we want to support HTTP/2 and HTTP/1.1 on the same TCP port. Besides, HTTP/2 in HAProxy is only supported via ALPN, so it is a must on our list.

If you are familiar with the installation process, skip it and move to the configuration section.

1. Obtain SSL certificates

You can obtain trusted certificates cheaply from ssl2buy.com, which is a reseller for many trusted issuers. I have bought a couple of certificates from them and can recommend their service and customer support. You can get an AlphaSSL certificate for under $20.

If you need to generate dummy certificates for HAProxy and/or Nginx, you can use the following commands:
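
The exact commands did not survive in this copy of the article; a minimal sketch using openssl self-signed certificates (the file paths and the /CN=localhost subject are placeholders of mine, and HAProxy wants the certificate and key concatenated into one PEM file):

    # Self-signed certificate and key for Nginx
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -subj "/CN=localhost" \
        -keyout /etc/nginx/ssl/dummy.key -out /etc/nginx/ssl/dummy.crt

    # HAProxy expects certificate + key in a single PEM file
    cat /etc/nginx/ssl/dummy.crt /etc/nginx/ssl/dummy.key > /etc/haproxy/ssl/dummy.pem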

We use the certificates and keys generated by the above commands in the configs below.

2. Nginx setup

Installing Nginx 1.9 on CentOS 7 is fairly simple. The only thing to remember is to use the mainline YUM repository, not the stable one. As described on the Nginx.org page, put the yum repo config in /etc/yum.repos.d/nginx.repo and run yum install:
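
The snippet is missing from this copy; roughly, /etc/yum.repos.d/nginx.repo should look like the sketch below (the mainline baseurl is taken from the nginx.org packaging docs; double-check it for your OS version):

    [nginx]
    name=nginx mainline repo
    baseurl=http://nginx.org/packages/mainline/centos/7/$basearch/
    gpgcheck=0
    enabled=1

and then install the package:

    yum install -y nginx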

That’s it. 

Let’s create an Nginx vhost.conf to make sure our Nginx works as expected and has HTTP/2 enabled. Here’s a simple vhost config:
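
The config itself was lost in this extract; a minimal sketch consistent with the notes below (server_name, the docroot and the certificate paths are placeholders of mine; the plain listen 80 is there only for the HTTP/1.1 backend described later):

    server {
        listen      443 default_server ssl http2;
        listen      81  default_server http2 proxy_protocol;
        listen      80  default_server;

        server_name example.com;

        ssl_certificate     /etc/nginx/ssl/dummy.crt;
        ssl_certificate_key /etc/nginx/ssl/dummy.key;

        root /usr/share/nginx/html;
    }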

  • Note 1: The key line is listen 443 default_server ssl http2. That is essentially what gives you HTTP/2.
  • Note 2: Ignore the listen 81 line in this config for now – we’ll come back to it soon.
  • Note 3: I use the standard 80/443 ports because I run this example inside a Docker image, so they do not conflict with any ports on my host machine. If needed, adjust them to your case.
  • Note 4: I use the dummy.crt and dummy.key generated in the Obtain SSL certificates step.

Now, when you connect to it over https://, the HTTP/2 indicator should tell you that the site is running on the HTTP/2 protocol.

Congratulations, you have working Nginx with HTTP/2!

3. OpenSSL and HAProxy installation

This part is a bit more tricky. We need to compile OpenSSL 1.0.2 from source (because it is not available in the yum repository yet) and then re-compile HAProxy against it.

The way it worked for me was to build OpenSSL with the no-shared param and to link OpenSSL statically against HAProxy. I followed the instructions from the official HAProxy README. Funnily enough, that was the last place I looked after trying different approaches… and the most resourceful. How often do you read those long and often boring README files?
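
In rough outline, it went like the sketch below (version numbers, install prefix and the linux2628 build target are assumptions based on what was current at the time; the authoritative recipe is the HAProxy README itself):

    # Build a static OpenSSL 1.0.2 into its own prefix
    cd /usr/src/openssl-1.0.2g
    ./config no-shared --prefix=/opt/openssl-1.0.2
    make depend && make && make install

    # Build HAProxy 1.6 against that OpenSSL
    cd /usr/src/haproxy-1.6.4
    make TARGET=linux2628 USE_OPENSSL=1 \
         SSL_INC=/opt/openssl-1.0.2/include SSL_LIB=/opt/openssl-1.0.2/lib \
         ADDLIB=-ldl
    make install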

After that, you should have HAProxy compiled and installed. Test it with:
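
For example, a quick sanity check (the exact output wording varies between builds):

    haproxy -vv | grep -i openssl
    # should report something like: Built with OpenSSL version : OpenSSL 1.0.2g ...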

4. Configuration

This is the complete /etc/haproxy/haproxy.cfg we will use:
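
The full file is missing from this copy of the article; a minimal sketch consistent with the lines discussed below (the frontend name, the global/defaults tuning, the certificate path and the port 80 target of the HTTP/1.1 backend are placeholders of mine):

    global
        tune.ssl.default-dh-param 2048

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend https-in
        bind *:443 ssl crt /etc/haproxy/ssl/dummy.pem alpn h2,http/1.1
        use_backend nodes-http2 if { ssl_fc_alpn -i h2 }
        default_backend nodes-http

    backend nodes-http
        server node1 web.server:80 check

    backend nodes-http2
        server node1 web.server:81 check send-proxy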

The most essential bits are the frontend and backend definitions.

In the frontend section we define the HTTPS entry point that clients connect to, with HAProxy listening on port 443.

Connections are handled by the backends nodes-http2 and nodes-http, depending on whether the client supports HTTP/2 or not. Note that we terminate (i.e., decrypt) SSL on HAProxy with this config; connections to the backend servers are unencrypted. Our backend server is reachable from HAProxy under the web.server hostname (which is our running Nginx, as described above).

Note the bind *:443 line with alpn h2,http/1.1, where we advertise the two protocols supported for clients: HTTP/2 and HTTP/1.1. This way, browsers which do not yet support HTTP/2 are still able to connect to our website.

The line use_backend nodes-http2 if { ssl_fc_alpn -i h2 } routes clients supporting HTTP/2 to the nodes-http2 backend; the rest are handled by nodes-http, which uses the old HTTP/1.1 protocol. This point is quite important, as you want backward compatibility for clients that do not support HTTP/2 yet.

Then we have the line:
server node1 web.server:81 check send-proxy
At this point, HAProxy is talking the HTTP/2 protocol only, and it is connecting to web.server on the unusual port 81. What pleasant surprises await us there?

Let’s look again at the Nginx vhost config from the setup above.

Line:
listen      81  default_server http2 proxy_protocol;
defines a server on port 81 which talks HTTP/2 as well. Note that we cannot use the server on port 443, which talks SSL: our SSL connection gets decrypted by HAProxy, and from then on we have a non-encrypted connection. Therefore we need a separate server on its own port (81 in our case) which speaks plain HTTP/2 only, without SSL.

Small digression: there’s also the proxy_protocol keyword there. Its equivalent in haproxy.cfg is send-proxy, on the backend server configuration. The PROXY protocol is a separate story, well explained in this article on Nginx.com. In short, it lets HAProxy pass the client’s IP address and port numbers on to the backend server (and thus to your application), which is usually very desirable.
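
On the Nginx side, for example, the client address carried by the PROXY protocol is available as $proxy_protocol_addr and can be fed into the realip module or logged (a sketch of mine, not part of the original config; adjust the trusted address range to wherever your HAProxy runs):

    # in the http {} context
    set_real_ip_from  172.17.0.0/16;      # network HAProxy connects from (assumption)
    real_ip_header    proxy_protocol;     # take the real client IP from the PROXY protocol header

    log_format proxied '$proxy_protocol_addr - [$time_local] "$request" $status';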

You can run HAProxy with the above config using:

haproxy -f /etc/haproxy/haproxy.cfg

Now you should be able to connect to your proxy host (e.g. https://localhost:443/) and see it working over HTTP/2. If you are testing in Firefox, check the network inspector headers; you should see something like X-Firefox-Spdy: “h2”.

Docker images

If you already live on Docker island, you can use our MILLION12 images. Here at MILLION12 we started using Docker long before its stable 1.0 release, and since then we’ve had the pleasure of building a couple of useful images, incl. million12/haproxy and million12/nginx, which we use in this example. They already contain the configuration discussed above.

You can launch the whole stack using the following docker-compose.yml file. Note that we link the Nginx server under the name web.server inside the haproxy container, which is the hostname used in the haproxy.cfg presented above.
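
The file itself is not included in this copy; a sketch in the compose v1 format of that era (the 8443:443 port mapping matches the URL used below, while the volume mounts are assumptions of mine — check the million12 image docs for how they actually expect the configs to be supplied):

    haproxy:
      image: million12/haproxy
      ports:
        - "8443:443"
      links:
        - nginx:web.server
      volumes:
        - ./haproxy.cfg:/etc/haproxy/haproxy.cfg

    nginx:
      image: million12/nginx
      volumes:
        - ./vhost.conf:/etc/nginx/conf.d/vhost.conf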

Connect to https://haproxy:8443 and you should see something like this (note the blue HTTP/2 indicator):

If you want to see a real production project running with these Docker images and the above configuration, check out https://PrototypeBrewery.io. Prototype Brewery is our product, a prototyping tool for planning and building interactive and responsive web projects. Check it out, we are already on HTTP/2 (and don’t forget to sign up).

Summary

As you can see, migrating to HTTP/2 is a fairly simple process which you can put in place today. There’s no reason to wait, as the majority of browsers support it. Moreover, with the fallback to HTTP/1.1 you are on the safe side.

If you think I missed something here or anything can be improved, please write in the comments below.

About the author

Marcin Ryzycki is co-founder at Prototype Brewery. He dreams of building successful startups, likes martial arts and has spent the last 15 years creating more or less awesome websites and web applications.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/XDQ8z-_mVf0/http-2-with-haproxy-and-nginx-guide

Original article

AWS Database Migration Service

Do you currently store relational data in an on-premises Oracle, SQL Server, MySQL, MariaDB, or PostgreSQL database? Would you like to move it to the AWS cloud with virtually no downtime so that you can take advantage of the scale, operational efficiency, and the multitude of data storage options that are available to you?

If so, the new AWS Database Migration Service (DMS) is for you! Since it was first announced last fall at AWS re:Invent, our customers have already used it to migrate over 1,000 on-premises databases to AWS. You can move live, terabyte-scale databases to the cloud, with options to stick with your existing database platform or to upgrade to a new one that better matches your requirements. If you are migrating to a new database platform as part of your move to the cloud, the AWS Schema Conversion Tool will convert your schemas and stored procedures for use on the new platform.

The AWS Database Migration Service works by setting up and then managing a replication instance on AWS. This instance unloads data from the source database and loads it into the destination database, and can be used for a one-time migration followed by on-going replication to support a migration that entails minimal downtime.  Along the way DMS handles many of the complex details associated with migration, including data type transformation and conversion from one database platform to another (Oracle to Aurora, for example). The service also monitors the replication and the health of the instance, notifies you if something goes wrong, and automatically provisions a replacement instance if necessary.

The service supports many different migration scenarios and networking options. One of the endpoints must always be in AWS; the other can be on-premises, running on an EC2 instance, or running on an RDS database instance. The source and destination can reside within the same Virtual Private Cloud (VPC) or in two separate VPCs (if you are migrating from one cloud database to another). You can connect to an on-premises database via the public Internet or via AWS Direct Connect.

Migrating a Database
You can set up your first migration with a couple of clicks! You simply create the target database, migrate the database schema, set up the data replication process, and initiate the migration. After the target database has caught up with the source, you simply switch to using it in your production environment.

I start by opening up the AWS Database Migration Service Console (in the Database section of the AWS Management Console as DMS) and clicking on Create migration.

The Console provides me with an overview of the migration process:

I click on Next and provide the parameters that are needed to create my replication instance:

For this blog post, I selected one of my existing VPCs and unchecked Publicly accessible. My colleagues had already set me up with an EC2 instance to represent my “on-premises” database.

After the replication instance has been created, I specify my source and target database endpoints and then click on Run test to make sure that the endpoints are accessible (truth be told, I spent some time adjusting my security groups in order to make the tests pass):

Now I create the actual migration task. I can (per the Migration type) migrate existing data, migrate and then replicate, or replicate going forward:

I could have clicked on Task Settings to set some other options (LOBs are Large Objects):

The migration task is ready, and will begin as soon as I select it and click on Start/Resume:

I can watch for progress, and then inspect the Table statistics to see what happened (these were test tables and the results are not very exciting):

At this point I would do some sanity checks and then point my application to the new endpoint. I could also have chosen to perform an ongoing replication.

The AWS Database Migration Service offers many options and I have barely scratched the surface. You can, for example, choose to migrate only certain tables. You can also create several different types of replication tasks and activate them at different times.  I highly recommend you read the DMS documentation as it does a great job of guiding you through your first migration.

If you need to migrate a collection of databases, you can automate your work using the AWS Command Line Interface (CLI) or the Database Migration Service API.
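
For example, a scripted migration might look roughly like the sketch below (identifiers, ARNs, hostnames and the table-mappings file are placeholders; see the CLI reference for the full set of options):

    # Create a replication instance (class and storage are illustrative)
    aws dms create-replication-instance \
        --replication-instance-identifier my-dms-instance \
        --replication-instance-class dms.t2.medium \
        --allocated-storage 50

    # Define a source endpoint (repeat with --endpoint-type target for the target)
    aws dms create-endpoint \
        --endpoint-identifier my-source --endpoint-type source --engine-name mysql \
        --server-name onprem.example.com --port 3306 \
        --username admin --password '...' --database-name mydb

    # Create a task that migrates existing data and then replicates ongoing changes
    aws dms create-replication-task \
        --replication-task-identifier my-task \
        --source-endpoint-arn <source-arn> --target-endpoint-arn <target-arn> \
        --replication-instance-arn <instance-arn> \
        --migration-type full-load-and-cdc \
        --table-mappings file://table-mappings.json

    # Start it
    aws dms start-replication-task \
        --replication-task-arn <task-arn> \
        --start-replication-task-type start-replication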

Price and Availability
The AWS Database Migration Service is available in the US East (Northern Virginia), US West (Oregon), US West (Northern California), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore),  and Asia Pacific (Sydney) Regions and you can start using it today (we plan to add support for other Regions in the coming months).


Jeff;

 


Original URL: http://feedproxy.google.com/~r/AmazonWebServicesBlog/~3/6HAMPEEkNXI/

Original article

