Two Factor Belt and Suspenders

Weak passwords are out. Strong passwords are in, but even they may not be enough to protect you. When you use dual- or two-factor authentication, you add a hurdle for anyone attempting to gain unauthorized access to your law practice information. It doesn’t involve your finger or your face; those are password replacements, and not necessarily better ones. Instead, you supplement your username and password with a one-time code.

You already use two factor authentication in other parts of your life. Probably the most common is the PIN and cash card. You have to have both the card, inserted in a machine, and your PIN to complete a transaction. If someone steals your card, or knows your PIN, they have only half the information they need. You can replicate this with your online accounts.

Who Supports Two Factor Authentication

This isn’t a new topic and has been touched on elsewhere on Slaw. An obstacle to two-factor use is that not all services support it. To find out which do, look at the list at TwoFactorAuth. It is not comprehensive, but it covers many of the most popular business-oriented cloud services. As the screenshot below indicates, it shows which services support two-factor authentication and which methods each service offers.

The TwoFactorAuth list of supported services

If you are at the planning stages of moving your law practice information onto cloud-based services, this may be a decision point you add to the mix. There are many options, and making two-factor support a requirement of cloud use is a reasonable protection for your clients’ confidential and private information.

One if by Text, Two if by App

The way two factor authentication on the Web tends to work is this:

  • You visit the Web site (or log in to the app) and type in your username;
  • You type in your password;
  • Once you are authenticated, you see another box that looks like it could take a password. Instead, you type a time-sensitive code in it.

You can choose how to get that code. As the list at TwoFactorAuth shows, you might have it texted to you (sent by SMS). You configure this setting in your cloud-based service, if it’s available. Let’s use Dropbox as an example. When you access your account’s security page and turn on two factor authentication, you can choose to have the codes texted to you or you can use an app.

Once you’ve logged into Dropbox with your username and password, when you get to the two-factor code step, Dropbox will text you a code. You have a limited amount of time to type it into the Web page. If you do, you will be able to access your Dropbox files.

Alternatively, you can use an app. Some services will have their own apps; Microsoft Account (Android) is a good example. Other services can be used with more generic two factor apps, like Google’s Authenticator (Android | iOS) or Authy (Android | iOS).

I prefer an app because I do not always have my phone on, or I may be somewhere with poor wireless coverage. When I need a code, I open the Google Authenticator app on my phone and it shows me codes for my accounts. Next to each is a small circle that shows how much time is left before a new code appears.
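
Those rotating codes are time-based one-time passwords (TOTP, RFC 6238): the app and the service share a secret, and each side derives a short code from that secret and the current 30-second time window. Here is a minimal sketch in Ruby; the function names and the sample secret are mine for illustration only, and in a real app the secret would be stored Base32-encoded and decoded to raw bytes first.

require "openssl"

# HOTP (RFC 4226): HMAC the moving counter with the shared secret,
# then dynamically truncate the digest to a short decimal code.
def hotp(secret_bytes, counter, digits = 6)
  msg    = [counter].pack("Q>")                               # 8-byte big-endian counter
  hmac   = OpenSSL::HMAC.digest(OpenSSL::Digest.new("SHA1"), secret_bytes, msg)
  offset = hmac[-1].ord & 0x0f                                # dynamic truncation offset
  code   = hmac[offset, 4].unpack1("N") & 0x7fffffff
  format("%0#{digits}d", code % (10**digits))
end

# TOTP (RFC 6238): the counter is simply the current 30-second window.
def totp(secret_bytes, time = Time.now.to_i, step = 30)
  hotp(secret_bytes, time / step)
end

puts totp("example shared secret")   # app and service compute the same 6 digits

The service runs the same computation when you submit the code, typically accepting a window or so of clock drift on either side, which is why each code is only valid for a short time.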

Google Authenticator codes with countdown timers

If you use a mobile device that cannot receive text messages, like many tablets, an authenticator app that generates its own time-based codes is a must. Regardless of whether you receive the codes over SMS or generate them on the device, you need to protect that device. Once you set up your phone for two-factor authentication, be sure you are locking it with a PIN or password. These codes are only as safe as your phone is.

No Need to Be Mobile

My own reality is that most of the services I use two-factor authentication on are ones I access from my desktop PC. This means I’m sitting down and working and then need to find a phone to get a code. The easiest way to avoid this is to have the cloud service remember that computer. That way, it stops asking you for codes on your primary computer and only prompts you when you attempt to access files from somewhere unusual.

Don’t do it. If your information is important enough to need two-factor authentication, then take the couple of seconds required to input the code each time. Your computer might be stolen, or someone might get remote access to it, and you will have traded away that security for a bit of convenience.

Windows users can use an open source app called Winauth to avoid a mobile device. You set up each site you want to use two-factor on – Google and Microsoft are built-in but you can add others – and the app runs on your computer. I like it for a couple of reasons. First, I don’t need my phone or mobile device. Second, unlike my mobile app, it requires a password before you can get to your codes. You only have to unlock it once but it’s a nice feature.

The WinAuth authenticator menu

While Winauth will run on a Mac under Boot Camp, you might as well use a phone in that case, since it doesn’t work until you’ve logged into your PC. Mac users can use SAASPASS to protect even their operating system password. It relies on your phone, but you can set it up so that when your phone is near your Mac it shows a remote unlock button that logs you in securely.

Two Factor without Passwords

Social logins are already common: you visit a site and can log in using your Google or Facebook account, even though it’s not a Google or Facebook site. A different spin on that is Unloq. It eliminates passwords and authenticates you based solely on your response to an e-mail or text message. It’s intended to be used by sites as either a primary login or a second factor.

Google has recently announced Smart Lock, a password manager that’s integrated into its Chrome browser and other services. It is also working on Project Abacus, a so-called “multi-modal” alternative to passwords. Beyond biometrics (a fingerprint or face print can already serve as a second factor on some systems), the project apparently looks at how you use the device in addition to biometric attributes. A composite view of who you are, built from how you use your device, may let you cut out passwords entirely in the future.

Two-factor authentication isn’t a must. For some sites, a strong password is sufficient protection. However, if you have the option to use two-factor authentication on services that store private and confidential client information, take advantage of it. The more obstacles to unauthorized and unintended access to your practice information, the better.


Original URL: http://www.slaw.ca/2015/06/29/two-factor-belt-and-suspenders/


Potential-happiness: A Riemann and Elasticsearch dashboard for the terminal




Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/qqPzeHZ-Ei4/potential-happiness


Show HN: Open source alternative to LaunchRock



This is a quick application to get up and running with your new startup idea so you can focus on your actual product. It is a prelaunch MVP landing page aimed at gathering signups and testing market interest. It was originally written as an open source alternative to LaunchRock, and it is written with Ruby on Rails. We originally needed an application that provided signup for two types of users in a two-sided market. It’s ready to go out of the box. Just add styling. Fork and enjoy!

It may still have a bit of our content, but it wouldn’t take you long to change it to fit your needs. Just a heads up.

Example

Here is an example of the launchpage once it’s all styled/designed (although, both the project and design are old): Backstagr

Features

  1. Email collection for two types of users

  2. Social sharing

  3. Auto mailer

  4. Ability to export user emails via CSV

    Coming soon

  5. Post-signup survey and questionnaire to gather more market research from your beta users.

  6. Waiting list social actions (e.g. move up the list if you share with 3 friends, or something along those lines)

Get it running

Items you should change to customise it for your needs (barring the obvious; I’m not listing those. You’ll see the title, etc.):

  1. The .gitignore includes the mail initializer. Here is the layout for SMTP through Google; just fill in your own information:
require 'development_mail_interceptor'

ActionMailer::Base.smtp_settings = {
    :address              => "smtp.gmail.com",
    :port                 => 587,
    :domain               => "mydomain",
    :user_name            => "myuser@mydomain.com",
    :password             => "mypassword",
    :authentication       => "plain",
    :enable_starttls_auto => true
}

ActionMailer::Base.default_url_options[:host] = "localhost:3000"
ActionMailer::Base.register_interceptor(DevelopmentMailInterceptor) if Rails.env.development?
  • Change the email in lib/development_mail_interceptor.rb to your own so that when you’re running the app in development the test emails get sent to your email address (a minimal sketch of this interceptor appears after this list).
  2. You’ll want to go into app/views/static/success as well as app/views/layouts/_twitterscript and app/views/layouts/_facebookscript and change the details of the social plugins to match your domain/Twitter/Facebook. It’s easy to add HN, Reddit, etc.

  3. All the normal Rails steps to start up an app. I’m only calling out the items that need changing that aren’t so obvious.
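
For reference, the interceptor mentioned in the list above usually follows the standard ActionMailer interceptor pattern: a small class that reroutes every outgoing message to your own inbox while in development. This is a sketch only; the repo’s actual lib/development_mail_interceptor.rb may differ, and the address is a placeholder.

# lib/development_mail_interceptor.rb (sketch; adjust to match the repo's version)
class DevelopmentMailInterceptor
  def self.delivering_email(message)
    # Keep the real recipient visible in the subject for debugging,
    # then reroute the message to the developer's own inbox.
    message.subject = "[DEV to: #{message.to}] #{message.subject}"
    message.to      = "you@example.com" # change this to your email address
  end
end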

Contributing

  1. Fork the repo and clone it.

  2. Make your changes in a new git branch:

    git checkout -b my-fix-branch master

  3. Create your patch, including appropriate test cases, and make sure they pass.

  4. Push your branch to GitHub:

    git push origin my-fix-branch

  5. In GitHub, send a pull request to launchpage-rails:master

Contributors

A really big thanks to kaiomagalhaes for updating this to Rails 4 and improving some very old code.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/3wC82FC6grQ/launchpage-rails


Atom 1.0

Today we’re proud to announce Atom 1.0. It’s amazing to think Atom has only been out and available to the public for a little over a year. A lot has happened since then. Atom has been downloaded 1.3 million times, and serves 350,000 monthly active users. The community has created 660 themes, and 2,090 packages including can’t-live-without packages that have their own mini communities like the linter, autocomplete-plus, and minimap.

In the 155 releases since launch, the editor has improved immensely in performance, stability, feature-set, and modularity. The editor is faster in scrolling, typing, and start-up time. Atom now has a Windows installer, Linux packages, and several heavily requested features have been added like pane resizing and multi-folder projects.

Atom has become more modular through stabilizing the API, built-in ES6 support using babel, services for inter-package communication, decorations for extending the core editor, and new themes that automatically adapt the UI to the syntax colors. We’ve even removed some of our core packages in favor of community-built packages like autocomplete-plus.

To make using Atom easier, we now have extensive API docs, a flight manual, and a tutorial video on setting up Atom.

Humble Beginnings

Atomicity initial build

Atom started as a side project of GitHub founder @defunkt (Chris Wanstrath) way back in mid 2008, almost exactly seven years ago. He called it Atomicity. His dream was to use web technologies to build something as customizable as Emacs and give a new generation of developers total control over their editor.

But as is the fate of many side projects, it was put on hold to focus on his main gig—GitHub.com. It was the beginning of 2009, GitHub.com had just launched eight months earlier, and it was looking like it might be successful. As he set Atomicity aside, @defunkt figured someone else would release a desktop editor based on web-technologies.

Then no one did.

In-browser editors like Cloud9 started popping up, and with them came open source JavaScript editors. In August 2011, GitHub included Ace into the github.com website for editing files. This re-ignited @defunkt’s interest in Atomicity, and three days later he had an OS X app with Ace running in a native WebView control. That was the beginning of the Atom you know today.

Atomicity with Ace running in a WebView control

Between August and November 2011, @defunkt and @probablycorey worked on Atomicity in their free time. By November, Atomicity became Atom, and Atom was upgraded to be an official GitHub project. Then in December @nathansobo, author of treetop, a Ruby parsing DSL, and generally excited about text editors, joined GitHub to work on Atom full time.

The rest is history woven into a narrative by the atom/atom git history and contributor graphs.

Atom's beginnings

Today

We’re happy to say that Atom 1.0 today reflects @defunkt’s original vision—to give today’s developers total control over their editor with familiar technologies.

The realization of this vision as Atom 1.0 is the foundation that will take us into the future. It is the technology stack, with the power and familiarity of the web platform combined with node and all it has to offer; it’s the stable API and atom core, which have been shaped by hundreds of contributors; and most of all, it’s you, the community.

Thanks to you, we have hurdled significant technical challenges. Because of your packages, Atom has feature breadth that we couldn’t have come close to achieving on our own. You’ve taken on major features like the linter, grown thriving sub-communities with autocomplete-plus, and taken on entire language niches with go-plus, atom-typescript, and omnisharp-atom.

Until now, work has largely gone into defining the 1.0 foundation. Now that the foundation is stable, we can shift our efforts to reaching the full potential of the platform.

Of course, we’ll continue to polish the core user-experience, improve performance and stability, and add international support, but realizing the full potential of Atom is about more than polish. We’re considering questions such as: What does super deep git integration look like? What does “social coding” mean in a text editor? How do we enable package authors to build IDE-level features for their favorite language?

We can’t wait to show you what’s next. Atom 1.0 is only the beginning.

Atom 1.0 screenshot


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/-W7iQTglfK4/atom-1-0.html


New Google and CMU Moonshot: the ‘Teacherless Classroom’

theodp writes: At the behest of Google, Carnegie Mellon University will largely replace formal lectures in a popular introductory Data Structures and Algorithms course this fall with videos and a social networking tool to accommodate more students. The idea behind the multi-year research project sponsored by Google — CMU will receive $200,000 in the project’s first year — is to find a way to leverage existing faculty to meet a growing demand for computer science courses, while also expanding the opportunities for underrepresented minorities, high school students and community college students, explained Jacobo Carrasquel, associate teaching professor of CS. “As we teach a wider diversity of students, with different backgrounds, we can no longer teach to ‘the middle,'” Carrasquel said. “When you do that, you’re not aiming at the 20 percent of the top students or the 20 percent at the bottom.” The move to a “teacherless classroom” for CS students at CMU [tuition $48K] comes on the heels of another Google CS Capacity Award-inspired move at Stanford [tuition $45K], where Pair Programming was adopted in a popular introductory CS class to “reduce the increasingly demanding workload for section leaders due to high enrollment and also help students to develop important collaboration skills.”




Original URL: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/i63pVrdy1xU/new-google-and-cmu-moonshot-the-teacherless-classroom


Tell HN: Commercial VPN service now in open source

Hi folks!

Let me introduce myself.
I am the CEO/CTO/CIO/etc. of the Russian VPN service SmartVPN.biz.

A couple of years ago I got the idea to create my own startup – a VPN service.
It was quite a sudden idea: a couple of friends had asked me to give them access to my personal VPN server.
Of course I did not study the market or check how many such services already existed. I just started coding.
A little later I had a working prototype and pushed it to production.

There were not many expenses, only my free time and $20/month for a low-end VPS.

The project was in production for a year and a half. A lot changed in the meantime.
The internet in Russia became very limited and censored.
I’m glad that I helped people bypass the stupid internet restrictions in our country.
I also experienced DDoS attacks twice; it is a really exciting feeling when you realize that your service is real and someone wants to take it down.

But time passes, and my interests and priorities have changed too. That is why I decided to shut down my startup.
I don’t want the sources to sit hidden on a hard drive, so I’ve decided to open source them. Totally.
I published them on GitHub at https://github.com/smartvpnbiz under the MIT license.
So anyone can fork it and use it however they want. You can try yourself in this hard business.

This is not an ad; I just want to help someone who may need my experience.
I’ll be glad if my service helps anyone.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/I0RwkUYHR1M/item


Sci-Hub tears down academia’s “illegal” copyright paywalls

With a net income of more than $1 billion, Elsevier is one of the largest academic publishers in the world.

The company has the rights to many academic publications where scientists publish their latest breakthroughs. Most of these journals are locked behind paywalls, which makes it impossible for less fortunate researchers to access them.

Sci-Hub.org is one of the main sites that circumvent this artificial barrier. Founded by Alexandra Elbakyan, a researcher born and educated in Kazakhstan, its main goal is to provide the less privileged with access to science and knowledge.

The service is nothing like the average pirate site. It wasn’t started to share the latest Hollywood blockbusters, but to gain access to critical knowledge that researchers require to do their work.

“When I was working on my research project, I found out that all research papers I needed for work were paywalled. I was a student in Kazakhstan at the time and our university was not subscribed to anything,” Alexandra tells TF.

After Googling for a while Alexandra stumbled upon various tools and services to bypass the paywalls. With her newly gained knowledge, she then started participating in online forums where other researchers requested papers.

When she noticed how grateful others were for the papers she shared, Alexandra decided to automate the process by developing software that could allow anyone to search for and access papers. That’s when Sci-Hub was born, back in 2011.

“The software immediately became popular among Russian researchers. There was no big idea behind the project, like ‘make all information free’ or something like that. We just needed to read all these papers to do our research,” Alexandra says.

“Now, the goal is to collect all research papers ever published, and make them free,” she adds.

Of course Alexandra knew that the website could lead to legal trouble. In that regard, the lawsuit filed by Elsevier doesn’t come as a surprise. However, she is more than willing to fight for the right to access knowledge, as others did before her.

“Thanks to Elsevier’s lawsuit, I got past the point of no return. At this time I either have to prove we have the full right to do this or risk being executed like other ‘pirates’,” she says, naming Aaron Swartz as an example.

“If Elsevier manages to shut down our projects or force them into the darknet, that will demonstrate an important idea: that the public does not have the right to knowledge. We have to win over Elsevier and other publishers and show that what these commercial companies are doing is fundamentally wrong.”

The idea that a commercial outfit can exploit the work of researchers, who themselves are often not paid for their contributions, and hide it from large parts of the academic world, is something she does not accept.

“Everyone should have access to knowledge regardless of their income or affiliation. And that’s absolutely legal. Also the idea that knowledge can be a private property of some commercial company sounds absolutely weird to me.”

Most research institutions in Russia, in developing countries and even in the U.S. and Europe can’t afford expensive subscriptions. This means that they can’t access crucial research, including biomedical research such as cancer studies.

Elsevier’s ScienceDirect paywall

So aside from the public at large, Sci-Hub is also an essential tool for academics. In fact, some researchers use the site to access their own publications, because these are also locked behind a paywall.

“The funniest thing I was told multiple times by researchers is that they have to download their own published articles from Sci-Hub. Even authors do not have access to their own work,” Alexandra says.

Instead of seeing herself as the offender, Alexandra believes that the major academic publishers are the ones who are wrong.

“I think Elsevier’s business model is itself illegal,” she says, pointing to Article 27 of the UN’s Universal Declaration of Human Rights, which reads that “everyone has the right freely to participate in the cultural life of the community, to enjoy the arts and to share in scientific advancement and its benefits.”

The paywalls of Elsevier and other publishers violate this right, she believes. The same Article 27 also allows authors to protect their works, but the publishers are not the ‘authors’; they merely exploit the copyrights.

Alexandra insists that her website is legal and hopes that future changes in copyright law will reflect this. As for the Elsevier lawsuit, she’s not afraid to fight for her rights and already offers a public confession right here.

“I developed the Sci-Hub.org website where anyone can download paywalled research papers by request. Also I uploaded at least half of more than 41 million paywalled papers to the LibGen database and worked actively to create mirrors of it.

“I am not afraid to say this, because when you do the right thing, why should you hide it?” she concludes.

Note: Sci-Hub is temporarily using the sci-hub.club domain name. The .org will be operational again next week.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/AxQ4-h8lhUU/


MySQL performance optimization: 50% more work with 60% less latency variance

When I joined Pinterest, my first three weeks were spent in Base Camp, where the newest engineering hires work on real production issues across the entire software stack. In Base Camp, we learn how Pinterest is built by building it, and it’s not uncommon to be pushing code and making meaningful contributions within just a few days. At Pinterest, newly hired engineers have the flexibility to choose which team they’ll join, and working on different parts of the code as part of the Base Camp experience can help with this decision. Base Campers typically work on a variety of tasks, but my project was a deep dive into a MySQL performance optimization project.

Pinterest, MySQL and AWS, oh my!

We work with MySQL running entirely inside Amazon Web Services (AWS). Despite using fairly high-powered instance types with RAID-0 SSDs and a fairly simple workload (many point selects by PK or simple ranges) that peaks around 2,000 QPS, we had been unable to realize anywhere near the expected IO performance levels. Exceeding roughly 800 write IOPS would lead to unacceptable increases in latency and replication lag, and this replication lag or insufficient read performance on the slaves would slow down ETL and batch jobs, which would then have downstream impact on any other team relying on those jobs. The only options available were either to go to an even larger instance size, thus doubling our cost and obliterating our efficiency, or find ways to make the existing systems perform better.

I took over the project from my colleague, Rob Wultsch, who had already made the significant discovery that Linux kernel version appears to be quite important when running on SSD inside AWS. The default 3.2 kernel that ships with Ubuntu 12.04 just doesn’t cut it, nor does the 3.8 kernel that AWS recommends as a minimum version (although it’s still more than twice as fast as 3.2). Running sysbench on an i2.2xlarge (2 disk RAID-0 of SSDs) instance with kernel 3.2 could barely hit 100MB/sec of 16K random writes. Upgrading the kernel to 3.8 got us to 350MB/sec with the same test, but this was still much lower than expected. Seeing this kind of improvement from such a simple change opened up many new questions and hypotheses about other inefficiencies and poor configuration options: Could we get better performance from an even newer kernel? Should we change other settings at the OS level? Are there optimizations to be found in my.cnf? How can we make MySQL go faster?

In pursuit of answers, I set up almost 60 different sysbench fileIO test configurations with different kernels, filesystems, mount options and RAID block sizes. Once the best fit configuration was chosen from these experiments, I ran another 20 or so sysbench OLTP runs with other system permutations. The basic test methodology was identical across all trials: run the test for an hour collecting metrics at one second intervals, then drop the first 600 seconds to account for cache warm-up time and process the remainder. After the optimal configuration had been identified, we rebuilt our largest and most critical servers and rolled out the changes into production.
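
As a rough illustration of that post-processing step (this is a sketch under assumed input, not the actual tooling used here: it expects one per-second latency value in milliseconds per line), discarding the warm-up period and reporting a p99 might look like this:

# Sketch: summarize per-second latency samples (ms), one value per line,
# after discarding the first 600 seconds of cache warm-up.
WARMUP_SECONDS = 600

def p99(values)
  sorted = values.sort
  sorted[(sorted.length * 0.99).ceil - 1]
end

samples = ARGF.readlines.map(&:to_f)
steady  = samples.drop(WARMUP_SECONDS)
puts format("samples: %d  p99: %.1f ms  max: %.1f ms",
            steady.length, p99(steady), steady.max)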

From 5000 QPS to 26000 QPS: scaling MySQL performance without scaling hardware

Let’s take a look at the impact of these changes on some basic sysbench OLTP tests via the p99 response times and throughput metrics at 16 and 32 threads for several different configurations.  

Here is what each of the numbers represent:

  • CURRENT: 3.2 kernel and standard MySQL configuration
  • STOCK: 3.18 kernel with standard MySQL configuration
  • KERNEL: 3.18 kernel with a few IO/memory sysctl tweaks
  • MySQL: 3.18 kernel with an optimized MySQL configuration
  • KERN + MySQL: 3.18 kernel with the tweaks from #3 and #4
  • KERN + JE: 3.18 kernel with the tweaks from #3 and jemalloc
  • MySQL + JE: 3.18 kernel with the MySQL config from #4 and jemalloc
  • ALL: 3.18 kernel with #3, #4 and jemalloc

When we enable all of the optimizations, we find we can achieve roughly 500 percent more read and write throughput at both 16 and 32 threads while simultaneously reducing p99 latency by over 500ms in both directions. On the read side, we go from approximately 4100 – 4600 QPS to just over 22000 – 25000, depending on concurrency. On the write side, we go from approximately 1000 QPS to 5100 – 6000 QPS. These are massive gains in headroom and performance achieved with just a few simple changes.

Of course, all the synthetic benchmarks in the world don’t mean much if they don’t translate into real-world results. The graph below shows latency on our primary clusters from both the client and server perspective from several days before the upgrades were pushed until several days after all the masters were upgraded. The process took just a week to complete.

The red line represents client-perceived latency, and the green represents server-measured latency. From the client perspective, p99 latency dropped from a highly-variable 15-35ms with outliers over 100ms down to a pretty flat 15ms with outliers at 80ms or less. Server-measured latency also declined from a wavy 5-15ms to a basically-flat 5ms, with a daily 18ms spike due to system maintenance. Furthermore, since the beginning of the year, our peak throughput on these clusters has increased about 50 percent, so not only are we handling considerably more load (still well under our estimated capacity), we’re doing it with much better and more predictable throughput. And, in what can only be termed good news for everyone who enjoys sleeping through the night, the number of pageable incidents related specifically to system performance or general server overload dropped from over 300 in March down to less than 10 in April and May combined.

For more information, including the before-and-after details of what our MySQL and OS configurations look like, check out the slides from “All Your IOPS Are Belong To Us,” my talk from the 2015 Percona Live MySQL Conference/Expo, and stay tuned for more insights on how we get the most out of MySQL, Redis and other storage technologies.

Ernie Souhrada is a database engineer on the SRE team, part of the Cloud Engineering team.

For Pinterest engineering news and updates, follow our engineering Pinterest, Facebook and Twitter. Interested in joining the team? Check out our Careers site.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/-Dpvphmt9aE/mysql-performance-optimization-50-more-work-with

