Write Less Code

03 Jun 2016

Not too long ago, I sat down to ‘clean up’ a project that I had inherited. I was given the reins of the refactoring effort because the project had suffered several bugs in production. It was stuck in a vicious cycle where fixing old bugs would introduce new ones. Shortly after diving into the source code, it became evident to me what the problem was: the project was a big, hairy mess. I say big because there was a lot of unnecessary, redundant and tightly coupled code. By hairy mess, I don’t mean that the code looked amateurish or was full of shortcuts. In fact, the problem was quite the opposite. There was too much magic, and everywhere I looked I saw clever and grandiose design practices that had no relationship with the actual problem the project was built to solve. Reflection, aspect-oriented programming and custom annotations were all present. The project was an over-engineered beast. To put it into perspective, after the refactoring was over, the project had been reduced to less than half of its original size.



Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/fAWU8caNBtU/


Inferno: A fast, React-like JavaScript library for building UIs



Inferno is an isomorphic library for building high-performance user interfaces, which is crucial when targeting mobile devices. Unlike typical virtual DOM libraries like React, Mithril, Cycle and Om, Inferno uses intelligent techniques to separate static and dynamic content. This allows Inferno to only “diff” renders that have dynamic values.

In addition to this, we’ve painstakingly optimized the code to ensure there is as little overhead as possible. We believe that Inferno is currently the fastest virtual DOM implementation out there – as shown by some of our benchmarks. Inferno is all about performance, whilst keeping a robust API that replicates the best features from libraries such as React.

In principle, Inferno is compatible with the standard React API, allowing painless transition from React to Inferno. Furthermore, Inferno has a Babel plugin allowing JSX syntax to transpile to optimised Inferno virtual DOM.

Key Features

  • One of the fastest front-end frameworks for rendering UI in the DOM
  • Components have a similar API to React ES2015 components with inferno-component
  • Stateless components are fully supported and have more usability thanks to Inferno’s hooks system
  • Isomorphic/universal for easy server-side rendering with inferno-server

Benchmarks

Install

Very much like React, Inferno requires the inferno and the inferno-dom packages for consumption in the browser’s DOM. Inferno also has the inferno-server package for
server-side rendering of virtual DOM to HTML strings (differing from React’s route of using react-dom/server for server-side rendering). Furthermore, rather than include the
ES2015 component with class syntax in core (like React), the component is in a separate package inferno-component to allow for better modularity.

NPM:

Core package:

npm install --save inferno

ES2015 stateful components (with lifecycle events) package:

npm install --save inferno-component 

Browser DOM rendering package:

npm install --save inferno-dom 

Helper for creating Inferno VNodes (similar to React.createElement):

npm install --save inferno-create-element 

Server-side rendering package:

npm install --save inferno-server 

Pre-bundled files for browser consumption:

http://infernojs.org/releases/0.7.8/inferno.min.js
http://infernojs.org/releases/0.7.8/inferno-create-element.min.js
http://infernojs.org/releases/0.7.8/inferno-component.min.js
http://infernojs.org/releases/0.7.8/inferno-dom.min.js
http://infernojs.org/releases/0.7.8/inferno-server.min.js

Overview

Let’s start with some code. As you can see, Inferno intentionally keeps the same good design ideas as React regarding components: one-way data flow and separation of concerns.
In these examples, JSX is used via the Inferno JSX Babel Plugin to provide a simple way to express Inferno virtual DOM.

import Inferno from 'inferno';
import InfernoDOM from 'inferno-dom';

const message = "Hello world";

InfernoDOM.render(
  <MyComponent message={ message } />,
  document.getElementById("app")
)

Furthermore, Inferno also uses ES6 components like React:

import Inferno from 'inferno';
import { Component } from 'inferno-component';
import InfernoDOM from 'inferno-dom';

class MyComponent extends Component {
  constructor(props) {
    super(props);
    this.state = {
      counter: 0
    }
  }
  render() {
    return (
      <div>
        <h1>Header!</h1>
        <span>Counter is at: { this.state.counter }</span>
      </div>
    )
  }
}

InfernoDOM.render(<MyComponent />, document.body);

The real difference between React and Inferno is the performance offered at run-time. Inferno can handle large, complex DOM models without breaking a sweat.
This is essential for low-powered devices such as tablets and phones, where users are quickly demanding desktop-like performance on their slower hardware.

Inferno Top-Level API

Inferno.createVNode

Creates an Inferno VNode object that has chainable setting methods.

import Inferno from 'inferno';
import InfernoDOM from 'inferno-dom';

InfernoDOM.render(
  Inferno.createVNode().setTag('div').setClassName('foo').setAttrs({ id: 'test' }).setChildren('Hello world!'),
  document.body
);

Inferno.createBlueprint

Creates an Inferno VNode using a predefined blueprint. Holding a reference to the blueprint allows Inferno to apply faster optimisations with little overhead.

import Inferno from 'inferno';
import InfernoDOM from 'inferno-dom';

const myBlueprint = Inferno.createBlueprint({
    tag: 'div',
    attrs: {
        id: 'foo'
    },
    children: { arg: 0 }
});

InfernoDOM.render(myBlueprint('foo'), document.body);

For each property on the object passed to createBlueprint, anything defined as { arg: X } is regarded as a dynamic value (matching the corresponding argument passed when the blueprint is called); all other properties are regarded as static.
For example: if the object is const blueprint = Inferno.createBlueprint({ tag: { arg: 0 } }), then you’d call blueprint('div'), with argument 0 (the first argument) being the tag for the VNode.

InfernoCreateElement

Creates an Inferno VNode using a similar API to that found with React’s createElement.

import InfernoDOM from 'inferno-dom';
import Component from 'inferno-component';
import createElement from 'inferno-create-element';

class BasicComponent extends Component {
    render() {
        return createElement('div', {
            className: 'basic'
        },
        createElement('span', {
            className: this.props.name
        }, 'The title is ', this.props.title));
    }
}

InfernoDOM.render(createElement(BasicComponent, { title: 'abc' }), document.body);

InfernoComponent

Stateful component:

import Component from 'inferno-component';

class MyComponent extends Component {
  render() {
    ...
  }
}

This is the base class for Inferno Components when they’re defined using ES6 classes.

Stateless component:

const MyComponent = ({ name, age }) => (
  <span>My name is: { name } and my age is: { age }</span>
);

Stateless components are first-class functions where their first argument is the props passed through from their parent.

InfernoDOM.render

import InfernoDOM from 'inferno-dom';

InfernoDOM.render(<div />, document.body);

Render a virtual node into the DOM in the supplied container given the supplied virtual DOM. If the virtual node was previously rendered into the container, this will
perform an update on it and only mutate the DOM as necessary, to reflect the latest Inferno virtual node.
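
For illustration, a small sketch of that update behaviour (the element id 'app' and the markup are made up for this example):

const container = document.getElementById('app');

// First call mounts the virtual node into the container.
InfernoDOM.render(<div>Count: 1</div>, container);

// Second call into the same container diffs against the previous render,
// so only the changed text node is mutated.
InfernoDOM.render(<div>Count: 2</div>, container);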

InfernoServer.renderToString

import InfernoServer from 'inferno-server';

InfernoServer.renderToString(<div />);

Render a virtual node into an HTML string, given the supplied virtual DOM.

Hooks

Please note: hooks are provided by inferno-dom.

Inferno supports many of the basic events on DOM nodes, such as onClick, onMouseOver and onTouchStart. Furthermore, Inferno allows you to attach
common hooks directly onto components and DOM nodes. Below is the table of all possible hooks available in inferno-dom.

  • onCreated: a DOM node has just been created. Arguments: domNode
  • onAttached: a DOM node has been attached to the document. Arguments: domNode
  • onWillDetach: a DOM node is about to be removed from the document. Arguments: domNode
  • onWillUpdate: a DOM node is about to perform any potential updates. Arguments: domNode
  • onDidUpdate: a DOM node has performed any potential updates. Arguments: domNode
  • onComponentWillMount: a stateless component is about to mount. Arguments: domNode, props
  • onComponentDidMount: a stateless component has mounted successfully. Arguments: domNode, props
  • onComponentWillUnmount: a stateless component is about to be unmounted. Arguments: domNode, props
  • onComponentShouldUpdate: a stateless component has been triggered to update. Arguments: domNode, lastProps, nextProps
  • onComponentWillUpdate: a stateless component is about to perform an update. Arguments: domNode, lastProps, nextProps
  • onComponentDidUpdate: a stateless component has performed an update. Arguments: domNode, props

Using hooks

It’s simple to implicitly assign hooks to both DOM nodes and stateless components.
Please note: stateful components (ES2015 classes) from inferno-component do not support hooks.

function createdCallback(domNode, props) {
    // [domNode] will be available for DOM nodes and components (if the component has mounted to the DOM)
    // [props] will only be passed for stateless components
}

InfernoDOM.render(<div onCreated={ createdCallback } />, document.body);

function StatelessComponent(props) {
    return <div>Hello world</div>;
}

InfernoDOM.render(<StatelessComponent onComponentWillMount={ createdCallback } />, document.body);

Hooks provide powerful lifecycle events to stateless components, allowing you to build components without being forced to use ES2015 classes.

Performance

Inferno tries to address three problems with creating UI components:

  • Writing large applications in large teams is slow in terms of development and expensive in costs – it shouldn’t be.
  • Writing complex applications generally results in poor performance on mobile/tablet/older machines – it shouldn’t.
  • Writing intensive modern UIs that require many updates/animations falls apart and becomes overly complicated – it shouldn’t be.

Writing code should be fun. Browsers are getting more advanced and the technologies being supported are growing by the week. It’s about
time a framework offered more fun without compromising performance.

JSX

Inferno has its own JSX Babel plugin.
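
A minimal setup sketch, assuming the plugin is published on npm as babel-plugin-inferno (Babel resolves the short name "inferno" to it):

npm install --save-dev babel-plugin-inferno

Then add it to your .babelrc:

{
  "plugins": ["inferno"]
}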

Differences from React

Inferno strives to be compatible with much of React’s basic API. However, in some places, alternative implementations have been used.
Non-performant features have been removed or replaced where an alternative solution is easy to adopt without too many changes.

Custom namespaces

Inferno wants to always deliver great performance and in order to do so, it has to make intelligent assumptions about the state of the DOM and the elements available to mutate. Custom namespaces conflict with this idea and change the schema of how different elements and attributes might work; so Inferno makes no attempt to support namespaces. Instead, SVG namespaces are automatically applied to elements and attributes based on their tag name.
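
For example, a sketch of rendering SVG without declaring a namespace (per the behaviour described above, the namespace is inferred from the tag names):

import InfernoDOM from 'inferno-dom';

// No xmlns attribute needed: the SVG namespace is applied automatically
// to <svg> and <circle> based on their tag names.
InfernoDOM.render(
  <svg width="100" height="100">
    <circle cx="50" cy="50" r="40" />
  </svg>,
  document.body
);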

The stateful ES2015 Component is located in its own package

React’s ES2015 component is referenced as React.Component. To reduce the bloat on the core of Inferno, we’ve extracted the ES2015 component
into its own package, specifically inferno-component rather than Inferno.Component. Many users are opting to use stateless components with
Inferno’s hooks to give similar functionality as that provided by ES2015 components.

Automatic unit insertion on attributes and properties

Inferno makes no attempt to add units to the numerical style values that React automatically appends units to: a bare number passed as a style value has px added automatically in React. To keep Inferno lean and fast, these expensive checks and their overhead have been removed from the code base; it is completely down to the user to specify the property, including its unit, to achieve the same result with Inferno.
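
A minimal illustrative sketch (the numeric width is just an example):

// React: style={{ width: 7 }} is rendered as width: 7px, the unit being added automatically.
// Inferno: specify the unit yourself.
InfernoDOM.render(<div style={{ width: '7px' }} />, document.body);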

Contributing

Testing

npm run test:browser // browser tests
npm run test:server // node tests
npm run test // browser and node tests
npm run browser // hot-loaded browser tests

Building

Linting

npm run lint:source // lint the source

Inferno is supported by BrowserStack



Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/V5oQwMM2u6g/inferno


Using Amazon Auto Scaling with Stateful Applications

You’ve heard this before. The team has been working on this service and a couple of months later traffic is picking up. Pretty awesome, you think, customers are loving this feature! Hold on, now you hear finance people screaming at the Amazon bill. The application is considerably resource-intensive. You’ve got two options: 1) find the bottleneck and optimize, or 2) limit the cost of running the service. Let’s focus on the latter.

Welcome to Amazon Auto Scaling. If the fundamental premise of the cloud is “use only the resources you need for as long as you need them”, then Auto Scaling adds “dynamic scaling based on traffic”. Bottom line: you save money when traffic is low. Enterprise SaaS is a great use case, since customers use your product during typical business hours, resulting in very low traffic at night. So what does it take to make the switch?

Stateless vs. stateful applications

Ideally you want to be dealing with a stateless application, where terminating one node won’t produce side effects on user experience. Typical stateless apps include frontend web servers or any app that doesn’t rely on keeping session state in memory.

Unfortunately not all software is created equal. Our use case is a video recorder for web-based meetings. While the presenter is discussing slides, the recorder is watching the presentation unfold in real time.
Try to terminate one instance with active sessions and you’re impacting user experience. But there is a solution, which we’ll come back to shortly.

Prerequisites

One thing with dynamically terminating instances is that you can’t rely on SSH access any longer:

  • Logs need to be forwarded to a remote host using Elasticsearch, Splunk or similar.
  • Provisioning an instance is done in an automated fashion. We use Chef and Terraform.
  • Not directly related to Auto Scaling here but proper deployment tooling is also a requirement. We use Jenkins pipelines and Chef.

Architecting the app around Auto Scaling

Our video recorder app went through a few changes. First you need to expose a health check endpoint that returns HTTP 200 if the app is in a good state. Amazon is continuously polling it and will replace unhealthy instances.
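
As a rough sketch, assuming a Node.js service and a /health path (the article doesn’t prescribe a language or path):

const http = require('http');

// Hypothetical check of your own application state.
function appIsHealthy() { return true; }

// Amazon's health checks replace instances that stop returning 200.
http.createServer((req, res) => {
  if (req.url === '/health') {
    res.writeHead(appIsHealthy() ? 200 : 503);
    res.end();
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);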

Then you need the app to properly report your chosen Auto Scaling metric to CloudWatch. For instance, our metric is “number of running jobs”. A Java thread can handle the task, or even a cron job.
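
A sketch using the AWS SDK for JavaScript (the namespace, metric name and job-count helper below are made up; the article mentions a Java thread or a cron job, and any SDK would do):

const AWS = require('aws-sdk');
const cloudwatch = new AWS.CloudWatch({ region: 'us-east-1' });

// Hypothetical hook into the application's own job bookkeeping.
function getRunningJobCount() { return 0; }

// Report the current job count every minute so the scaling policy can react to it.
setInterval(() => {
  cloudwatch.putMetricData({
    Namespace: 'VideoRecorder',
    MetricData: [{ MetricName: 'RunningJobs', Value: getRunningJobCount(), Unit: 'Count' }]
  }, (err) => { if (err) console.error(err); });
}, 60 * 1000);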

One more thing about stateful applications: we want to make sure we don’t disrupt running jobs during a scale-down event. Amazon conveniently provides Lifecycle Hooks, which let you perform a custom action before terminating an instance. For instance: a decrease in traffic triggers a scale-down event. The oldest instance (by creation time) is picked and moves to the Termination:Wait state. Amazon sends a notification using SNS to check whether the instance is ready to be terminated. The instance gets the notification and gives the green light for termination if there is no job running; otherwise it responds with a heartbeat to keep waiting until all jobs are done. This means your app needs a thread listening for SNS notifications.
Interestingly, lifecycle hooks cannot be set up in the Amazon web console; you’ll have to use the CLI.

Amazon SDKs make it pretty straightforward to implement the above two items.
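
For instance, a rough sketch of the instance-side termination logic with the AWS SDK for JavaScript (the group name, hook name and helpers are made up; the SNS notification would arrive via whatever subscription your listening thread uses):

const AWS = require('aws-sdk');
const autoscaling = new AWS.AutoScaling({ region: 'us-east-1' });

// Placeholders for your own code.
const instanceId = process.env.INSTANCE_ID;       // e.g. read from instance metadata
function getRunningJobCount() { return 0; }

const hook = {
  AutoScalingGroupName: 'recorder-asg',           // made-up name
  LifecycleHookName: 'wait-for-jobs',             // made-up name
  InstanceId: instanceId
};

// Called whenever the SNS termination notification for this instance arrives.
function onTerminationNotice() {
  if (getRunningJobCount() === 0) {
    // Green light: let Auto Scaling terminate the instance.
    autoscaling.completeLifecycleAction(
      Object.assign({ LifecycleActionResult: 'CONTINUE' }, hook),
      (err) => { if (err) console.error(err); }
    );
  } else {
    // Still busy: send a heartbeat to stay in Termination:Wait a bit longer.
    autoscaling.recordLifecycleActionHeartbeat(hook,
      (err) => { if (err) console.error(err); }
    );
  }
}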

Capacity planning

Our video recorder app is mostly CPU bound, so ideally you want to keep CPU load no higher than 50%. The goal is to have enough capacity to be ready for traffic peaks in the morning, and remove instances as traffic slows down at end of the day.

Auto Scaling policies are designed around a specific metric. If you’re working with a queue-based model then scaling will be based on the SQS queue size; otherwise, as in our case, it’s based on the custom metric “number of running jobs”.

Good scale-up policies tend to be more aggressive in terms of the instance count to add. In our case we set the scale-up policy to “Add 2 instances when average jobs per instance is >= 3”. Given that our minimum instance count is 4, we trigger a scale-up as soon as we hit 12 simultaneous jobs, which does happen early in the morning.
Regarding scale-down, it’s best to be more conservative, as we want to make sure we aren’t falling short in the middle of a good traffic period. In other words, it’s better to remove instances whenever they are doing nothing. We set our scale-down policy to “Remove 1 instance when average jobs per instance is <= 1”.

Also make sure that both policies are compatible with each other. For instance: scale-up is triggered at 12 jobs and we go from 4 to 6 instances, which puts us right at an average of 2 jobs per instance. If scale-down were triggered at an average of 2 jobs per instance, we would immediately scale back down again.
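
As a sketch of the scale-up side using the AWS SDK for JavaScript (names, periods and thresholds are illustrative; they mirror the “add 2 instances at >= 3 average jobs” rule above):

const AWS = require('aws-sdk');
const autoscaling = new AWS.AutoScaling({ region: 'us-east-1' });
const cloudwatch = new AWS.CloudWatch({ region: 'us-east-1' });

// "Add 2 instances when average jobs per instance is >= 3."
autoscaling.putScalingPolicy({
  AutoScalingGroupName: 'recorder-asg',           // made-up name
  PolicyName: 'scale-up-on-jobs',
  AdjustmentType: 'ChangeInCapacity',
  ScalingAdjustment: 2
}, (err, data) => {
  if (err) return console.error(err);
  // Fire the policy from an alarm on the custom metric reported by the app.
  cloudwatch.putMetricAlarm({
    AlarmName: 'high-jobs-per-instance',
    Namespace: 'VideoRecorder',                   // matches the reporting sketch above
    MetricName: 'RunningJobs',
    Statistic: 'Average',
    Period: 60,
    EvaluationPeriods: 2,
    Threshold: 3,
    ComparisonOperator: 'GreaterThanOrEqualToThreshold',
    AlarmActions: [data.PolicyARN]
  }, (e) => { if (e) console.error(e); });
});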

Lessons learned

  • Scale up and scale down events are not independent. One needs to fully complete before the next one can execute. This can be dangerous when a scale-down event is triggered, putting the instance in the Termination:Wait state until all jobs are done running. The instance can easily stay in that state for 45 minutes. If you have a simultaneous traffic peak, you risk running out of capacity, as you won’t be able to increase the cluster size until that scale-down completes.
  • No new requests are sent to the instance when it’s in Termination:Wait state. This is true when using ELB at least, as Amazon support confirmed. This allows for safe connection draining.
  • Fundamental system metrics matter. A few things came up while transitioning our recorder app to Auto Scaling mode, one of them being CPU overuse. It can be easy to lose track of simple things with the ephemeral nature of instances. We made sure to have alerts set up for CPU usage, load average and RAM usage.
  • One more side effect of ephemeral instances is that local data is lost upon instance termination which makes it harder to troubleshoot customer issues. One solution is to proactively store any useful data in S3. We dealt with it by setting up proper Splunk alerts which would bubble up any serious looking issue. Then we’d investigate on the box and retrieve any relevant data.
  • Watch out for Amazon limits. These cap the number of instances, among other things, that can be spun up within a region. As your Auto Scaling cluster grows, or if you have multiple clusters in one region, you’ll likely hit a limit sooner or later. Make sure to raise the limits to a high ceiling ahead of time, and subscribe to email notifications. We got bitten once when EBS volumes maxed out.

Instrumentation is everything

The key to a successful Auto Scaling transition is proper instrumentation. We made a detailed dashboard showing instance count, job count, unhealthy hosts on ELB, average job count per instance. It helps uncover patterns and confirm existing ones.

Here is an overview of a typical business day:

[Screenshot: monitoring dashboard for a typical business day]

The most important things to notice in the upper-right graph are the scale-up with the early-morning traffic and the cascading scale-down in mid-afternoon. It clearly demonstrates the impact of Auto Scaling during key business hours.

Another important graph is the middle left “simulRecordings”. Each color represents a different instance, which means the sooner the color count increases the sooner we have scaled up and spread traffic out. We can also spot new traffic peaks.

Finally the one on the bottom left corner allows for a reality check of Auto Scaling rules. As the average job count per instance increases, we should expect to see scale up activity in the upper right corner graph.

We highly recommend using StatsD to report application metrics, as it’s the easiest approach and there are libraries for all languages.

In closing, it’s also worth considering the trade-off of refactoring the application versus using Auto Scaling. As we’ve seen, the Auto Scaling lifecycle brings its own complexities. Sometimes it makes sense to refactor the app into a lighter version. One last thing to keep in mind is instance provisioning time. It pays off to keep provisioning time to a minimum in order to keep scaling movements seamless. Ideally you want to have a pre-packaged AMI ready to start, which allows for a total startup time under 1min30s. Another reason to use such AMIs is to reduce runtime downloading of dependencies (which can happen with Chef). Dependencies can be in a bad state, and an auto-scaled instance might never be able to successfully boot up. Packer is a great tool for this and integrates easily into your existing workflow.

Special thanks to Kartick Suriamoorthy for his support and proofreading, and ClearSlide for the opportunity to work on this great project.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/v9n1LFVdaJw/using-amazon-auto-scaling-with-stateful-applications


Cloud.gov

We built cloud.gov on the open source, public Cloud Foundry project. We
completed the first prototype in about three months, and we started
launching production apps about a month after that. We continue to
fine-tune it based on the needs of the apps that teams have built.

Because it’s open source, we can continue to take advantage of
improvements to the operability and security of Cloud Foundry. It helps
us scale.

Because it’s an agile project, cloud.gov is always under development. As
we observe its use, we will continue to add new capabilities. Therefore,
we do not say it is “complete” and we probably never will.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/vMbhHN_rt-4/


Libp2p – p2p network stack





libp2p implementation in Go

libp2p is a networking stack and library modularized out of The IPFS Project, and bundled separately for other tools to use.

libp2p is the product of a long and arduous quest of understanding — a deep dive into the internet’s network stack and the plentiful peer-to-peer protocols of the past. Building large-scale peer-to-peer systems has been complex and difficult over the last 15 years, and libp2p is a way to fix that. It is a “network stack” — a protocol suite — that cleanly separates concerns and enables sophisticated applications to use only the protocols they absolutely need, without giving up interoperability and upgradeability. libp2p grew out of IPFS, but it is built so that lots of people can use it, for lots of different projects.

We will be writing a set of docs, posts, tutorials, and talks to explain what p2p is, why it is tremendously useful, and how it can help your existing and new projects. But in the meantime, check out

The libp2p implementation in Go is a work in progress. As such, there are a few things you can do right now to help out:

  • Go through the modules below and check out existing issues. This would be especially useful for modules in active development. Some knowledge of IPFS/libp2p may be required, as well as the infrastructure behind it – for instance, you may need to read up on p2p and more complex operations like muxing to be able to help technically.
  • Perform code reviews.
  • Add tests. There can never be enough tests.

The go-libp2p repo will be a placeholder for the list of Go modules that compose Go libp2p, as well as its entry point.

Install

$ go get -d github.com/ipfs/go-libp2p
$ cd $GOPATH/src/github.com/ipfs/go-libp2p
$ make

Examples can be found in the libp2p-examples repo. To run the tests for an individual module:

$ cd $GOPATH/src/github.com/ipfs/go-libp2p
$ make deps
$ go test ./p2p/<path of module you want to run tests for>

Links

Extracting packages from go-libp2p

We want to maintain history, so we’ll use git-subtree for extracting packages.

# 1) create the extracted tree (has the directory specified as -P as its root)
> cd go-libp2p/
> git subtree split -P p2p/crypto/secio/ -b libp2p-secio
62b0a5c21574bcbe06c422785cd5feff378ae5bd
# important to delete the tree now, so that outdated imports fail in step 5
> git rm -r p2p/crypto/secio/
> git commit
> cd ../

# 2) make the new repo
> mkdir go-libp2p-secio
> cd go-libp2p-secio/
> git init && git commit --allow-empty

# 3) fetch the extracted tree from the previous repo
> git remote add libp2p ../go-libp2p
> git fetch libp2p
> git reset --hard libp2p/libp2p-secio

# 4) update self import paths
> sed -someflagsidontknow 'go-libp2p/p2p/crypto/secio' 'golibp2p-secio'
> git commit

# 5) create package.json and check all imports are correct
> vim package.json
> gx --verbose install --global
> gx-go rewrite
> go test ./...
> gx-go rewrite --undo
> git commit

# 6) make the package ready
> vim README.md LICENSE
> git commit

# 7) bump the version separately
> vim package.json
> gx publish
> git add package.json .gx/
> git commit -m 'Publish 1.2.3'

# 8) clean up and push
> git remote rm libp2p
> git push origin master


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/AuFDpJuvMLc/go-libp2p


Hacked in a public space? Thanks, HTTPS

Have you ever bothered to look at who your browser trusts? The padlock of an HTTPS connection doesn’t mean anything if you can’t trust the other end of the connection and its upstream signatories. Do you trust CNNIC (China Internet Network Information Centre)? What about Turkistan trust or many other “who are they” type certificate authorities?

Even if you do trust whoever issued the certificate it doesn’t mean much if the network cannot be trusted. A lot of experts claim “HTTPS is broken” and here is one small example of why. If you sit in a coffee shop and go surfing you can quite easily end up being the victim of a man-in-the-middle (MitM) attack. All a potential attacker needs is a copy of Kali Linux, a reasonably powerful laptop and coffee!

But wait, you cry, aren’t certificates supposed to protect us from exactly this type of thing? Yes but… essentially, in our coffee-shop scenario the connection can be forced to run via the MitM laptop using a program called SSLstrip to copy the data as it is passed back and forth to Gmail. We get the traffic from the victim by poisoning the ARP cache and pretending to be the router. SSLstrip forces a victim’s browser into communicating with an attacker’s laptop in plain-text over HTTP, while the adversary proxies the modified content from the HTTPS server.

Of course, you need to hack the coffee shop’s router, too.

The HTTPS between Gmail and you is now readable because you get the decrypted plain text data before it is encrypted and sent to Gmail.

It isn’t just coffee shops that present this risk. Frequently, SSL inspection is used in offices of larger companies to monitor staff web activity. Several companies such as FireEye and Bluecoat provide specialised appliances to do this at wirespeed, essentially rendering them unnoticeable. Governments can also do the same using FinFisher or other tools running on ISP networks.

This is one of the main reasons I tell people not to check their web mail on their work computer. Employers probably have the right to do that written into their employment terms and conditions. Companies do, however, have more legitimate reasons for breaking SSL: scanning for malware-related traffic and data loss prevention (DLP being the new hot-ticket item). If you couldn’t look inside an encrypted packet you would have no idea what’s flowing across the network most of the time, other than source and destination.

What are the mitigations against all these for the average Joe user? In reality not a lot. Use your common sense when connecting to a Wi-Fi hotspot. Ask yourself:

  1. Do I know I am connecting to the correct Wi-Fi hotspot?
  2. Do I trust that hotspot and its owners?
  3. Where possible, use a VPN, thereby somewhat mitigating MitM attacks

On a larger scale there are a few things that can be done but require effort. If a site provides only HTTPS then sslstrip would fail as it can’t fall back to HTTP. Also browsers are becoming better at dealing with these types of issues.

Some browsers such as Chrome use a new technique called certificate pinning. Certificate pinning, though, is limited to Google sites at present. This technique creates a digital fingerprint for each HTTPS site visited and afterwards compares it to the certificate being presented. It will warn the user if things don’t look as they should. Another method that site owners can use to protect their clients is HSTS. This tells the browser on first visit that the site is HTTPS only and therefore the browser should only ever connect to it via HTTPS for a determined length of time.

Any attempt to redirect the browser to an HTTP version of the site will be stopped by the browser. The one weakness with this technology is that the browser has to have first visited the genuine site to receive the HSTS response. But if you make sure you’ve visited a site that supports HSTS on a trusted network, your browser will then ensure it is never redirected to HTTP.

A site owner who knows they will only ever use HTTPS and uses HSTS (HTTP Strict Transport Security) can have their website added to the HSTS preload list in the Chromium project. Getting your site added to that list means that Chromium will never allow an unencrypted connection to your site.
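
For the site owner, sending the header itself is a one-liner; a minimal sketch in Node.js (max-age and includeSubDomains are the standard HSTS directives):

const https = require('https');

const options = { /* key and cert: your TLS private key and certificate */ };

https.createServer(options, (req, res) => {
  // Tell the browser to use HTTPS only for the next year, including subdomains.
  res.setHeader('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
  res.end('hello over HTTPS');
}).listen(443);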

A lot of companies who deploy monitoring will often install their own root certificates on company computers. This lets the proxy devices self-sign certificates for any domain and be trusted by those computers.

HTTPS is not the silver-bullet online defence shield a lot of users believe it to be on public networks, meaning activities such as online banking and shopping are done at their own risk.

While there are some additional steps you can take, you should – therefore – continue to exercise caution when using a network you don’t control and think about the type of information that you may be sharing with people you may not want to. ®


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/kk4i6uACxj4/


The Art of Pivoting: How I got out of a toxic relationship with academia

The Art of Pivoting

Most of you know me as @BorisAdryan on Twitter or from my technical Internet-of-Things blog Opinions & Experiments. I’ve had a Medium account for a while, and I keep it for more subjective content. This is my second post:

The Art of Pivoting, or, less pretentiously: how I changed from being a frustrated life-science academic to using my skills as a well-paid consultant for industrial engineering problems.

Setting the scene

It’s June 2016 and I’m packing my bags to move back to Germany after 12 years of academic research at the University of Cambridge and surrounding institutes, like the famous MRC Laboratory of Molecular Biology, forge of Nobel Prizes and home to eminent scientists like Watson & Crick, Sanger, Perutz, the ones you know from Jeopardy or biochemistry textbooks. I had come from a Max-Planck-Institute in Germany, where I had previously completed a life science PhD in slightly under three years. When I started my degree there in 2001, I had been the fastest student to fulfil the requirements for the Diplom in biology at my home university — and already had two peer-reviewed publications in my pocket. You may see the trajectory: success, efficiency, coming from good places, going to good places; the basic ingredients for a successful academic career.

Me in the “Model Room” at the old MRC Laboratory of Molecular Biology, with one of the original myoglobin structures.

Up is the only way

My wife and I had moved to Cambridge in 2004 to both do a brief postdoc abroad. Spice up the CV a bit, meet interesting people before settling down with a normal job back in the home country, that sort of stuff. The work I did was advanced and using technology not available to many people in Europe outside Cambridge at the time, but not revolutionary. However, combining experimental molecular biology and computational analysis of large biological datasets had just seen its first great successes, and I was a man in demand with my coding skills. Publications are the number one currency to climb the academic ladder and, by 2007, I had accumulated enough credit both in terms of scientific output as well as reputation in the field that I seriously considered an academic career for life.

Here, it may need to be explained to everyone who hasn’t spent time in academia why seriously considered is the appropriate phrase. It was a conscious decision for the long game. It’s the Tour de France or Iron Man of a career. You have to believe that you can do it and secure a position against all odds and a fierce competition. You have to be in it to win it. Chances are that you’re not going to make it, a fear that’s constantly present but there’s normally no-one you know who you could ask what life on the other side looks like, because failed academics -an arrogant view I held myself for a long time about those not making it- tend to disappear, ashamed and silent. Or get normal, unglorious jobs. According to my wife, who left academia when our second child was on the way, you got to be “stupid enough to commit to that”, given that academic salaries are poor compared even to entry-level industry positions, the workload is bigger, quite similar to that of running a start-up, and the so-called academic freedom is these days reduced to framing your interests into what funding bodies consider worth supporting.

Speaking of start-ups. 90% of start-ups fail. That’s a slightly better success rate than getting into the game that allows you to fight for a permanent academic position in the first place. In my cohort of Royal Society University Research Fellows, the success rate of obtaining a salary whilst building up a team was about 3%. What happens to the others who want to do science in academia? I’m sure many would not mind to stay postdoctoral scientists forever and pursue research in support of some other principal investigator (PI, a research group leader), but the system doesn’t cater well for that career track. Up is the only way. If you can’t make it to the group leader level, chances are that sooner or later you’re running out of funding. That’s because on the postgraduate level, especially after the financial crisis, there is a rather limited amount of money in the system that allows employment which resembles a regular job. Ambition, ego or an almost unreasonable love for the subject is the key driver for everyone else. Money is dished out competitively, and of course it’s considered an honour to be bringing your own salary to work unsocial hours for a rising star or established hot-shot. This sees many PhD level researchers leave academia sooner or later.

The postdoctoral level is attractive to employers in industry, as applicants are fully qualified scientists with hands-on experience in their subject.

This isn’t necessarily a bad thing. It’s just not what many of them had envisaged when they started their journey in university because they were hoping to do independent research in an academic setting.

Good times

I was fortunate enough to secure a University Research Fellowship from the Royal Society in 2008. Their package is great. Initial funding for 5 years (a good salary plus a small budget for commodity items and travel) followed by review, and then another 3 years. That’s pretty amazing. Know that other UK research councils might also give a young group leader money for an additional post to hire an assistant straightaway, but the overall funding period is just five years without any extension, and there is an enormous pressure to deliver. The Royal Society know about The Long Game. Eight years are over sooner than one might imagine in a research project, but it’s enough time to fail once or twice with a research idea and recover while getting back on your feet. One wrong strategic decision with any other startup package, and you’re history. Eight years is also sufficient to write grant applications, a process that consists mostly of waiting for an often negative outcome; and thus several attempts increase the chances to obtain actual research funding for the implementation of one’s ideas. I was quite lucky in that respect, as I had secured my first proper 3-year grant that paid for an assistant early in 2009.

The joys of starting a new lab: “Join me, I need more hands!” from my hiring page back then.

It was a great experience and I absolutely don’t regret my attempt in academia. I worked with some of the smartest people I’ve ever met and the learning journey was quite mutual. Originally trained as biologist and being a self-taught programmer, I benefitted considerably from hosting and supervising a postdoc and several students from computer science, engineering and mathematics. It was actually my very first PhD student who taught me the basics of machine learning, and my postdoc who introduced me to the art of agent-based models and large-scale simulations, methods that have been crucial to my professional success ever since. While my job had already turned to that of a research administrator and mentor rather than a hands-on scientist, my crew made sure I knew about their stack, including version control and other tips and tricks.

I’m not sure of the breakdown over time, but by 2015 I had hosted a total of more than 40 researchers. At the peak of our success we were a team of 15 international, interdisciplinary scientists: postdocs, PhD students, Master and project students, and a few academic visitors. Papers were and, in fact, are still coming.

A painful realisation

The first few years of my Fellowship were a blast. Funding, research prizes, a continuous stream of talented applicants, and regular publications, both from our lab alone and from international collaborations.

Good times in the lab: Group members discussing results with collaborators.

In the first four years just doing science, albeit often in the form of writing grant applications or papers, was my primary activity. That turned into spending time worrying. The Royal Society is pretty clear that in accepting one of their Research Fellows, the Department and University commit to furthering that person’s career, culminating in a permanent appointment. That seems to work rather well around the UK, except for Oxford and Cambridge, where Research Fellows are seen as a renewable resource that is naturally going to replenish itself, attracting great candidates for the opportunities these world-class universities can offer. In other words, it’s silently agreed and commonly understood that Research Fellows need to find a home somewhere else after they’ve generated revenue and prestige for Oxbridge. In my case, add into that mix a Department whose purpose was quite openly debated at the University at the time (“why have a Genetics Department if everyone else is doing genetics as well?”), a Head of Department who was widely seen as a placeholder until it was clear what was going to happen, and the inertia of academic decisions in general…

Towards the 5-year review with the Royal Society, we were asked to provide proof of our employability, the standing in the field, collaborations within and outside the University, publications, and, importantly, any job offers. I took the opportunity to test the waters. Two good Russell Group universities had offered to host me for the remaining three years of my Fellowship, one even with a proleptic appointment.

Unfortunately, with a wife in management-level full-time employment and three kids, I was unable to accept any of these offers outside Cambridge. It became suddenly very clear to me that if I wanted to stay and things turned against me, I would have to consider alternative career paths. At the same time, it was clear that I had to invest every possible resource into obtaining an academic post at the University of Cambridge if I wanted to do academic science for life. I was entering a world of pain.

All systems 110% — at all times

There isn’t a better motivator than fear.

It’s a common joke that academics have a problem with time management because of their inability to say no. Everyone higher up the food chain tells young investigators to say no. No to teaching. No to committees. No to administrative duties. “Concentrate on your science, because that’s what you’re going to be assessed on”. At the same time, it’s very clear that if the choice is between two candidates, the better departmental citizen is more likely to be successful. In fact, my good citizenship was explicitly spelled out in my Head of Department’s recommendation letter to the Royal Society, while at the same time pointing out to me that I might want to consider a few less activities.

The rules about departmental citizenship are nowhere written down. It’s just what you hear between the lines in comments about the poor performer who failed to submit his part for a communal bid, or the raised eyebrow about some lazy bastard who refused to teach. Unless the system actively discourages anyone with the ambition to secure a permanent post from taking on additional responsibilities, unestablished PIs are going to pour themselves into research, teaching, administration, outreach, you name it — at 110% of what’s healthy.

Add three little kids into that mix, and it may become clear why over time I’ve acquired a collection of meds vast enough to run a burn-out clinic.

Removing perspectives

Five years into my Fellowship, I felt more and more like a chased rabbit. Work was not about science anymore, work had become that abstract thing you need to do in order to secure a post. Also, with all the activities I agreed to do and to participate in, the time I actually spent doing my own hands-on research had become marginal. While my research group was at its peak and, from the outside, I looked like a very successful scientist, my job and my attitude towards it had completely changed. I began to hate my job.

Running a prolific computational biology research team at the University of Cambridge, I imagined it would be easy to switch into a management role in pharmaceutical R&D. I sent a few applications and had a few telephone conversations, but very soon it emerged that I did not have the relevant qualifications -that is: no business experience– to successfully run a group in industry. My wife explained to me that I had long surpassed the point-of-no-return, because just as you have to earn your stripes in academia to be trusted with directing research, you do have to have industrial project experience and considerable domain knowledge about drug development to be trusted with a R&D team. My most realistic chance would be a more technical role, at least to start with.

Swallowing my pride, I applied for Senior Scientist positions, or, as I thought of it, I applied to become a compute monkey for someone with a lot less academic credibility. However, while next-generation sequencing, gene expression analysis, pathway reconstruction and pipeline development were all happening in my own research group, I was clearly not the one who knew the nitty-gritty of their implementation anymore. The interviews were humiliating. “What’s your favourite Bioconductor package for RNA-seq?” — “Uh, I’d have to ask my PhD student for that.” “How do you force the precise calculation of p-values in kruskal.test?” — “I’d google it!”. Needless to say, I didn’t get a single offer.

Truly fucked: I was stuck in academia!

The moral of the story seemed very clear to me: Postdocs are great and appreciated in industry because they still know how to do stuff. As an academic group leader, you are essentially useless to industry. You can handwave your way through and claim management skills and theoretical knowledge, but most of what you do on an everyday basis (writing papers! navigate funding body websites! library committee! teaching students!) is highly irrelevant for industry.

It can always get worse

We got a new Head of Department in 2013. I’m not going to judge her. Let’s just say that the road to hell is paved with good intentions. And I was in for the next shock: For years mentors and colleagues had treated me as if my appointment was just a question of time, but unfortunately my Department had never had the resources to make me a real offer. Retrospectively, I can’t remember a single time that my mentors had told me to seek employment elsewhere, change institutions or even warned me that things might not pan out okay. We knew that the new Head had negotiated at least two posts, but -I rephrase one of my senior colleagues politely- it wasn’t clear whether she didn’t know or had not decided or had not even thought about what to do with them. And me. And the handful of other junior PIs in the Department.

To bring the situation to a conclusion, I mentioned that a neighbouring Department had shortlisted me for interview for a post with them. I had hoped for a quick decision and maybe -unrealistically- even an ad-hoc offer while telling her the news over tea. The response could not have been more devastating for me: “Boris, that’s great. You see, jobs aren’t easy to get these days, and it would be nice if you could stay around.” She meant that without any malice. She was truly empathetic and happy for me. Nevertheless, I spent the rest of the day on auto-pilot, unable to come up with any reasonable thought.

Trust your gut feeling

Around that time I had started to interact with the maker community, and tinkering had become a hobby to take my mind off the sodding job. I already had a few good contacts in the tech industry and, by a weird chance encounter, was offered an extremely well-paid job in academic outreach and pre-sales at a tech company. It wasn’t the most exciting of all jobs, but it offered almost three times my academic salary and could have been a door opener for many other opportunities within a large, international company.

My interview with that other Department went well. It was alluded to me that they were still debating, but I was probably very close to being offered a permanent post. In the end, it didn’t happen. I was close to a nervous breakdown. Left with no other perspective to stay in academia for much longer than the remaining three years on the Fellowship, and wanting to seize the opportunity, I decided to shut down operations and accept the industry offer.

I let my mentors talk me out of it. In turn, I signed up for two more nerve-wracking years struggling for a job that would never materialise.

I’m sure they acted with the best intentions. I’m sure from their perspective, it was too early to give up, that there was still good time to change my fate. However, they were unaware how costly that was for me emotionally, how often I suffered debilitating anxiety attacks.

A friend with whom I shared an earlier draft of this post commented on the above paragraph:

I am wondering — how qualified are these people to mentor you/us? They only know one side of the fence. They have grown into their established positions in an environment that was very different than today’s. And, let’s face it, they are mostly part of the establishment they created and maintain, and are unlikely to be ready to change it.

That’s a hard judgment and I’m not entirely sure it’s true for my mentors; nevertheless I can’t deny that the same thoughts had crossed my mind before. Also, having three kids puts an entirely different economic pressure on an earner, and I’m not sure the gentlemen did appreciate that.

Losing more incentives

More than 45% of businesses do not recover from a major disaster. I’d say the same is true for research groups. In October 2014, my personal friend and lab manager suddenly and unexpectedly died. He had been with me since 2009 and was responsible for much of the day-to-day operations in my lab, particularly the molecular biology and fruit-fly research that was going on in my group. It was a major blow for me and the lab. Besides the technical and organisational roles he had in our research, he was the experienced keeper of unwritten wisdom; and as a friend and my longest-serving employee, my primary port-of-call when I had professional issues and doubts. Losing a friend I truly enjoyed working with meant the loss of another incentive to work.

While my Department was quite supportive to my grieving group members and provided some unbureaucratic administrative help in the short term, independent of this sad event, they let me down when it came to the longer term perspective a month thereafter.

The writing on the wall

Most research grants are awarded for a 3-year period. It requires the project lead to have employment with the University for the entire duration of the grant. I knew that I was in a difficult position with just two more years to go on my Fellowship and discussed the case with my Head of Department. This is where things got complicated. She had been ill-advised by the funder and was under the impression that I could simply add my final year’s salary as cost to that application. Following her encouragement, I wrote a proposal. The mistake was soon discovered by the funder and we were asked to retract the application, unless the Department was willing to provide an underwrite for my salary for the final year of the grant, if awarded. Twelve months of a mid-career level academic salary, in return for significantly higher overheads that the University would have received, and fuel for further research.

She said no. No reasons given.

In effect, that left my group without the ability to secure any funding to do our research. We had pilot data, we had experience with a new method, we had a game plan, we had relevant previous publications. And a Head of Department who deemed the work not worth supporting.

Two months later I had a formal appraisal. The conversation with the Head went well, she confirmed that my scientific strategy was reasonable, that my merits were strong, that I had a good reputation. And then we talked about the next steps.

Similarly to our first encounter, she was encouraging about all the wrong things.

Statements along the lines of “Oh, you have a strong interest about things outside the life sciences? That’s good. In case the plans with academia don’t work out.” didn’t really signal I was in for a job any time soon. We even talked about the skills that I would need to improve my employability by industry, and I had to explain the difference between myself and a web stack software engineer to someone who didn’t know a single line of code. My last formal feedback and mid-term development plan therefore states: “Should learn more node.js”. Great advice. I framed it. It hangs in my toilet now.

The endgame

A brief memo in January 2015 informed our Department that the two posts were going to be advertised in a matter of days, and that the competition would be open to everyone. Despite feeling miserable, I still wanted to go for it and submitted an application. However, after the treatment of the two previous years, I wasn’t even convinced anymore that I really wanted to work like that. I’m tempted to say that I tried my best during the interview, but I’m well aware that my lack of enthusiasm probably showed. I had just no fight left in me to act all “oh, I’m really looking forward to this very exciting opportunity…” or pretend to be a visionary scientist. They asked about the big questions of my science, as if that had not been laid out in the grant proposal they’d retracted. Big questions… …my arse, as the environment and tone of the previous two years had me focus more on being employable, anywhere really, than on thinking about actual research.

I was fed up. The final no from the Department still hurt, but felt like a relief; much like getting out of a toxic relationship.

I had briefed my group. On the day when I was told that my application was not successful, I announced the closure of my laboratory.

Preparing for queenside castling

Wait what? You might think now. Did I not just tell you that I was not employable outside academia? How could I be relieved? Read on!

Unintentionally skilling up

In the beginning of 2013 I didn’t have a plan how to get my neck out of the noose. I just had a few geeky interests outside academia, funnily enough inspired by an educational toy computer invented in Cambridge that came out a year earlier: The Raspberry Pi.

  • I had started playing with the Raspberry Pi and soon thereafter with Arduinos and more professional microcontrollers.
  • I rediscovered the joys (and pains!) of low-level C programming, something I had not done in nearly 15 years.
  • I developed an interest in home automation and the hardware interfaces and wireless technology around it.

(If it’s not clear how these bullet points are connected — don’t worry: the geeks know).

A Raspberry Pi toy Linux computer with a high-power radio transceiver.

While building up expertise in these areas by attending hobbyist meetings as well as industry workshops, I found that I really liked to talk to people about it. Geeking around became a real hobby, with a surprising social component. Take that, academic loner without a life outside the lab! So I started going to Meetup groups, but also got certified as STEM Ambassador and went into schools to teach kids about technology and programming in CodeClub. And I volunteered to give my first presentations, sharing my geek interests at Raspberry Pi enthusiast meetings, called Raspberry Jams. This was probably the first time that I learned that my new interests around hardware and software actually had a name: The Internet of Things.

The biggest influence in what follows were the Internet of Things Meetups in London.

In a nutshell, a Meetup is a typically free-of-charge gathering of like-minded individuals, and there are many different ones on all sort of topics. The monthly IoT London Meetup usually features three speakers from different backgrounds, back then often a hobbyist, an artist and a professional, who entertain the group with short talks for ten minutes each. A great format for learning!

During one of my first Meetups an entrepreneur with a reasonable idea and a hardware prototype pitched his product. A convincing case, except that, from a data-analytics perspective, his strategy seemed flawed. At first I was shy, mind you, a biologist in a meeting of technologists, but when I informally voiced my doubts, I suddenly found myself at the center of a conversation. People were taking me seriously!

Testing the waters in business

I started going to other IoT events, partly out of interest in IoT and the businesses in the field, partly to see what the analytics field in particular looked like. Sometimes I even volunteered to help out with name badges and running errands, in exchange for access to conferences that otherwise are charged at a premium rate. What I had heard through the grapevine was confirmed: Most out-of-the-box offerings around IoT data analytics were neither understood by the sales people, nor by their prospective customers. There was a distinct need for someone who understood data science and could communicate its principles in a simple and business-oriented way. That was me! After initially providing consultancy informally and often unpaid -remember, I was just a biologist with a geek interest- I finally registered a business. If you are into IoT, you may have heard of it: thingslearn.

thingslearn Ltd.: Data analytics, machine learning and context integration for the Internet of Things.

At this stage, IoT was still just a hobby and despite occasional commercial activities, I was still a full-time academic. Nevertheless, people started to take notice. In 2013 an open-source project called Node-RED was released, a visual programming environment for the Internet of Things. I had come across it at one of the IoT London Meetups and, together with friends from the same group, we were amongst the first to drive it to its limits and pester its developers at IBM with feature requests and bug reports. The developers referred to it as a plumbing tool for IoT data, but how the plumbing is done remains very much a problem for the user. In the spirit of an academic, I sought to systematically test different cloud platforms for their functionality and ease of interaction with Node-RED. Access numbers to that section of my blog -back then still that of my research group at the University- went through the roof. The CEOs of two IoT platforms asked for my time, wanting to know exactly what I liked and what I didn’t like — because it mattered. I had started to make a name for myself in IoT.

Conferences — that’s where professionals speak, no?

In August 2014 I stumbled across a tweet by one of the Node-RED developers saying that, unfortunately, he was unable to deliver his presentation at an IoT conference in Berlin. Half-jokingly, I mentioned that I was going to be in Germany on holiday anyway and would happily take his speaker slot. Within 30 minutes, I had an email from the conference organisers. Within two hours, I was planning my first talk at a professional IoT conference. September 2014 saw me wearing business-casual, fully mic'ed up with a stick-to-your-face headset, and ready to rumble. I made a ton of very important contacts.

Me. The first time wearing a shirt in a professional context.

Inspired by my success, I submitted a talk proposal to one of the best IoT developer conferences in the world: thingmonk. I could hardly talk about Node-RED again, especially since its very developers were coming to the event as well, so instead I offered my genuine perspective as a big-data practitioner from the life sciences, and what our work on databases, minimal required metadata, data standards, repositories and ontologies could teach the IoT. The talk was accepted and very well received.

It slowly emerged that people accepted me as a domain expert for context integration in the IoT, not least because nobody else was speaking about it.

From there, things went very quickly. I got invitations to talk about the subject at quite a few data science and IoT conferences. And to give you an idea of the timing: I held my O'Reilly webcast on IoT ontologies in May 2015, a week before I announced the closure of my research group and the end of my academic career.

Moving in for the kill

Let's rewind for a moment. Five months earlier, at thingmonk, I had a very good chat with one of the other speakers, the CEO of a London-based startup. I explained to her my doubts about academia, but also my doubts about my own ability to be successful in industry. She invited me for an internship, or rather, to stick around and see what people were doing and how they were doing it, and offered me some tech training in her obscure programming language so I could see what work in the real world looked like. A week after my academic appraisal (the one that recommended I should learn more node.js), in the week before Christmas 2014, I set up my laptop in a London office.

The startup environment was entirely different from what I had experienced in my job interviews with pharmaceutical companies. Everyone was googling Stack Overflow. People learned on the go, and they learned fast and delivered impressive production-ready solutions. But everyone was different and had a different skill set, and that difference was appreciated. My doctorate even made everyone assume that I would surely be the smartest person in the room. 🙂

We forged a strategic partnership — not as business partners, but as friends.

It became clear that, if the academic shit really hit the fan, I was always welcome to come back. I would bring data science knowledge, and in turn I’d learn devops.

Pivoting like a boss

The moment my academic career was over, I contacted my friend. We had previously discussed collaborations that her company and mine could and should pursue, if only I had more time. I was ready!

Over the next months, I took on a big hardware project and, as new skills, learned how to design printed circuit boards and how to optimise an embedded system for power saving. I took on a data science project, which exposed me to geographic information systems (GIS) and the algorithms and methods employed in that field. And I learned how to lead a pre-sales conversation, when the customer still needs convincing that data and analytics are the answer.

I'll keep this section short, as there were other projects, some still ongoing. The short version: nobody expected me to know everything right from the start. People in industry are a lot more lenient than academics. And a lot more appreciative.

In academia we are taught to question everything and criticise everyone, whereas people in industry are first and foremost interested in how what you're doing helps them.

By summer 2015 I had fully transitioned into my role as Founder, Director and primary Doer-of-Things at thingslearn. That was enabled by my Royal Society University Research Fellowship, which supported me throughout the entire period. While my Fellowship was originally awarded for genomic research, the Royal Society allows people to develop in different directions, especially when there are potential commercial applications of the research. In fact, the Royal Society's Business and Innovation courses, which I took in 2014 and 2015, teach out-of-the-box thinking and prepare academics for a role as founders and entrepreneurs. I'm infinitely grateful for that.

The time I spent investigating how the data sharing principles of modern biology may be used in an IoT context was thus a logical extension of my prior research.

No other funder would have allowed this. I was even further encouraged when the Royal Society invited me to advise their policy group on machine learning in January 2016, and to speak at a Café Scientifique in Manchester this coming July about the data problems of genome research and IoT.

In the meantime, I had become a regular speaker at IoT conferences near and far. With my academic training in machine learning and data analytics, my geeky interest in technology and my passion for good user experience, I filled the void between engineers and marketing.

Preparing to move

In early summer 2015 I had spotted an advertisement for a professorship at a German university, focus: the Internet of Things. Given their obsession with paper qualifications, I didn't believe that an electrical engineering department would seriously consider me as a candidate. However, they did. I went through a series of interviews over a few months, every time with people higher in the academic hierarchy than before.

A picture I took at a gallery while visiting for job interviews. You probably have to know German culture quite well to understand why this picture rings so true.

My wife and I started looking around for houses in the area. Good schools, cheap houses, great quality of life — but not because of choosing or being able to afford the right neighbourhood. No.

This was Germany, a country in which the provision of good education is expected from the government, where cellars and attics and double-glazing are a minimal housing standard, and where the cost of living is so much lower than in the UK.

All things considered, we decided that, even if my wife wouldn't be able to work, we wanted to move back.

The professorship didn't materialise. Or, let me rephrase that: I'm still waiting for the final outcome, eight months after the last interview, with occasional brief reminders that I'm still under consideration but that the formal appointment is now in the hands of the local government… However, we definitely wanted to move back home after 12 years in the UK (project name: box-it-for-Brexit) without having to rely on the mercy of some lengthy administrative process, and so I started to go through my LinkedIn contact list (which I had grown to more than 500 IoT contacts in two years).

Through my continuous outreach activities on Twitter and my personal blog, along with conference visits, giving talks and just being present whenever it mattered, I was well known in the field. I sent out four more or less informal application letters to people I trusted; all four yielded invitations to interview.

In the end I signed a contract with a large German engineering company, where I’m going to do what I’ve enjoyed doing for the past couple of months: read, write and talk about IoT, analyse data, write code — with sufficient freedom to look at the bigger picture, and for a salary that even the professorship could not compete with.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/YQntk6QBxWo/the-art-of-pivoting-88c80f0fabd8

Original article

WordPress Sites Under Attack From New Zero-Day In WP Mobile Detector Plugin

An anonymous reader writes: A large number of websites have been infected with SEO spam thanks to a new zero-day in the WP Mobile Detector plugin, which was installed on over 10,000 websites. The zero-day had been used in real-world attacks since May 26, but only came to light on May 29, when researchers notified the plugin's developer. Seeing that the developer was slow to react, security researchers informed Automattic, who had the plugin delisted from WordPress.org's Plugin Directory on May 31. In the meantime, security firm Sucuri says it detected numerous attacks with this zero-day, which was caused by a lack of input filtering in an image upload field that allowed attackers to upload PHP backdoors onto victims' servers with incredible ease and without any tricky workarounds. The backdoor's password is "dinamit," the Russian word for dynamite.
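To illustrate the class of bug in general terms (this is not the plugin's actual code): an upload handler should accept only an explicit allowlist of image types and check the file content, not just the name. A minimal sketch in Node.js, with all names and values purely illustrative, might look like this:

```javascript
// Generic sketch of server-side upload filtering; unrelated to the WP Mobile Detector code.
const path = require("path");

// Only harmless image extensions are accepted.
const ALLOWED_EXTENSIONS = new Set([".jpg", ".jpeg", ".png", ".gif"]);

// Leading "magic bytes" for the allowed formats, so a renamed .php file is still rejected.
const MAGIC_BYTES = {
  ".jpg":  [0xff, 0xd8, 0xff],
  ".jpeg": [0xff, 0xd8, 0xff],
  ".png":  [0x89, 0x50, 0x4e, 0x47],
  ".gif":  [0x47, 0x49, 0x46, 0x38]
};

// Returns true only if both the file name and the first bytes look like an allowed image.
function isAcceptableImageUpload(fileName, content) {
  const ext = path.extname(fileName).toLowerCase();
  if (!ALLOWED_EXTENSIONS.has(ext)) {
    return false; // rejects e.g. "backdoor.php" outright
  }
  return MAGIC_BYTES[ext].every((byte, i) => content[i] === byte);
}

// A PHP backdoor renamed to "cat.jpg" still fails the magic-byte check:
console.log(isAcceptableImageUpload("cat.jpg", Buffer.from("<?php system($_GET['c']); ?>"))); // false
console.log(isAcceptableImageUpload("photo.png", Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d]))); // true
```

The reported vulnerability was the absence of any such filtering, which is what made uploading a backdoor so easy.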





Original URL: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/XldKZd9pBWI/wordpress-sites-under-attack-from-new-zero-day-in-wp-mobile-detector-plugin

Original article
