Tor: ‘Mystery’ spike in hidden addresses

By Chris Baraniuk
Technology reporter

Image caption: People use Tor to browse the internet and communicate in private

A security expert has noticed an unprecedented spike in the number of hidden addresses on the Tor network.

Prof Alan Woodward at the University of Surrey spotted an increase of more than 25,000 .onion “dark web” services.

Prof Woodward said he was not sure how best to explain the sudden boom.

One possibility, he said, might be a sudden swell in the popularity of Ricochet, an app that uses Tor to allow anonymous instant messaging between users.

Tor, or The Onion Router, allows people to browse the web anonymously by routing their connections through a chain of different computers and encrypting data in the process.
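The layering can be sketched in a few lines of Python. This is a toy illustration, not real cryptography: base64 stands in for each relay's encryption, and the three-hop circuit names are invented.

```python
import base64

def wrap(message, relays):
    # Add one layer of encoding per relay, so each hop can peel exactly one.
    # Real onion routing encrypts each layer with that relay's key;
    # base64 is a stand-in that just makes the layering visible.
    data = message.encode()
    for relay in relays:
        data = base64.b64encode(data)
    return data

def peel(data, relays):
    # Each relay removes its own layer; only the final hop sees the message.
    for relay in relays:
        data = base64.b64decode(data)
    return data.decode()

relays = ["guard", "middle", "exit"]  # hypothetical three-hop circuit
onion = wrap("hello", relays)
print(peel(onion, relays))
```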

‘Unprecedented’ activity

On his blog, Prof Woodward noted there had not been a similar increase in .onion sites in the history of the Tor network.

“Something unprecedented is happening, but at the moment that is all we know,” he told the BBC.

“It is hard to know for certain what the reason is for the jump because one of the goals of Tor is to protect people’s privacy by not disclosing how they are using Tor,” said Dr Steven Murdoch at University College London.

Another curiosity described by Prof Woodward was the fact that, despite the rise of hidden addresses, traffic on the network has not seen a comparable spike.

Image caption: It is generally not possible to decipher the content of traffic on the Tor network

He said there was a chance the spike was due to a network of computers called a botnet suddenly using Tor – or hackers launching ransomware attacks.

It could even be the result of malware that might be creating unique .onion addresses when it infects a victim’s computer – though there is no evidence yet for this.

Prof Woodward said he believed that a rise in the use of an anonymous chat app called Ricochet – which has just received a largely positive security audit – was the most likely explanation.

Dr Murdoch said this was indeed a possibility but added that the spike could also be the result of someone running an experiment on Tor.

What is Ricochet?

Ricochet uses the Tor network to set up connections between two individuals who want to chat securely.

The app’s website states that this is achieved without revealing either user’s location or IP address and that, instead of a username, each participant receives a unique address such as “ricochet:rs7ce36jsj24ogfw”.
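Those addresses follow the classic .onion scheme: roughly, a base32 encoding of a hash of the service's public key. A hypothetical sketch of that derivation (the key bytes here are made up, and real implementations differ in detail):

```python
import base64
import hashlib

# Hypothetical key material; a real service derives this from its keypair.
public_key = b"example-public-key-bytes"

# Classic v2 onion identifiers use the first 80 bits of a SHA-1 digest,
# rendered as lowercase base32, which yields a 16-character id.
digest = hashlib.sha1(public_key).digest()[:10]
address = base64.b32encode(digest).decode().lower()
print("ricochet:" + address)
```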

Ricochet has been available for some time, but on 15 February reasonably positive results of an audit by security firm NCC Group were published.

On his blog, Prof Woodward noted that every new user of Ricochet would create a unique .onion address when setting up the service.

That could account for the surge in services, though he admitted 25,000 new users for the app in just a few days would suggest “spectacular” growth.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/Hyc2quNjoKA/technology-35614335

Original article

Creating your own IPython server from scratch

Lately I’ve been using Jupyter (formerly, IPython) notebooks frequently for reproducible research, and I’ve been wondering how it all works underneath the hood. Furthermore, I’ve needed some custom functionality that IPython doesn’t include by default. Instead of extending IPython, I decided I would take a stab at building my own simple IPython kernel that runs on a remote server where my GPU farm lives. I won’t be worrying about security or concurrency, since I will be the only person with access to the server. The exercise should give you an idea about how server-based coding environments work in Python.

Since this is not a production server, Flask is perfect for our needs. Let’s start with a simple Flask server that does nothing. I’ll include some imports we will need later.

import sys
import traceback
from cStringIO import StringIO
from flask import Flask, jsonify, request

app = Flask(__name__)

if __name__ == "__main__":
    app.run()

Executing Code

There is really only one magical piece to cover here: how does Python take a string of code, execute it, then return the output? Let’s start with the novel approach.

You can execute any Python statement using the exec() command. I’m going to create a Flask endpoint that takes a POST parameter named ‘code’, splits the command by newlines, and runs each command in sequence. Here is what the code looks like.

import sys
import traceback
from cStringIO import StringIO
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/", methods=['POST'])
def kernel():
    code_lns = request.form['code'].split('\n')
    for line in code_lns: exec(line)
    return 'Success'

if __name__ == "__main__":
    app.run()

Easy enough! You already have a minimal, Python-executing server in 15 lines of code (including unused imports and correct spacing). To test this, I use the POSTMAN client to hit my local server with POST requests.

Send a POST request to http://localhost:5000/ with the POST parameter ‘code’ set to print('hello world') like the picture below and hit ‘Send’. As expected, the server reads the code, prints out ‘hello world’, then returns ‘Success’.


Redirecting Output

This isn’t very useful to us yet — although the server successfully receives and executes the code, the client only receives a “Success” message. Ideally, we would redirect the output of the executing program back to the client. To achieve this, we must capture everything written to standard out in a string buffer and return that string to the client. After some research, I determined this could be done by temporarily redirecting standard out to a StringIO buffer, like so:

@app.route("/", methods=['POST'])
def kernel():
    code_lns = request.form['code'].split('\n')

    # Keep a reference to the real stdout so we can restore it later
    old_stdout = sys.stdout

    # Redirect stdout into a string buffer
    sys.stdout = strstdout = StringIO()

    for line in code_lns:
        exec(line)

    # Restore the real stdout
    sys.stdout = old_stdout

    return strstdout.getvalue()

Looking at the output from the Postman Client, we can see that the server is now relaying back the stdout to the client as expected.
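The same trick works outside Flask. A minimal Python 3 sketch (in Python 3, StringIO lives in io rather than cStringIO):

```python
import sys
from io import StringIO

code = "print('hello world')"

# Park the real stdout, point sys.stdout at a string buffer, run the code,
# then restore stdout no matter what happened.
old_stdout = sys.stdout
sys.stdout = captured = StringIO()
try:
    exec(code)
finally:
    sys.stdout = old_stdout

print(repr(captured.getvalue()))
```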


Note: redirecting standard out in this way redirects the output for all connecting clients. Thus, if multiple people run code at exactly the same time, their outputs will overlap. Don’t do this. That’s why I noted this is not a production-ready server.

Different Environments

There is another major problem in our implementation — everything is executed in the same environment. One of the nice things about IPython is that you can work in several different notebooks at the same time, and none of the variables or functionality overlap. This concept does not exist in our design: if I’m working on two different ideas at the same time, all of the variables between the two scripts would be shared.

The problem lies in the exec() command, which I mentioned was the novel approach earlier. Remember that in Python, everything in the environment (technically a namespace in Python) is just stored as a dict in the __dict__ field (see this post for more information). We can execute code in different environments by doing something like this:

env = {}
code = compile('j = 1', '', 'exec')
exec code in env

After this code snippet has executed, env['j'] holds the value 1. Furthermore, any variable already in env can be used by the executed code. We can take advantage of this technique to run code in multiple different environments.
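The isolation is easy to verify directly. In Python 3 the `exec ... in ...` statement became a function that takes the namespace dict as an argument, but the behaviour is the same:

```python
env_a = {}
env_b = {}

# compile() tags each snippet with a (made-up) filename for tracebacks;
# passing a dict to exec() runs the code with that dict as its globals.
exec(compile("j = 1", "<env_a>", "exec"), env_a)
exec(compile("j = 99", "<env_b>", "exec"), env_b)

# Each namespace only sees its own assignments.
print(env_a["j"], env_b["j"])
```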

First, let’s introduce some boilerplate functionality for creating, deleting, and getting information about environments, stored in an environments variable (a dict mapping each environment id to its namespace dict).

environments = {}

@app.route('/env/create', methods=['POST'])
def create():
    env_id = request.form['id']
    if env_id not in environments:
        environments[env_id] = {}
    return jsonify(envs=environments.keys())

@app.route('/env/delete', methods=['POST'])
def delete():
    env_id = request.form['id']
    if env_id in environments:
        del environments[env_id]
    return jsonify(envs=environments.keys())

@app.route('/env/get', methods=['POST'])
def getenv():
    env_id = request.form['id']
    if env_id in environments:
        return jsonify(env=environments[env_id].keys())
    else:
        return jsonify(error='Environment does not exist!')

Now, if I send a POST request to http://localhost:5000/env/create with the POST parameters set to {id: 1}, the server creates a blank dictionary for the environment id and sends me back all environments that have been created. Similarly, I could delete environments or get all available information in the environment.
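Stripped of the Flask routing, the registry is just a dict of dicts. A Python 3 sketch of the same lifecycle (function names mirror the routes above):

```python
environments = {}

def create(env_id):
    # Idempotent, like the /env/create route
    environments.setdefault(env_id, {})

def delete(env_id):
    environments.pop(env_id, None)

def getenv(env_id):
    if env_id not in environments:
        return {"error": "Environment does not exist!"}
    # exec() injects __builtins__ into the namespace; hide it from the listing
    return {"env": [k for k in environments[env_id] if k != "__builtins__"]}

create("1")
exec("x = 42", environments["1"])
print(getenv("1"))
delete("1")
print(getenv("1"))
```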

Hooking this up with our code execution is pretty simple as well.

@app.route("/", methods=['POST'])
def kernel():
    env_id = request.form['id']
    if env_id not in environments:
        return jsonify(error='Kernel does not exist!')
    code_lns = request.form['code'].split('\n')
    old_stdout = sys.stdout
    sys.stdout = strstdout = StringIO()
    for line in code_lns:
        code = compile(line, '', 'exec')
        exec code in environments[env_id]
    sys.stdout = old_stdout
    return jsonify(message=strstdout.getvalue())

Note that each code statement is now executed in the environment matching the id provided.

Error Handling

There is one last, glaringly obvious bug in our code: our design fails miserably when an error occurs. If you had mistyped anything so far in the tutorial, such as sending prnt('hi') to the server, you would have received a solemn 500 error with no extra information from our server. Ideally, we would much rather receive the stack trace on the client side than a response that is so opaque!

Adding error handling to our server is as simple as catching errors and printing the stack trace to standard out. We can get the stacktrace by calling traceback.format_exc(). Since I like to make it blatantly obvious that an error has occurred, I watch for an error to occur, then send back the stacktrace under the ‘error’ key.
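Because traceback.format_exc() returns the traceback as an ordinary string, printing it while stdout is redirected routes the error text into the same buffer as normal output. A standalone Python 3 sketch:

```python
import sys
import traceback
from io import StringIO

old_stdout = sys.stdout
sys.stdout = captured = StringIO()
try:
    exec("prnt('hi')")  # deliberate typo: raises NameError
except Exception:
    # print() sends the formatted traceback into the redirected buffer
    print(traceback.format_exc())
finally:
    sys.stdout = old_stdout

print("NameError" in captured.getvalue())
```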

We can modify our kernel method slightly to get the functionality we require.

@app.route("/", methods=['POST'])
def kernel():
    error = False
    env_id = request.form['id']
    if env_id not in environments:
        return jsonify(error='Kernel does not exist!')
    code_lns = request.form['code'].split('\n')
    old_stdout = sys.stdout
    sys.stdout = strstdout = StringIO()
    for line in code_lns:
        try:
            code = compile(line, '', 'exec')
            exec code in environments[env_id]
        except:
            print(traceback.format_exc())
            error = True
    sys.stdout = old_stdout
    if error: return jsonify(error=strstdout.getvalue())
    else: return jsonify(message=strstdout.getvalue())

Final Thoughts

All in all, this code gets us a long way towards creating our own IPython-like server. Writing up a simple frontend to interact back and forth with the JSON-based server is outside the scope of what I was trying to do here, but it certainly isn’t hard.

As for the issues with concurrency and security, many of these could be resolved by the use of Docker containers, which allow sandboxing and could be spun up or broken down as clients connect. This sandboxing would also fix the standard out redirection issue.

Below is the final code. 52 lines of code for a fully functioning, elegant, session-based Python kernel is not too shabby if I do say so myself. Please let me know if you have any other ideas on how to simplify/improve the code.

import sys
import traceback
from cStringIO import StringIO
from flask import Flask, jsonify, request

app = Flask(__name__)
environments = {}

@app.route('/env/create', methods=['POST'])
def create():
    kernel_id = request.form['id']
    if kernel_id not in environments:
        environments[kernel_id] = {}
    return jsonify(envs=environments.keys())

@app.route('/env/delete', methods=['POST'])
def delete():
    kernel_id = request.form['id']
    if kernel_id in environments:
        del environments[kernel_id]
    return jsonify(envs=environments.keys())

@app.route('/env/get', methods=['POST'])
def getenv():
    kernel_id = request.form['id']
    if kernel_id in environments:
        return jsonify(env=environments[kernel_id].keys())
    else:
        return jsonify(error='Environment does not exist!')

@app.route("/", methods=['POST'])
def kernel():
    error = False
    kernel_id = request.form['id']
    if kernel_id not in environments:
        return jsonify(error='Kernel does not exist!')
    code_lns = request.form['code'].split('\n')
    old_stdout = sys.stdout
    sys.stdout = strstdout = StringIO()
    for line in code_lns:
        try:
            code = compile(line, '', 'exec')
            exec code in environments[kernel_id]
        except:
            print(traceback.format_exc())
            error = True
    sys.stdout = old_stdout
    if error: return jsonify(error=strstdout.getvalue())
    else: return jsonify(message=strstdout.getvalue())

if __name__ == "__main__":
    app.run()


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/q7hS4mTzPjM/

Original article

The iPhone Camera As A Professional Tool

Foodie magazine Bon Appétit has done something quite risky with this month’s issue. Photographers have left their cameras at their desks and used iPhones to shoot all the photos for the 43-page feature story of the magazine. This wasn’t Apple’s idea — Bon Appétit was working on a Culture issue, and the iPhone is part of the food culture now.


Original URL: http://feedproxy.google.com/~r/Techcrunch/~3/tMbY5Kva6Es/

Original article

Web Powered SMS Inbox with Service Worker: Push Notifications

Recently I have been building a web application that I can use as a fully featured SMS messaging application for a Twilio number. It has a list of all messages sent and received and can be used to send new messages and reply to existing conversations.

It’s a pretty tidy little application that hasn’t taken long to build so far, but it currently has one drawback. To check for new messages you have to open the application up and look at it. Nightmare. This is how web applications have worked for a long time, however, starting last year with Chrome and earlier this year with Firefox, this is no longer a limitation of the web. The Service Worker is the API that powers this. From MDN:

Service workers essentially act as proxy servers that sit between web applications, and the browser and network (when available). They are intended to (amongst other things) enable the creation of effective offline experiences […] . They will also allow access to push notifications and background sync APIs.

Both Chrome and Firefox now support push notifications via Service Workers and in this post we are going to add a feature to the SMS application to send a push notification whenever the connected Twilio number receives an incoming message.

The tools we will need

In order to build this feature today you will need:
  • A Twilio account and an SMS-capable Twilio number
  • Node.js and npm to run the application
  • ngrok to expose your local server to the web so Twilio can reach it

Got all that sorted? Great, let’s get the application up and running.

Running the application

First we’ll need to clone the base application from GitHub:

$ git clone -b adding-push-notifications https://github.com/philnash/sms-messages-app.git

The application will be going through further updates, so the above commands include checking out the version of the application which we will be working with in this post. If you just want the code from this post, check out the repo’s with-push-notifications branch.

Once you have the application downloaded install the dependencies:

We need to add some configuration to the app so that we can access our Twilio number. Copy the file named .env.example to .env and fill in your Twilio Account SID and Auth Token, available in your account portal, and your Twilio number that you want to use with this application.

Now start the app:

$ npm start

Load up the app in your browser; it will be available at http://localhost:3000. Send an SMS to your Twilio number, refresh the app and you’ll see the incoming message. Now we’ve dealt with that user experience, let’s add push notifications to the application.

Introducing the Service Worker

To use a Service Worker we need to install it from the front end of our application. We’ll then need to get the user’s permission to send push notifications. Once that is done we’ll write the Service Worker code to handle incoming push notifications. Finally, we’ll need to update our back end to receive webhooks from Twilio when it receives an SMS for our number and trigger the push notification.

If you want to read a bit more in depth about how the Service Worker actually works, then check out this introduction to the Service Worker on HTML5 Rocks. If you want to dive straight into the code, carry on below.

Note that to work in production, Service Workers require HTTPS to be setup on the server. In development however, they do work on localhost.

Installing the Service Worker

We need to create a couple of new files, our application’s JavaScript and the Service Worker file.

$ touch public/js/app.js public/service-worker.js

Add the app.js file to the bottom of views/layout.hbs:

  <script type="text/javascript" src="/js/material.min.js"></script>
  <script type="text/javascript" src="/js/app.js"></script>

Open up public/js/app.js and let’s install our Service Worker:

if ("serviceWorker" in navigator) {
  var swPromise = navigator.serviceWorker.register("/service-worker.js");
  swPromise.then(function(registration) {
    return registration.pushManager.subscribe({ userVisibleOnly: true });
  }).then(function(subscription) {
    console.log(subscription);
  }).catch(function(err) {
    console.log("There was a problem with the Service Worker");
    console.log(err);
  });
}
We check for the existence of the Service Worker in the navigator object and then attempt to register our script. That registration returns a Promise which resolves with a registration object. That object has a pushManager which we need to subscribe to. 

We pass one argument to the pushManager’s subscribe method. The argument is currently required and it indicates that this subscription will be used to show visible notifications to the end user. Subscribing also returns a Promise which resolves with a subscription object. We’ll just log this for now. We also finish the Promise chain to catch and log any errors that may occur during the process.

Save the file and load up the application in Firefox (this is important, we haven’t done everything we need for Chrome just yet). As the page loads you will see a permissions dialog asking whether you would like to receive notifications from this site. If you approve the dialog and check the console you will see the subscription object in the log.

When you load the page in Firefox a permissions dialog will ask you whether you would like to receive notifications from this site

Inspecting the subscription object you will find an endpoint property. This endpoint is a unique URL that refers to this browser and this application and is what you use to send the push notification to this user. We need to save this on our server so that our back end application can send the notifications when we get to implementing that part.

Storing the endpoint

Let’s build a route on the server side of our application to receive that endpoint and save it for use later. Open up routes/index.js and declare a new variable after we instantiate our Twilio client:

const client = twilio(config.accountSid, config.authToken);
let pushEndpoint;

We’re just going to save the endpoint to memory for this application as there is currently no other storage in the app and including a database is out of scope for this article. Now, underneath that, create a new route for the application that receives the endpoint and sets it to the variable that we just created.

router.post("/subscription", function(req, res, next) {
  pushEndpoint = req.body.endpoint;
  res.sendStatus(200);
});

This route just saves the endpoint and returns a 200 OK status. Let’s update our Service Worker installation script to post the endpoint to this route:

if ("serviceWorker" in navigator) {
  var swPromise = navigator.serviceWorker.register("/service-worker.js");
  swPromise.then(function(registration) {
    return registration.pushManager.subscribe({ userVisibleOnly: true });
  }).then(function(subscription) {
    return fetch("/subscription", {
      method: "POST",
      headers: {
        "Content-type": "application/x-www-form-urlencoded; charset=UTF-8"
      },
      body: "endpoint=" + encodeURI(subscription.endpoint)
    });
  }).catch(function(err) {
    console.log("There was a problem with the Service Worker");
    console.log(err);
  });
}
I’m using the new Fetch API here, which is a significantly nicer API than the old XMLHttpRequest that we’ve all come to live with. Browsers that support Service Workers also support the Fetch API, so we don’t need any more feature detection.

In this case we don’t expect to do anything with the result from fetch but as it returns a Promise our catch at the end will log any issues with it.

Now we’re delivering our push notification endpoint to our server, let’s write the Service Worker code itself.

Implementing the Service Worker

The Service Worker listens to incoming events, so we need to write handlers for the ones we care about. For this application we are going to listen for the push event, which is fired when the Service Worker receives a push notification, and the notificationclick event, which is fired when a notification is clicked.

Within public/service-worker.js the keyword self refers to the worker itself and is what we will attach the event handlers to.

Open up public/service-worker.js and paste in the following code that responds to the push event.

// public/service-worker.js
self.addEventListener("push", function(event) {
  event.waitUntil(
    self.registration.showNotification("New message")
  );
});

When the Service Worker receives a push notification this will show a very simple notification with a title of “New message”. We’re just adding a title to the notification here, but there are more options available.

The other thing to note in this example is that we pass the result of the call to showNotification to event.waitUntil. This method allows the push event to wait for asynchronous operations in its handler to complete before it is deemed over. This is important because Service Workers can be killed by the browser to conserve resources when they are not actively doing something. Ensuring the event stays active until the asynchronous activities are over will prevent that from happening whilst we try to show our notification. In this case, showNotification returns a Promise so the push event will remain active until the Promise resolves and our notification shows to the user.

Next, let’s create a simple handler for when the notification we show above is clicked on.

// public/service-worker.js
self.addEventListener("notificationclick", function(event) {
  clients.openWindow("http://localhost:3000/");
});

For this, we listen for the notificationclick event and then use the Service Worker’s Clients interface to open our application in a new browser tab. Like the notification, there’s more we can do with the clients API, but we’ll keep it simple for now.

Now that we’ve set our Service Worker up we need to actually trigger some push notifications.

Receiving webhooks and sending push notifications

We want to trigger a Service Worker push notification when our Twilio number receives an incoming text message. Twilio tells us about this incoming message by making an HTTP request to our server. This is known as a webhook. We’ll create a route on our server that can receive the webhook and then dispatch a push notification.

Let’s create the route for our webhook on our server. Open up routes/index.js and add the following code:

router.post("/webhooks/message", function(req, res, next) {
  console.log(req.body.From, req.body.Body);
  res.set('Content-Type', 'application/xml');
  res.send("<Response></Response>");
});

Here we are just writing two of the parameters we receive from Twilio in the webhook – the number that sent the message and the body of the message – to the console and then returning an empty <Response> element as XML to let Twilio know that we don’t want to do anything more with this message now.

Let’s hook up our Twilio number to this webhook route to check that it’s working. Restart your server. It will be running on localhost:3000, so we need to make that available to Twilio. This is where ngrok comes into play. Start ngrok up, tunnelling traffic through to port 3000, with the following command:

$ ngrok http 3000

Grab the URL that ngrok gives you as the public URL for your application and open up the Twilio account portal.

Your ngrok URL will be shown in the ngrok console

Edit the phone number you bought for this application and enter your ngrok URL + /webhooks/message into the Request URL field for messages.

Enter your ngrok URL and the path to the route into the Messaging Request URL field when editing your Twilio number

Now, send a message to your Twilio number. You should see the parameters appear in the console. Great, we’re receiving our incoming text messages. Now we need to trigger our push notification.

The web push module

To help us send push notifications, especially as it is currently different between Firefox and Chrome, we are going to use the web-push module that is available on npm. Install that in the application with the following command:

$ npm install web-push --save

Next, require the web-push module in our routes/index.js file.

const express = require('express');
const router = express.Router();
const twilio = require('twilio');
const values = require('object.values');
const webPush = require('web-push');

Now, in the /webhooks/message route, we can trigger a push notification. We’ll use the endpoint we saved earlier and we can also set a time limit for how long the push service will keep the notification if it can’t be sent through immediately. Update the webhook route to the following:

router.post("/webhooks/message", function(req, res, next) {
  console.log(req.body.From, req.body.Body);
  webPush.sendNotification(pushEndpoint, 120);
  res.set('Content-Type', 'application/xml');
  res.send("<Response></Response>");
});

I’ve set the timeout for the notification to 2 minutes (120 seconds) in this case, but you can choose the most appropriate for your application.

Let’s test this again. Restart your server, visit the application in Firefox and then send an SMS to your Twilio number. You should receive the push notification and see the notification on screen.

When you send an SMS message the notification will trigger on your desktop

Even better, close the tab with the application loaded and send another text message.

You can even send the SMS message and receive the notification when the site isn't open

Woohoo, push notifications are working… in Firefox.

Push notifications for Chrome

As Firefox only recently launched support for push notifications, they were able to conform closely to the W3C Push API spec. When support in Chrome was released the spec wasn’t as mature. So right now, Chrome uses Google Cloud Messaging to send notifications, the same service that Android developers use to send notifications to their mobile apps. Thankfully the Web Push module covers most of the difference, we just need to add a couple of things.

To add support for Chrome to our application we need to create ourselves a project in the Google Developer Console. You can call the project whatever you want, but take note of the project number that is generated.

Once you create your project in the Google Developer Console, the number is listed next to the name of your project

Once you have created the project, click through to “Enable and manage APIs”, find the Google Cloud Messaging service and enable it. Once that is enabled, click “Go to credentials” and fill in the fields with “Google Cloud Messaging” and “Web server” and submit. Then name the key and generate it.

Generate an API key by selecting 'Web Server' then filling in a name for the key

Now you have your API key and project number, head back to the code. We need to provide the project number to the browser and the API key to our server. We do that by adding a web app manifest to our front end and by configuring the Web Push module with the API key on the server.

Web App Manifest

A Web App Manifest is a JSON file that gives metadata about a web application to a browser or operating system to make the installable web application experience better. We are going to use a very minimal app manifest in order to get our push notifications working, so create the manifest file in the public directory:

$ touch public/manifest.json

And fill the manifest file with a few details:

{
  "name": "SMS Messages App",
  "author": {
    "name": "YOUR_NAME",
    "url": "YOUR_URL"
  },
  "gcm_sender_id": "YOUR_PROJECT_NUMBER"
}

Note, this is where you need to fill in your project number from the Google Developer Console.

Now we need to make our application aware of the manifest. Open up views/layout.hbs and add the following tag to the head of the layout:

  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="manifest" href="/manifest.json">

That’s the front end sorted, now to the server. Open up your .env file and add one line with your API key:

GCM_API_KEY=YOUR_GOOGLE_API_KEY

Finally, open up routes/index.js and set the API key after you require the Web Push module.

const webPush = require('web-push');
webPush.setGCMAPIKey(process.env.GCM_API_KEY);

Restart the application, load up localhost:3000 in Chrome, start sending text messages and watch the notifications arrive!

The notifications also appear at the top right of the screen, as in Firefox. All of the code up to this point is available in the with-push-notifications branch on the GitHub repo.

There’s lots more we could do with Service Workers now. How about:

  • Implement browser push notifications for IP Messaging or TaskRouter
  • Show information about the incoming SMS in the notification
  • Use the Service Worker to make this application work offline too

If you’re excited about what the Service Worker brings to the web then I’d love to hear about it. Hit me up on Twitter at @philnash or drop me an email at philnash@twilio.com.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/vU06A1-P8IQ/web-powered-sms-inbox-with-service-worker-push-notifications.html

Original article
