Quantum Weirdness Now a Matter of Time

In November, construction workers at the Massachusetts Institute of Technology came across a time capsule 942 years too soon. Buried in 1957 and intended for 2957, the capsule was a glass cylinder filled with inert gas to preserve its contents; it was even laced with carbon-14 so that future researchers could confirm the year of burial, the way they would date a fossil. MIT administrators plan to repair, reseal and rebury it. But is it possible to make absolutely certain that a message to the future won’t be read before its time?

Quantum physics offers a way. In 2012, Jay Olson and Timothy Ralph, both physicists at the University of Queensland in Australia, laid out a procedure to encrypt data so that it can be decrypted only at a specific moment in the future. Their scheme exploits quantum entanglement, a phenomenon in which particles or points in a field, such as the electromagnetic field, shed their separate identities and assume a shared existence, their properties becoming correlated with one another’s. Normally physicists think of these correlations as spanning space, linking far-flung locations in a phenomenon that Albert Einstein famously described as “spooky action at a distance.” But a growing body of research is investigating how these correlations can span time as well. What happens now can be correlated with what happens later, in ways that elude a simple mechanistic explanation. In effect, you can have spooky action at a delay.

These correlations seriously mess with our intuitions about time and space. Not only can two events be correlated, linking the earlier one to the later one, but two events can become correlated such that it becomes impossible to say which is earlier and which is later. Each of these events is the cause of the other, as if each were the first to occur. (Even a single observer can encounter this causal ambiguity, so it’s distinct from the temporal reversals that can happen when two observers move at different velocities, as described in Einstein’s special theory of relativity.)

The time-capsule idea is only one demonstration of the potential power of these temporal correlations. They might also boost the speed of quantum computers and strengthen quantum cryptography.

But perhaps most important, researchers hope that the work will open up a new way to unify quantum theory with Einstein’s general theory of relativity, which describes the structure of space-time. The world we experience in daily life, in which events occur in an order determined by their locations in space and time, is just a subset of the possibilities that quantum physics allows. “If you have space-time, you have a well-defined causal order,” said Časlav Brukner, a physicist at the University of Vienna who studies quantum information. But “if you don’t have a well-defined causal order,” he said — as is the case in experiments he has proposed — then “you don’t have space-time.” Some physicists take this as evidence for a profoundly nonintuitive worldview, in which quantum correlations are more fundamental than space-time, and space-time itself is somehow built up from correlations among events, in what might be called quantum relationalism. The argument updates Gottfried Leibniz and Ernst Mach’s idea that space-time might not be a God-given backdrop to the world, but instead might derive from the material contents of the universe.

How Time Entanglement Works

To understand entanglement in time, it helps to first understand entanglement in space, as the two are closely related. In the spatial version of a classic entanglement experiment, two particles, such as photons, are prepared in a shared quantum state, then sent flying in different directions. An observer, Alice, measures the polarization of one photon, and her partner, Bob, measures the other. Alice might measure polarization along the horizontal axis while Bob looks along a diagonal. Or she might choose the vertical angle and he might measure an oblique one. The permutations are endless.

The outcomes of these measurements will match, and what’s weird is that they match even when Alice and Bob vary their choice of measurement — as though Alice’s particle knew what happened to Bob’s, and vice versa. This is true even when nothing connects the particles — no force, wave or carrier pigeon. The correlation appears to violate “locality,” the rule that states that effects have causes, and chains of cause and effect must be unbroken in space and time.

In the temporal case, though, the mystery is subtler, involving just a single polarized photon. Alice measures it, and then Bob remeasures it. Distance in space is replaced by an interval of time. The probability of their seeing the same outcome varies with the angle between the polarizers; in fact, it varies in just the same way as in the spatial case. On one level, this does not seem to be strange. Of course what we do first affects what happens next. Of course a particle can communicate with its future self.
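The cos² dependence is easy to reproduce in a toy Monte Carlo sketch (function names are my own): model each measurement with Malus’s law, collapsing the photon onto the analyzer axis it just passed or failed. At this level nothing mysterious is needed; a simple collapse model already matches the temporal statistics.

```python
import math, random

def measure(pol_angle, analyzer_angle):
    """Malus-law measurement: a photon polarized at pol_angle meets an
    analyzer at analyzer_angle; returns (outcome, new polarization)."""
    p_pass = math.cos(analyzer_angle - pol_angle) ** 2
    if random.random() < p_pass:
        return 1, analyzer_angle                # passed: repolarized along the analyzer
    return 0, analyzer_angle + math.pi / 2      # blocked: repolarized orthogonally

def same_outcome_rate(angle_a, angle_b, trials=100_000):
    """Alice measures at angle_a, then Bob remeasures the same photon at angle_b."""
    same = 0
    for _ in range(trials):
        pol = random.uniform(0, math.pi)        # photon with random initial polarization
        out_a, pol = measure(pol, angle_a)
        out_b, _ = measure(pol, angle_b)
        same += out_a == out_b
    return same / trials

# Agreement falls off as cos^2 of the angle between the two analyzers,
# just as in the spatial two-photon experiment.
for deg in (0, 30, 60, 90):
    theta = math.radians(deg)
    print(deg, round(same_outcome_rate(0.0, theta), 3), round(math.cos(theta) ** 2, 3))
```

The third column is the analytic cos² value, which the simulated rate tracks closely.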

The strangeness comes through in an experiment conceived by Robert Spekkens, a physicist who studies the foundations of quantum mechanics at the Perimeter Institute for Theoretical Physics in Waterloo, Canada. Spekkens and his colleagues carried out the experiment in 2009. Alice prepares a photon in one of four possible ways. Classically, we could think of these four ways as two bits of information. Bob then measures the particle in one of two possible ways. If he chooses to measure the particle in the first way, he obtains Alice’s first bit of information; if he chooses the second, he obtains her second bit. (Technically, he does not get either bit with certainty, just with a high degree of probability.) The obvious explanation for this result would be if the photon stores both bits and releases one based on Bob’s choice. But if that were the case, you’d expect Bob to be able to obtain information about both bits — to measure both of them or at least some characteristic of both, such as whether they are the same or different. But he can’t. No experiment, even in principle, can get at both bits — a restriction known as the Holevo bound. “Quantum systems seem to have more memory, but you can’t actually access it,” said Costantino Budroni, a physicist at the University of Siegen in Germany.
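A standard construction with exactly this flavor is the “2→1 quantum random access code”; the sketch below is a guess at the protocol’s shape, not Spekkens’s exact setup. Two bits are encoded as one of four polarization angles, and Bob’s choice of analyzer recovers either bit with probability cos²(π/8) ≈ 0.85, but no measurement recovers both.

```python
import math, random

# Four preparations encode two bits as one polarization angle (in degrees).
ANGLES = {(0, 0): 22.5, (1, 0): 67.5, (1, 1): 112.5, (0, 1): 157.5}

def bob_reads(bits, which):
    """Bob picks an analyzer: 0 degrees to target bit 0, 45 degrees for bit 1."""
    theta = math.radians(ANGLES[bits])
    analyzer = 0.0 if which == 0 else math.radians(45)
    p_zero = math.cos(theta - analyzer) ** 2    # probability the outcome reads 0
    return 0 if random.random() < p_zero else 1

def success_rate(which, trials=100_000):
    hits = 0
    for _ in range(trials):
        bits = (random.randint(0, 1), random.randint(0, 1))
        hits += bob_reads(bits, which) == bits[which]
    return hits / trials

# Either bit is readable with probability cos^2(pi/8) ~ 0.85, yet each
# analyzer choice destroys the information about the other bit.
print(round(success_rate(0), 3), round(success_rate(1), 3))
```

Raising the success probability of one bit toward certainty lowers the other; that trade-off is the Holevo bound at work.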

The photon really does seem to hold just one bit, and it is as if Bob’s choice of measurement retroactively decides which it is. Perhaps that really is what happens, but this is tantamount to time travel — on an oddly limited basis, involving the ability to determine the nature of the bit but denying any glimpse of the future.

Another example of temporal entanglement comes from a team led by Stephen Brierley, a mathematical physicist at the University of Cambridge. In a paper last year, Brierley and his collaborators explored the bizarre intersection of entanglement, information and time. If Alice and Bob choose from just two polarizer orientations, the correlations they see are readily explained by a particle carrying a single bit. But if they choose among eight possible directions and they measure and remeasure the particle 16 times, they see correlations that a single bit of memory can’t explain. “What we have proven rigorously is that, if you propagate in time the number of bits that corresponds to this Holevo bound, then you definitely cannot explain what quantum mechanics predicts,” said Tomasz Paterek, a physicist at Nanyang Technological University in Singapore, and one of Brierley’s co-authors. In short, what Alice does to the particle at the beginning of the experiment is correlated with what Bob sees at the end in a way that’s too strong to be easily explained. You might call this “supermemory,” except that the category of “memory” doesn’t seem to capture what’s going on.

What exactly is it about quantum physics that goes beyond classical physics to endow particles with this supermemory? Researchers have differing opinions. Some say the key is that quantum measurements inevitably disturb a particle. A disturbance, by definition, is something that affects later measurements. In this case, the disturbance leads to the predicted correlation.

In 2009 Michael Goggin, a physicist who was then at the University of Queensland, and his colleagues did an experiment to get at this issue. They used the trick of spatially entangling a particle with another of its kind and measuring that stand-in particle rather than the original. The measurement of the stand-in still disrupts the original particle (because the two are entangled), but researchers can control the amount that the original is disrupted by varying the degree of entanglement. The trade-off is that the experimenter’s knowledge of the original becomes less reliable, but the researchers compensate by testing multiple pairs of particles and aggregating the results in a special way. Goggin and his team reduced the disruption to the point where the original particle was hardly disturbed at all. Measurements at different times were still closely correlated. In fact, they were even more closely correlated than when the measurements disturbed the particle the most. So the question of a particle’s supermemory remains a mystery. For now, if you ask why quantum particles produce the strong temporal correlations, physicists basically will answer: “Because.”

Quantum Time Capsules

Things get more interesting still — offering the potential for quantum time capsules and other fun stuff — when we move to quantum field theory, a more advanced version of quantum mechanics that describes the electromagnetic field and other fields of nature. A field is a highly entangled system. Different parts of it are mutually correlated: A random fluctuation of the field in one place will be matched by a random fluctuation in another. (“Parts” here refers both to regions of space and to spans of time.)

Even a perfect vacuum, which is defined as the absence of particles, will still have quantum fields. And these fields are always vibrating. Space looks empty because the vibrations cancel each other out. And to do this, they must be entangled. The cancellation requires the full set of vibrations; a subset won’t necessarily cancel out. But a subset is all you ever see.

If an idealized detector just sits in a vacuum, it will not detect particles. However, any practical detector has a limited range. The field will appear imbalanced to it, and it will detect particles in a vacuum, clicking away like a Geiger counter in a uranium mine. In 1976 Bill Unruh, a theoretical physicist at the University of British Columbia, showed that the detection rate goes up if the detector is accelerating, since the detector loses sensitivity to the regions of space it is moving away from. Accelerate it very strongly and it will click like mad, and the particles it sees will be entangled with particles that remain beyond its view.

In 2011 Olson and Ralph showed that much the same thing happens if the detector can be made to accelerate through time. They described a detector that is sensitive to photons of a single frequency at any one time. The detector sweeps through frequencies like a police radio scanner, moving from lower to higher frequencies (or the other way around). If it sweeps at a quickening pace, it will scan right off the end of the radio dial and cease to function altogether. Because the detector works for only a limited period of time, it lacks sensitivity to the full range of field vibrations, creating the same imbalances that Unruh predicted. Only now, the particles it picks up will be entangled with particles in a hidden region of time — namely, the future.

Olson and Ralph suggest constructing the detector from a loop of superconducting material. Tuned to pick up near-infrared light and completing a scan in a few femtoseconds (10⁻¹⁵ seconds), the loop would see the vacuum glowing like a gas at room temperature. No feasible detector accelerating through space could achieve that, so Olson and Ralph’s experiment would be an important test of quantum field theory. It could also vindicate Stephen Hawking’s ideas about black-hole evaporation, which involve the same basic physics.

If you build two such detectors, one that accelerates and one that decelerates at the same rate, then the particles seen by one detector will be correlated with the particles seen by the other. The first detector might pick up a string of stray particles at random intervals. Minutes or years later, the second detector will pick up another string of stray particles at the same intervals — a spooky recurrence of events. “If you just look at them individually, then they’re randomly clicking, but if you get a click in one, then you know that there’s going to be a click in the other one if you look at a particular time,” Ralph said.

These temporal correlations are the ingredients for that quantum time capsule. The original idea for such a contraption goes back to James Franson, a physicist at the University of Maryland, Baltimore County. (Franson used spacelike correlations; Olson and Ralph say temporal correlations may make it easier.) You write your message, encode each bit in a photon, and use one of your special detectors to measure those photons along with the background field, thus effectively encrypting your bits. You then store the outcome in the capsule and bury it.

At the designated future time, your descendants measure the field with the paired detector. The two outcomes, together, will reconstitute the original information. “The state is disembodied for the time between [the two measurements], but is encoded somehow in these correlations in the vacuum,” Ralph said. Because your descendants must wait for the second detector to be triggered, there’s no way to unscramble the message before its time.

The same basic procedure would let you generate entangled particles for use in computation and cryptography. “You could do quantum key distribution without actually sending any quantum signal,” Ralph said. “The idea is that you just use the correlations that are already there in the vacuum.”

The Nature of Space-Time

These temporal correlations are also challenging physicists’ assumptions about the nature of space-time. Whenever two events are correlated and it’s not a fluke, there are two explanations: One event causes the other, or some third factor causes both. A background assumption to this logic is that events occur in a given order, dictated by their locations in space and time. Since quantum correlations — certainly the spatial kind, possibly the temporal — are too strong to be explained using one of these two explanations, physicists are revisiting their assumptions. “We cannot really explain these correlations,” said Ämin Baumeler, a physicist at the University of Italian Switzerland in Lugano, Switzerland. “There’s no mechanism for how these correlations appear. So, they don’t really fit into our notion of space-time.”

Building on an idea by Lucien Hardy, a theoretical physicist at the Perimeter Institute, Brukner and his colleagues have studied how events might be related to one another without presupposing the existence of space-time. If the setup of one event depends on the outcome of another, you deduce that it occurs later; if the events are completely independent, they must occur far apart in space and time. Such an approach puts spatial and temporal correlations on an equal footing. And it also allows for correlations that are neither spatial nor temporal — meaning that the experiments don’t all fit together consistently and there’s no way to situate them within space and time.

Brukner’s group devised a strange thought experiment that illustrates the idea. Alice and Bob each toss a coin. Each writes the result of his or her own toss on a piece of paper, along with a guess at the other person’s outcome, then sends the paper to the other. They do this a number of times and see how well they do.

Normally the rules of the game are set up so that Alice and Bob do this in a certain sequence. Suppose Alice is first. She can only guess at Bob’s outcome (which has yet to occur), but she can send her own result to Bob. Alice’s guess as to Bob’s flip will be right 50 percent of the time, but he will always get hers right. In the next round, Bob goes first, and the roles are reversed. Overall the success rate will be 75 percent. But if you don’t presume they do this in a certain sequence, and if they replace the sheet of paper with a quantum particle, they can succeed 85 percent of the time.
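The classical 75 percent figure is easy to verify by simulation; the quantum 85 percent corresponds to the value (2 + √2)/4 ≈ 0.854 from the causal-inequality literature. A sketch (rounds where Bob goes first mirror these by symmetry, so the overall rate is the same):

```python
import math, random

def round_with_fixed_order():
    """Alice goes first: she can only guess Bob's coin, but Bob sees her sheet."""
    a, b = random.randint(0, 1), random.randint(0, 1)
    alice_guess = random.randint(0, 1)   # a blind 50/50 guess at Bob's flip
    bob_guess = a                        # Bob has already received Alice's result
    return (alice_guess == b) + (bob_guess == a)   # correct guesses this round

trials = 100_000
score = sum(round_with_fixed_order() for _ in range(trials)) / (2 * trials)
print(f"best causally ordered strategy: ~{score:.3f}")   # ~0.75

# Without a definite causal order, a quantum strategy reaches (2 + sqrt(2)) / 4.
print(f"quantum value: {(2 + math.sqrt(2)) / 4:.3f}")    # 0.854
```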

If you try to situate this experiment within space and time, you’ll be forced to conclude that it involves a limited degree of time travel, so that the person who goes second can communicate his or her result backward in time to the one who goes first. (The Time Patrol will be relieved that no logical paradoxes can arise: No event can become its own cause.)

Brukner and his colleagues at Vienna have performed a real-world experiment that is similar to this. In the experiment, Alice-and-Bob manipulations were carried out by two optical filters. The researchers beamed a stream of photons at a partially silvered mirror, so that half the photons took one path and half another. (It was impossible to tell, without measuring, which path each individual photon went down; in a sense, it took both paths at once.) On the first path, the photons passed through Alice’s filter first, followed by Bob’s. On the second path, the photons navigated them in reverse order. The experiment took quantum indeterminacy to a whole new level. Not only did the particles not possess definite properties in advance of measurement, the operations performed on them were not even conducted in a definite sequence.

On a practical level, the experiment opens up new possibilities for quantum computers. The filters corresponding to Alice and Bob represent two different mathematical operations, and the apparatus was able to ascertain in a single step whether the order of those operations matters — whether A followed by B is the same as B followed by A. Normally you’d need two steps to do that, so the procedure is a significant speedup. Quantum computers are sometimes described as performing a series of operations on all possible data at once, but they might also be able to perform all possible operations at once.
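To make “the order of operations matters” concrete, here is a minimal sketch using two standard single-qubit operations, the bit flip and the phase flip, written as 2×2 matrices. A classical check must evaluate both orderings and compare; the superposed-order apparatus gets the answer in a single step.

```python
# Bit flip (X) and phase flip (Z) as 2x2 matrices acting on a qubit state.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

# The classical test computes both products, one per ordering:
print(matmul(X, Z))                      # [[0, -1], [1, 0]]
print(matmul(Z, X))                      # [[0, 1], [-1, 0]]
print(matmul(X, Z) == matmul(Z, X))      # False: the order matters
```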

Now imagine taking this experiment a step further. In Brukner’s original experiment, the path of each individual photon is placed into a “superposition” — the photon goes down a quantum combination of the Alice-first path and the Bob-first path. There is no definite answer to the question, “Which filter did the photon go through first?”— until a measurement is carried out and the ambiguity is resolved. If, instead of a photon, a gravitating object could be put into such a temporal superposition, the apparatus would put space-time itself into a superposition. In such a case, the sequence of Alice and Bob would remain ambiguous. Cause and effect would blur together, and you would be unable to give a step-by-step account of what happened.

Only when these indeterminate causal relations between events are pruned away — so that nature realizes only some of the possibilities available to it — do space and time become meaningful. Quantum correlations come first, space-time later. Exactly how does space-time emerge out of the quantum world? Brukner said he is still unsure. As with the time capsule, the answer will come only when the time is right.

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/CkU6MCMlY4o/  

Original article

Build a Dashboard that Displays Any Data You Can Imagine with a Raspberry Pi

Whether it’s keeping track of your computer’s memory usage or displaying information about the weather, dashboards are a fun way to track information. Adafruit shows off a way to build one using a Raspberry Pi that can track all kinds of data.

Read more…

Original URL: http://feeds.gawker.com/~r/lifehacker/full/~3/syFogOSBsIo/build-a-dashboard-that-displays-any-data-you-can-imagin-1754335467  

Original article

Posting successful SSH logins to Slack

January 21, 2016 · slack

I use Slack for many things and it’s great to see how many integrations are available out of the box. But building integrations yourself is extremely easy using Incoming Web Hooks.

Wouldn’t it be nice if you could see a message in Slack each time a user connects to one of your machines over SSH? Yes it would!

Slack Setup

So first you would need to configure an Incoming Web Hook in Slack:


Configuring this will give you a Webhook URL to which you can post your messages.

Machine Setup

Now connect to your machine and create a script in your ssh folder:

sudo nano /etc/ssh/notify.sh  

Add the following code to the script which we’ll configure to run each time a user signs in:

#!/bin/sh
# $url must hold your Webhook URL and $channel the target channel.
if [ "$PAM_TYPE" != "close_session" ]; then
    host=$(hostname)
    content="\"attachments\": [ { \"mrkdwn_in\": [\"text\", \"fallback\"], \"fallback\": \"SSH login: $PAM_USER connected to \`$host\`\", \"text\": \"SSH login to \`$host\`\", \"fields\": [ { \"title\": \"User\", \"value\": \"$PAM_USER\", \"short\": true }, { \"title\": \"IP Address\", \"value\": \"$PAM_RHOST\", \"short\": true } ], \"color\": \"#F35A00\" } ]"
    curl -X POST --data-urlencode "payload={\"channel\": \"$channel\", \"mrkdwn\": true, \"username\": \"ssh-bot\", $content, \"icon_emoji\": \":computer:\"}" "$url"
fi

Now make the script executable:

sudo chmod +x /etc/ssh/notify.sh  

Finally add the following line to /etc/pam.d/sshd:

session optional pam_exec.so seteuid /etc/ssh/notify.sh  
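Quoting nested JSON inside a shell string is fragile. As a sketch of an alternative, the same notifier could be written in Python and let `json.dumps` handle the quoting (the webhook URL and channel below are placeholders to replace; Slack’s incoming webhooks also accept a plain JSON POST body):

```python
#!/usr/bin/env python3
"""Hypothetical Python variant of the notifier, called by pam_exec the
same way; PAM passes its context through environment variables."""
import json, os, socket, urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/REPLACE/ME"  # placeholder
CHANNEL = "#ops"                                             # placeholder

def build_payload(user, rhost, host):
    return json.dumps({
        "channel": CHANNEL,
        "username": "ssh-bot",
        "icon_emoji": ":computer:",
        "mrkdwn": True,
        "attachments": [{
            "mrkdwn_in": ["text", "fallback"],
            "fallback": f"SSH login: {user} connected to {host}",
            "text": f"SSH login to `{host}`",
            "fields": [
                {"title": "User", "value": user, "short": True},
                {"title": "IP Address", "value": rhost, "short": True},
            ],
            "color": "#F35A00",
        }],
    })

if __name__ == "__main__":
    pam_type = os.environ.get("PAM_TYPE")
    if pam_type and pam_type != "close_session":   # only on session open
        body = build_payload(os.environ.get("PAM_USER", "?"),
                             os.environ.get("PAM_RHOST", "?"),
                             socket.gethostname()).encode()
        req = urllib.request.Request(WEBHOOK_URL, data=body,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)
```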


Well that’s it. That was easy!

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/9wgrtv8MkS4/  

Original article

Free the Law: all U.S. case law online

Our common law – the written decisions issued by our state and federal courts – is not freely accessible online. This lack of access harms justice and equality and stifles innovation in legal services.

The Harvard Law School Library has one of the world’s largest, most comprehensive collections of court decisions in print form. Our collection totals over 42,000 volumes and roughly 40 million pages. The Free The Law project aims to transform the official print versions of these court decisions into digital files made freely accessible online.

To realize this ambitious vision, we’re teaming up with Ravel Law, an innovative legal research and analytics company. Ravel is funding the costs of digitization and will be making all of the resulting cases publicly available for free search and API access. You can learn more about the key terms of our collaboration with Ravel by reading a detailed overview here.

Free The Law is possible only because of the dedicated work of a long, distinguished line of librarians and other staff members over the last 200 years, who expertly collected and preserved the print volumes now available for digitization. The project continues to rely heavily on huge contributions from many at the Law School Library, the Law School and from across the University.

We also express our deepest appreciation for the brilliant advice and extraordinary efforts of Jeffrey P. Cunard, Maxine Sharavsky and their colleagues Michael Gillespie, Sarah A.W. Fitts and Robert Williams, Jr. at Debevoise & Plimpton, Henry B. Gutman and colleagues at Simpson Thacher & Bartlett LLP, and Jonathan H. Hulbert and his fellow members of the Office of the General Counsel.

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/ex-IaI_oiHI/free-the-law  

Original article

The Three Cultures of Machine Learning

by Jason Eisner (2015)

Everyone should read Leo Breiman’s 2001 article, Statistical Modeling: The Two Cultures. (Summary: Traditional statisticians start with a distribution. They try to identify the parameters of the distribution from data that were actually generated from it. Applied statisticians start with data. They have no idea where their data really came from and are happy to fit any model that makes good predictions.)
I think there are currently three cultures of machine learning. Different people or projects will fall in different places on this “ML simplex” depending on what they care about most. In the simplex diagram, they start with something in green and attempt to get blue as a way of achieving red.

  • At the top of the triangle, we have exuberantly rationalist
    approaches for when we think we know something about the data (e.g., a
    generative story). These “scientific” approaches are not exclusively
    Bayesian, but Bayesian ML practitioners cluster up here.

  • At the right vertex, we have Breiman’s know-nothing
    approach—high-capacity models like neural nets, decision
    forests, and nonparametrics that will fit anything given enough data.
    This is engineering with less science. Deep learning people cluster here.

  • Estimators for both of the above approaches usually have to
    solve intractable optimization problems. Thus, they fall back on
    approximations and get stuck in local maxima, and you don’t
    really know what you’re getting.

    But in simple settings, the errors of both approaches can be
    analyzed. This gratifies the people at the left vertex.
    Frequentist statisticians and COLT folks (computational learning
    theorists) cluster around that vertex; they try to bound the error.
    For my take on the different priorities of frequentists
    and Bayesians,
    see here.
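A toy contrast between the first two cultures, under the (illustrative) assumption that the data really are Gaussian: the “scientific” camp posits a distribution and estimates its two parameters, while the know-nothing camp fits a one-parameter-per-bin histogram that would match any shape given enough data.

```python
import random, statistics

random.seed(0)
data = [random.gauss(5.0, 2.0) for _ in range(10_000)]  # the fitter pretends not to know the source

# Culture 1: posit a parametric family and estimate its two parameters.
mu_hat, sigma_hat = statistics.fmean(data), statistics.pstdev(data)

# Culture 2: a know-nothing histogram density, one parameter per bin,
# flexible enough to fit any shape given enough data.
bins, lo, hi = 50, min(data), max(data)
width = (hi - lo) / bins
counts = [0] * bins
for x in data:
    counts[min(int((x - lo) / width), bins - 1)] += 1
density = [c / (len(data) * width) for c in counts]

print(round(mu_hat, 2), round(sigma_hat, 2))  # close to the true (5.0, 2.0)
```

Two numbers versus fifty: when the assumed family is right, the parametric fit wins on economy; when it is wrong, only the flexible model keeps up.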

Finding an ML attack on an applied problem usually involves combining elements of multiple traditions. It also involves using various computational tricks (MCMC, variational approximations, convex relaxations, optimization algorithms, etc.) to try to handle the maximizations and integrations that are needed for learning and inference.
(I suppose the drawing is mostly about prediction. It omits
reinforcement learning and causal learning. But such problems
involve prediction, so the same competing priorities guide how
practitioners approach problems.)

Update: Mark Tygert alerted me that Efron gave a similar simplex diagram — Fig. 8, “A barycentric picture of modern statistical research” — whose corners were the Bayesian, frequentist and Fisherian traditions.
This page online: http://cs.jhu.edu/~jason/tutorials/ml-simplex

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/MZpeNiGca1s/ml-simplex.html  

Original article

DIY A2J 2: Work With Others and Others’ Work

In most urban centres, you can’t swing a stick without hitting a social service or social service connected agency. Most of these agencies are glad to have any legal materials they can get their hands on, and most are willing to share the materials they have. Most importantly, each of these agencies serves a specific target population with specific legal needs.

Groups like SUCCESS Settlement Services in British Columbia, for example, help newcomers to Canada overcome language and cultural barriers; groups like the Atira Women’s Resource Centre help women dealing with abuse through advocacy and education. Various other social service agencies have been set up to address the needs and interests of Canada’s first nations, the LGBTTQ community, the elderly, people coping with mental illness, the homeless and the addicted, and so on. Many of these groups also have in-house legal advocacy centres, and sometimes advocates who provide legal information and assist clients with common legal tasks. Those that don’t have something as organized as this will at least have a handful of pamphlets, information sheets, printouts and web links they routinely hand out.

Although there’s rarely any shortage of pamphlets and fact sheets, the reality is that there is always some specific subject or area of the law that has yet to be addressed, or could be addressed in more detail. Whether that’s the case or not, legal materials do not age well and inevitably need refreshing from time to time as the law changes.

A useful and easy way to promote access to justice is to connect with a few of the social service agencies in your neighbourhood, find out where the holes are in their library of legal resources, and fill them. Most agencies will be keenly aware of where the gaps are, but if that’s not the case you could arrange to visit their offices and root through their brochure rack to identify any stale materials that could use updating. They’ll also welcome your interest and will be happy to work with you.

What’s fun and rewarding about this sort of work is not just the opportunity to connect with a group providing important community service, but the chance to think and write about the law in a way that addresses the unique legal needs and realities of each group’s target population. Here, for example, is a screen capture of an information sheet for parents and parents-to-be that I wrote for the BC Council for Families, directed specifically toward the LGBTTQ community:


Other work of mine has focussed on family law for youth with children (for the BC Council for Families), abused women (for the BC Society of Transition Houses), parents living in poverty (for the Salvation Army’s defunct pro bono program), people in polyamorous relationships (for the Canadian Polyamory Advocacy Association), recent immigrants (for SUCCESS Settlement Services), grandparents caring for grandchildren (for the Parent Support Services Society of BC) and other populations.

Working with community media and larger social service groups is another way to enhance access to justice. Organizations such as these generally have a broader reach and better funding, and the work you do often goes much further. Here, for example, are screen captures of an article I wrote for the online magazine LawNow and of the front cover of a booklet I wrote for the People’s Law School:



LawNow is an Edmonton-based magazine, published by the fantastic Centre for Public Legal Education Alberta, that talks about how law relates to everyday life and is aimed, at least partially, at public school teachers and youth. The People’s Law School is a public legal education organization in Vancouver that has special experience working with other community groups to create information on the law, offers its publications in multiple languages and formats, and allows bulk orders of its print material. (It’s been my experience that social service agencies like these are more than happy to share whatever resources they have, often at no cost to the recipient save for postage and copying.) Both organizations are great to work with.

If you decide to tackle this sort of project, here are some tips and suggestions.

1. Don’t reinvent the wheel. Sometimes it’s enough just to update existing materials; the group you’re working with may even have the original document in an editable format.

2. Always use plain language, and be sensitive to the fluency of your audience. Aim for a reading level the group’s target population will be comfortable with.

3. Be neutral, but be alive to and respectful of the social and political perspective of the group you are working with.

4. Always explain that the information you are providing is general information and not a substitute for proper legal advice. It is important to protect yourself and the group you are working with from liability.

5. Encourage the translation and sharing of your work, but be wary of accepting responsibility for the accuracy of a translated document unless you can verify the translation. 

On this last point, my preference has been to ask not to be identified as the author of translated material; a statement to the effect that the translated material is based on my original will usually do. I’ve made exceptions to this general rule where the organization I’m working with is clearly assuming ownership of the publication and can pay for professional translation. For example, this booklet, which Nate Prosser and I wrote for the Legal Services Society


… has been translated into French, Chinese (simplified and traditional), Punjabi and Spanish. Here are the Punjabi and Spanish covers:



Given the size, funding and outstanding professionalism of LSS, I have few concerns about the likely accuracy of the translations it obtained.

Finally, my personal practice has always been to avoid using legal materials like these to promote myself or my firm. First, social service agencies are unlikely to be terribly enthused about working with you on a marketing tool, and not unreasonably so. Second, legal materials have a great deal more public credibility when they’re not seen as vehicles for rank self-promotion. Third, your firm may not wish to be seen as adopting one particular cause or affiliation over another.

Original URL: http://www.slaw.ca/2016/01/22/diy-a2j-2-work-with-others-and-others-work/  


Trello clone with Phoenix and React (5 part tutorial)

This post belongs to the Trello clone with Phoenix Framework and React series.

  1. Intro and selected stack
  2. Phoenix Framework project setup
  3. The User model and JWT auth
  4. Front-end for sign up with React and Redux
  5. Database seeding and sign in controller
  6. Coming soon

Trello is one of my favorite web applications of all time. I’ve been using
it since its very beginning, and I love the way it works, its simplicity and
flexibility. Every time I start learning a new technology, I like
to create a realistic application where I can put everything I’m learning into
practice against possible real-life problems and work out how to solve them.
So when I started learning Elixir and its Phoenix
framework, it was clear to me: I had to put into practice all the awesome stuff I was
learning and share it as a tutorial on how to code a simple, but functional,
tribute to Trello.

Basically, we are going to code a single-page application where existing users
will be able to sign in, create boards, share them with other existing
users, and add lists and cards to them. While viewing a board, connected users will
be displayed, and any modification will be automatically reflected in every
connected user’s browser in real time, Trello style.
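This real-time behavior is, at its core, a publish/subscribe pattern: each client viewing a board joins that board’s channel, and every modification is broadcast to all members. Here is a rough sketch in plain JavaScript; the class and event names are purely illustrative, not Phoenix’s actual channel API, which the series covers later:

```javascript
// Minimal in-memory pub/sub sketch of the board-update flow.
// Names (BoardChannel, "card:added") are illustrative only.
class BoardChannel {
  constructor() {
    this.subscribers = []; // one callback per connected client
  }
  join(callback) {
    this.subscribers.push(callback);
  }
  broadcast(event) {
    // Every connected client receives the same modification.
    this.subscribers.forEach((cb) => cb(event));
  }
}

const channel = new BoardChannel();
const seenByAlice = [];
const seenByBob = [];
channel.join((e) => seenByAlice.push(e));
channel.join((e) => seenByBob.push(e));
channel.broadcast({ type: "card:added", card: "Write part 2" });
```

Phoenix channels implement this same pattern server-side over websockets, with each board getting its own topic.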

The current stack

Phoenix manages static assets with npm and builds them using Brunch or
Webpack out of the box, so it’s pretty simple to truly separate the
front-end and the back-end while keeping them in the same codebase. For the back-end
we are going to use:

  • Elixir.
  • Phoenix framework.
  • Ecto.
  • PostgreSQL.

And to build the single-page front-end we are going for:

  • Webpack.
  • Sass for the stylesheets.
  • React.
  • React router.
  • Redux.
  • ES6/ES7 JavaScript.

We’ll be using some more Elixir dependencies and npm packages, but
I will talk about them as soon as we use them.
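Since Webpack replaces Brunch in this setup, the front-end build boils down to a config file wiring the ES6 and Sass sources through their loaders. Here is a hypothetical minimal webpack.config.js sketch; the paths and loader names are my own assumptions for illustration, and the actual setup is covered in part 2:

```javascript
// Hypothetical minimal webpack.config.js for the front-end build;
// entry/output paths are assumptions, not the tutorial's real config.
module.exports = {
  entry: "./web/static/js/index.js",       // front-end entry point
  output: {
    path: __dirname + "/priv/static/js",   // where Phoenix serves assets from
    filename: "bundle.js",
  },
  module: {
    loaders: [
      // Transpile ES6/ES7 with Babel.
      { test: /\.js$/, exclude: /node_modules/, loader: "babel" },
      // Compile Sass stylesheets.
      { test: /\.scss$/, loader: "style!css!sass" },
    ],
  },
};
```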

Why this stack?

Elixir is a very fast and powerful functional language built on Erlang, with a friendly
syntax very similar to Ruby’s. It’s very robust and specializes in concurrency, so it can
automatically manage thousands of concurrent processes thanks to the Erlang VM.
I’m an Elixir newbie, so I still have a lot to learn, but I can say that from what I’ve
tested so far it is really impressive.

We are going to use Phoenix, which is Elixir’s most popular web
framework right now. It not only adopts some of the conventions and standards that Rails
brought to web development, but also offers many other cool features, like the
way it manages static assets, mentioned before, and, most important to me,
real-time functionality out of the box through websockets, easy as pie and with no
need for any external dependency (and trust me, it works like a charm).

On the other hand, we are using React, react-router and Redux because
I just love this combination for creating single-page applications and managing their
state. Instead of using CoffeeScript as I always have, this new year I want to start using ES6 and
ES7, so it’s the perfect occasion to do so and get used to them.
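To make the Redux side concrete, here is a minimal sketch of a reducer that could manage the boards state, along with a tiny stand-in for Redux’s createStore so the example is self-contained. The state shape and action names are assumptions for illustration, not the tutorial’s actual code:

```javascript
// Hypothetical reducer for the boards state; action names are illustrative.
const initialState = { ownedBoards: [], invitedBoards: [] };

function boardsReducer(state = initialState, action) {
  switch (action.type) {
    case "ADD_OWNED_BOARD":
      // Reducers never mutate: always return a new state object.
      return { ...state, ownedBoards: [...state.ownedBoards, action.board] };
    case "ADD_INVITED_BOARD":
      return { ...state, invitedBoards: [...state.invitedBoards, action.board] };
    default:
      return state;
  }
}

// A tiny stand-in for Redux's createStore, just enough to exercise the reducer.
function createStore(reducer) {
  let state = reducer(undefined, { type: "@@INIT" });
  return {
    getState: () => state,
    dispatch: (action) => { state = reducer(state, action); },
  };
}

const store = createStore(boardsReducer);
store.dispatch({ type: "ADD_OWNED_BOARD", board: { name: "Phoenix tutorial" } });
```

The real application would use Redux’s own createStore and connect components via react-redux, but the reducer contract is the same: a pure function from (state, action) to a new state.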

The final result

The application will consist of four different screens.
The first two are the sign up and sign in screens.

The main screen will consist of the list of boards the user owns and the list of
boards other users have added them to as a member:

And finally there’s the board screen, where all users will be able to see who is connected
and move lists and cards around.

So that’s enough talk for now. Let’s leave it here so I can start preparing the second
part, in which we will see how to create a new Phoenix project, what changes we
need to make in order to use Webpack instead of Brunch, and how to set up the
front-end foundations.

Happy coding!

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/fT7nf8OMyKg/trello-clone-with-phoenix-and-react-pt-1  


RetroArch 1.3 released

RetroArch 1.3 was just released for iOS, OSX, Windows, Linux, Android, Wii, Gamecube, PS3, PSP, PlayStation Vita and 3DS.

You can get them from this page:


Once again the changelist is huge, but we will run down some of the more important things worth mentioning:


Reicast (Dreamcast core)


We have ported Reicast over to the libretro API. This is a Sega Dreamcast emulator.

Supported platforms:

Right now, it runs on OSX, Windows, and Linux (64bit Intel only for now). Over the coming days/weeks we will be porting it to Android and iOS as well, and making it work for 32bit Intel too, so keep watching this space.

There were some improvements made over regular Reicast. Render To Texture features are enabled (they are disabled by default in Reicast), which means that certain effects, like the heat room in Resident Evil: Code Veronica and the pause screen in Crazy Taxi, render correctly. A bug that led to a bunch of sprite tile glitches in Capcom Vs SNK 2 got fixed. We added a workaround so that Marvel Vs Capcom 2 no longer crashes (by detecting the game and switching it on the fly to rec_cpp). We made the x64 dynarec work on OSX, which wasn’t working before. We also implemented a workaround for Soul Calibur (whitelisted for OSX/Linux only for now) that should prevent some of the Z-fighting.


1 – Reicast NEEDS BIOS files. You can run it without BIOS files, but the success rate is so low that we really must stress that you always use them. The BIOS files go inside /dc. If you have not set up a system directory, it will look for a folder called ‘dc’ inside the same directory you loaded the ISO from.

2 – Make sure that ‘Shared Context Enable’ is enabled in RetroArch. To verify this, go to Settings -> Core -> HW Shared Context Enable and make sure it says ‘ON’. If you don’t do this, you might find that there are severe graphical glitches or that nothing even shows up at all.

3 – It should be stressed that the 64bit dynarec for x86_64 is a lot slower than Reicast’s 32bit dynarec. Right now, the Reicast libretro core on PC is only available in 64bit form. When the 32bit version comes out, you might want to try it on a 32bit version of RetroArch too; it might give you a big speedup over the current 64bit version.

I fully intend on doing more work on Reicast once we have these releases out of the way.


You will find installation instructions for the platforms here

MacOSX PowerPC (10.5) (NEW)

Starting with version 1.3, RetroArch is now available on PowerPC Macs running OSX.

You need at least MacOSX 10.5 (Leopard) to be able to run the PowerPC version of RetroArch OSX.

I have included the cores which are known to work so far with the RetroArch bundle itself since we don’t yet have OSX PowerPC cores on our buildbot.

PlayStation Vita/PlayStation TV version (NEW)

There is also a RetroArch version available now for PlayStation Vita and PlayStation TV. To use this, you need to use the (still quite impractical for daily usage) Rejuvenate jailbreak. No better jailbreaks are available as of this time, sorry. We will continue fleshing out this port for when a more mainstream jailbreak comes around.

Nintendo 3DS version (NEW)

The Nintendo 3DS version is in a satisfactory enough state to be released now, and it has already received quite some attention on the Internet. For instance, some of our cores (like the DOSBox libretro core) have been covered in the press, and others have been impressed by PCSX ReARMed being able to run on 3DS. Overall, the 3DS version has made quite the splash.

You will get the best experience using a New Nintendo 3DS since it has a much faster CPU than the regular 3DS. All of the cores available for 3DS will benefit from this bump in specs.

Lakka – the RetroArch turnkey solution for HTPCs/ARM devboards (NEW)


Remember all that talk about RetroBox last year? RetroBox has now turned into Lakka.

With Lakka, you can turn nearly any ARM or x86 hardware into a fully functioning retro videogame console capable of running countless games, with a very nice, user-friendly console user interface. We strive to make it as plug-and-play as possible, so that you are never reminded that this is actually a PC running Linux.

Startup time is very quick (less than 5 seconds is a conservative figure), and in most cases we use DRM/KMS graphics drivers to ensure the best possible latency given the hardware. The entire user interface is gamepad controlled; you don’t have to bring a keyboard and mouse to this thing.

Please check out our sister project’s website here: lakka.tv. The amount of ARM hardware and HTPCs that Lakka can run on is quite overwhelming.

This is our antidote and answer to the kind of ripoff RetroArch Android boxes that have begun to pop up, like the Retro Freak and the Retron5. The only thing we still lack compared to those, obviously, is reading from the original cartridges (insofar as that is important), but we encourage developers to contribute to the project so we can build up support for that too. In the end, something you can get free of charge and bring your own hardware to is in your own best interest versus these kinds of retro hustles.

For the more technically minded, I can’t stress this enough: you are getting an inferior experience emulating on some underpowered Android SoC, the likes of which sit inside these el-cheapo devices that are nevertheless sold at huge markups. You would very much want to bring your own hardware and run Lakka on it instead, both for the best experience and so that you aren’t held hostage by the hardware itself, with forced firmware updates crippling what you can do with it. In most cases we use DRM/KMS drivers with Lakka, so you don’t get the overhead of an X11 server to begin with. What little we lack in terms of spit and polish we hope to fix with some collaborative effort.

MacOS X (10.6/10.7/10.8+)

RetroArch is available for both 32bit and 64bit Macs running OSX.


iOS

RetroArch is available on iOS, both on iPhones and iPads. You can use the Cydia version if you are on a jailbroken device; otherwise, you will have to sideload it onto your non-jailbroken device yourself.


Android

The user interface got a total overhaul (see ‘Revamped user interface’). Numerous bugs were fixed.

I removed the camera and location API permissions. These were experimental augmented-reality features which, in hindsight, were not worth having to release several different APKs for, and they were preventing us from being able to appear on Android TV. They were only ever used in one test core, which wasn’t even available through the buildbot anyway, so they were just a big inconvenience in the long run.

I know the v1.2 release did not go over well with Android users and there was a lot of criticism. I hope that with v1.3 we are able to win some of these users back. Needless to say it has been hard making RetroArch less CLI-focused but I think it’s finally starting to bear fruit.

You can get the Android version either from the Google Play Store or (better yet) through F-Droid. F-Droid is a really convenient way to run RetroArch: as soon as we push an update to the code, F-Droid will quickly inform you that an update is available and let you upgrade to the latest version. This makes it possible to update RetroArch daily instead of waiting a while for a stable version to come out.

PlayStation3 (PS3)

The PS3 version features some big changes. MaterialUI and XMB are both available now as menu drivers. The XMB menu driver is now enabled by default. Libretro database support is in, and things should generally work fine.

I released only the DEX version. PS3 sceners can take the DEX release and create a CEX version of it.

Wii version

Wii and Gamecube ports are the same as ever. More work is needed on the USB HID part of RetroArch Wii before we can reliably use the DualShock3/4 and other USB pads.

PSP version

The PSP version has been updated. There are no big changes to mention for this port.

RetroArch Improvements

Improved playlist support

Both PlayStation1 and PSP games can now be scanned. Once the games are scanned and identified inside the database, they are added to a system-specific playlist. This makes it easy and convenient for you to start these games by just going to the playlist and selecting them instead of having to go through the filesystem and manually select the file again.

Revamped user interface 

There was a ton of criticism of version 1.2’s user interface on mobile devices like Android, so we have decided to give it a makeover. It now resembles a Material Design user interface, and some of the annoying display bugs on Android were fixed. There are now also touchable tabs at the bottom which can quickly take you to the playlists and/or settings screens.

XMB has also received a makeover. A bunch of actions now have their own tabs inside the horizontal menu. For instance, you can now scan for content by going to the ‘Add’ tab and selecting a directory, instead of having to navigate through a bunch of menus. The history list is also now on a separate tab, as are the settings themselves.

RetroArch/libretro’s reach

We don’t like to brag a lot, and we don’t go in for self-posturing. Nevertheless, RetroArch and libretro are definitely starting to become omnipresent. Plenty of people have used them in some form or another without even being aware of the name or the project’s existence. For instance, there have been plenty of projects unrelated to us or the team (like NewRetroArcade) which were actually just libretro frontends implementing the libretro API, and the popular Raspberry Pi distribution RetroPie uses RetroArch for a lot of the videogame emulation it provides.

Not everything has gone according to plan. We had to do a very substantial rewrite this past year, which was definitely daunting, and some of you noticed that things didn’t progress as fast as we’d like. But we are finally out of development hell and ready to fire on all cylinders again. There will never be another half-year delay in development due to endless rewrites.

What’s next for RetroArch

RetroArch 1.4 (the next version) will bring RetroArch to two new platforms.

New platforms – tvOS/Windows Phone

There will be a tvOS port, and there will be a Windows Runtime (WinRT) port so that RetroArch can run on Windows Phones among other things. Those are the two big porting endeavors I will be busy with. I hope to get something substantial done by end of February/March, but don’t hold me to that.

There will also be a new user interface added to the mix, catering more to people used to point-and-click interfaces. We showed some screenshots before of the ‘Zarch’ interface we have been working on, and there is also a ‘Hexa’ user interface which might make it out of the prototype stage as well.

There will also be a lot of new cores. In fact, cores are being updated and added daily through the core updater, so definitely keep watching that space. You don’t have to wait for version 1.4 to be able to use these new cores.

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/uDIw_xtI_Vs/  

