A recent report from Perception Point claims that a vulnerability in the Linux kernel could affect millions of devices. Here’s what you need to know.
Argosy began in 1882 as a magazine for children and ceased publication ninety-six years later as soft-core porn for men, but for ten years in between it was the home of a true-crime column by Erle Stanley Gardner, the man who gave the world Perry Mason. In eighty-two novels, six films, and nearly three hundred television episodes, Mason, a criminal-defense lawyer, took on seemingly guilty clients and proved their innocence. In the magazine, Gardner, who had practiced law before turning to writing, attempted to do something similar—except that there his “clients” were real people, already convicted and behind bars. All of them met the same criteria: they were impoverished, they insisted that they were blameless, they were serving life sentences for serious crimes, and they had exhausted their legal options. Gardner called his column “The Court of Last Resort.”
To help investigate his cases, Gardner assembled a committee of crime experts, including a private detective, a handwriting analyst, a former prison warden, and a homicide specialist with degrees in both medicine and law. They examined dozens of cases between September of 1948 and October of 1958, ranging from an African-American sentenced to die for killing a Virginia police officer after a car chase—even though he didn’t know how to drive—to a nine-fingered convict serving time for the strangling death of a victim whose neck bore ten finger marks.
The man who didn’t know how to drive was exonerated, at least partly thanks to coverage in “The Court of Last Resort,” as were many others. Meanwhile, the never terribly successful Argosy also got a reprieve. “No one in the publishing field had ever considered the remote possibility that the general reading public could ever be so interested in justice,” Gardner wrote in 1951. “Argosy’s circulation began to skyrocket.” Six years later, the column was picked up by NBC and turned into a twenty-six-episode TV series.
Although it subsequently faded from memory, “The Court of Last Resort” stands as the progenitor of one of today’s most popular true-crime subgenres, in which reporters, dissatisfied with the outcome of a criminal case, conduct their own extrajudicial investigations. Until recently, the standout representatives of this form were “The Thin Blue Line,” a 1988 Errol Morris documentary about Randall Dale Adams, who was sentenced to death for the 1976 murder of a police officer; “Paradise Lost,” a series of documentaries by Joe Berlinger and Bruce Sinofsky about three teen-agers found guilty of murdering three second-grade boys in West Memphis in 1993; and “The Staircase,” a television miniseries by Jean-Xavier de Lestrade about the novelist Michael Peterson, found guilty of murdering his wife in 2001. Peterson has been granted a new trial. Randall Dale Adams was exonerated a year after “The Thin Blue Line” was released. Shortly before the final “Paradise Lost” documentary was completed, in 2011, all three of its subjects were freed from prison on the basis of DNA evidence.
In the past fifteen months, this canon has grown considerably in both content and prestige. First came “Serial,” co-created by Sarah Koenig and Julie Snyder, which revisited the case of Adnan Syed, convicted for the 1999 murder of his high-school classmate and former girlfriend, eighteen-year-old Hae Min Lee. That was followed by Andrew Jarecki’s “The Jinx,” a six-part HBO documentary that, uncharacteristically for the genre, sought to implicate rather than exonerate its subject, Robert Durst. A New York real-estate heir, Durst was acquitted in one murder case, is currently awaiting trial in another, and has long been suspected in the 1982 disappearance of his wife, Kathleen Durst.
The latest addition to this canon is Laura Ricciardi and Moira Demos’s “Making a Murderer,” a ten-episode Netflix documentary that examines the 2007 conviction of a Wisconsin man named Steven Avery. Like the prisoners featured in “The Court of Last Resort,” Avery is a poor man serving time for a violent crime that he insists he didn’t commit. The questions his story raises, however, are not just about his own guilt and innocence. Nearly seventy years have passed since Erle Stanley Gardner first tried a criminal case before the jury of the general public. Yet we still have not thought seriously about what it means when a private investigative project—bound by no rules of procedure, answerable to nothing but ratings, shaped only by the ethics and aptitude of its makers—comes to serve as our court of last resort.
If you know anything about “Making a Murderer,” you know that Steven Avery has a particularly troubling and convoluted relationship with the criminal-justice system. In July of 1985, Avery was picked up by the Manitowoc County Sheriff’s Department after a woman named Penny Beerntsen was brutally attacked while out for a run in a Wisconsin state park. Beerntsen, who had been conscious throughout most of the attack, deliberately sought to memorize her assailant’s features, and subsequently picked Avery out of both a photo array and a live lineup. At trial six months later, Avery was found guilty and sentenced to thirty-two years in prison. He served eighteen of those before being exonerated by DNA testing, a technology not available at the time of the trial. That DNA test also identified Beerntsen’s actual assailant: a man named Gregory Allen, who was, by then, imprisoned for another assault.
This was bad news for the Manitowoc County Sheriff’s Department. As the public learned soon after the exoneration, local police had gone to the sheriff’s department within days after the attack to report that Allen may have been responsible; the department, convinced that it had the right man, declined to investigate. Ten years later, while serving time, Allen confessed to the assault. Again, the sheriff’s department was alerted and, again, no one acted; Avery remained in prison for another eight years. In light of this information, he filed a lawsuit against the county for thirty-six million dollars.
In 2005, while the defendants in that civil suit were being deposed, Avery was arrested again—this time for the murder of a twenty-five-year-old photographer named Teresa Halbach. Four months later, his sixteen-year-old nephew, Brendan Dassey, was arrested as well, after he confessed to helping Avery rape and murder Halbach and burn her body. In 2007, after separate trials, both were found guilty and sentenced to life in prison.
Ricciardi and Demos examine those convictions in “Making a Murderer,” and the information they present has led viewers to respond with near-universal outrage about the verdicts. Because of the pending civil litigation, the Manitowoc County Sheriff’s Department was supposed to have nothing to do with the Halbach investigation beyond lending any necessary equipment to the jurisdiction in charge. Yet members of the department were involved in the case at every critical juncture. One of them was allegedly left alone with Halbach’s vehicle for several hours after it was located and before Avery’s blood was discovered inside. Another found the key to Halbach’s S.U.V. in Avery’s home—in plain view, even though the property had previously been searched by other investigators six times. A third found a bullet fragment in Avery’s garage, again after the premises had been repeatedly searched. The analyst who identified Halbach’s DNA on that bullet had been instructed by a county detective to try to come up with evidence that Halbach had been in Avery’s house or garage. Perhaps most damning, the defense discovered that a vial of Avery’s blood, on file from the 1985 case, had been tampered with; the outer and inner seal on the box in which it was kept had been broken, and the vial itself had a puncture in the top, as from a hypodermic needle.
That is sobering stuff, but the most egregious misconduct shown in the documentary concerns not Avery but his nephew, Brendan Dassey—a stone-quiet, profoundly naïve, learning-disabled teen-ager with no prior criminal record, who is interrogated four times without his lawyer present. In the course of those interrogations, the boy, who earlier claimed to have no knowledge of Halbach, gradually describes an increasingly lurid torture scene that culminates in her murder by gunshot. The gun comes up only after investigators prod Dassey to describe what happened to Halbach’s head. Dassey first proposes that Avery cut off her hair, and then adds that his uncle punched her. Finally, one of the investigators, growing impatient, says, “I’m just going to come out and ask you: Who shot her in the head?” After the confession is signed, the prosecutor calls a press conference and turns Dassey’s story into the definitive account of what happened—a travesty of justice for Dassey and Avery, given the questionable nature of the interrogation, and a terrible cruelty to the Halbach family.
Dassey repeatedly recanted his confession, including in a letter to the judge and on the witness stand. But it was too late. “Put the tape of his confession in the VCR or DVD player and play it, there’s our case right there,” Halbach’s brother told the press. He was right, but he shouldn’t have been. Most people find it impossible to imagine why anyone would confess to a crime he didn’t commit, but, watching Dassey’s interrogation, it is easy to see how a team of motivated investigators could alternately badger, cajole, and threaten a vulnerable suspect into saying what they wanted to hear. When Dassey’s mother asked him how he came up with so many details if he was innocent, he said, “I guessed.” “You don’t guess with something like this, Brendan,” she replied. “Well,” he said, “that’s what I do with my homework, too.”
By chance, I have known many of the details of the Avery case since long before the release of “Making a Murderer,” because in 2007 I spoke at length with Penny Beerntsen. At the time, I was working on a book about being wrong—about how we as a culture think about error, and how we as individuals experience it—and Beerntsen, in identifying Avery as her assailant, had been wrong in an unusually tragic and consequential way.
Beerntsen had also been unusual among crime victims involved in wrongful convictions in that she had instantly accepted the DNA evidence—and, with it, her mistake. “It ain’t all her fault, you know,” Avery had said at the time of his release. “Honest mistake, you know.” But Beerntsen had felt horrifically guilty. “This might sound unbelievable,” she told me when we first talked, “but I really feel this way: the day I learned I had identified the wrong person was much worse than the day I was assaulted. My first thought was, I don’t deserve to live.” She wrote Avery a letter, apologizing to him and his family, and, concerned by the missteps and misconduct that led to his incarceration, became involved with the Innocence Project, which seeks to free the wrongfully convicted and to reform legal practices to help prevent miscarriages of justice.
Given her history, Beerntsen does not need any convincing that a criminal prosecution can go catastrophically awry. But when Ricciardi and Demos approached her about participating in “Making a Murderer” she declined, chiefly because, while her own experience with the criminal-justice system had led her to be wary of certitude, the filmmakers struck her as having already made up their minds. “It was very clear from the outset that they believed Steve was innocent,” she told me. “I didn’t feel they were journalists seeking the truth. I felt like they had a foregone conclusion and were looking for a forum in which to express it.”
Ricciardi and Demos have dismissed that idea, claiming that they simply set out to investigate Avery’s case and didn’t have a position on his guilt or innocence. Yet “Making a Murderer” never provokes the type of intellectual and psychological oscillation so characteristic of Koenig and Snyder’s “Serial.” Instead, the documentary consistently leads its viewers to the conclusion that Avery was framed by the Manitowoc County Sheriff’s Department, and it contains striking elisions that bolster that theory. The filmmakers minimize or leave out many aspects of Avery’s less than savory past, including multiple alleged incidents of physical and sexual violence. They also omit important evidence against him, including the fact that Brendan Dassey confessed to helping Avery move Halbach’s S.U.V. into his junk yard, where Avery lifted the hood and removed the battery cable. Investigators subsequently found DNA from Avery’s perspiration on the hood latch—evidence that would be nearly impossible to plant.
Perhaps because they are dodging inconvenient facts, Ricciardi and Demos are never able to present a coherent account of Halbach’s death, let alone multiple competing ones. Although “Making a Murderer” is structured chronologically, it fails to provide a clear time line of events, and it never answers such basic questions as when, where, and how Halbach died. Potentially critical issues are raised and summarily dropped; we hear about suspicious calls to and messages on Halbach’s cell phone, but these are never explored or even raised again. In the end, despite ten hours of running time, the story at the heart of “Making a Murderer” remains a muddle. Granted, real life is often a muddle, too, especially where crime is involved—but good reporters delineate the facts rather than contribute to the confusion.
Despite all this, “Making a Murderer” has left many viewers entirely convinced that Avery was framed. After the documentary aired, everyone from high-school students to celebrities jumped on the “Free Avery and Dassey” bandwagon. In the weeks since, people involved in the conviction have been subjected to vicious and in some cases threatening messages from Netflix-watching strangers. (So have people who were not involved, including the Manitowoc Police Department, a separate entity from the county sheriff’s department.)
For those people, and for others close to the original case, “Making a Murderer” seems less like investigative journalism than like highbrow vigilante justice. “My initial reaction was that I shouldn’t be upset with the documentarians, because they can’t help that the public reacted the way that it did,” Penny Beerntsen said. “But the more I thought about it, the more I thought, Well, yeah, they do bear responsibility, because of the way they put together the footage. To me, the fact that the response was almost universally ‘Oh, my God, these two men are innocent’ speaks to the bias of the piece. A jury doesn’t deliberate twenty-some hours over three or four days if the evidence wasn’t more complex.”
“Making a Murderer” raises serious and credible allegations of police and prosecutorial misconduct in the trials of Steven Avery and Brendan Dassey. It also implies that that misconduct was malicious. That could be true; vindictive prosecutions have happened in our justice system before and they will happen again. But the vast majority of misconduct by law enforcement is motivated not by spite but by the belief that the end justifies the means—that it is fine to play fast and loose with the facts if doing so will put a dangerous criminal behind bars.
That same reasoning, with the opposite aims, seems to govern “Making a Murderer.” But while people nearly always think that they are on the side of the angels, what finally matters is that they act that way. The point of being scrupulous about your means is to help insure accurate ends, whether you are trying to convict a man or exonerate him. Ricciardi and Demos instead stack the deck to support their case for Avery, and, as a result, wind up mirroring the entity that they are trying to discredit.
Partway through “Making a Murderer,” we hear a “Dateline NBC” producer discuss the death of Teresa Halbach in disturbingly chipper tones. “This is the perfect ‘Dateline’ story,” she says. “It’s a story with a twist, it grabs people’s attention. . . . Right now murder is hot, that’s what everyone wants, that’s what the competition wants, and we’re trying to beat out the other networks to get that perfect murder story.”
That clip, presented without context, is meant to make the “Dateline” producer look shallow and exploitative, and it does. But it is also meant to inoculate Ricciardi and Demos against the charge that they, too, are pursuing a hot murder case with a dramatic twist in order to grab people’s attention. The implication is that, unlike traditional true-crime shows—“Dateline,” “48 Hours,” “America’s Most Wanted,” “Nancy Grace”—their work is too intellectually serious to be thoughtless, too morally worthy to be cruel.
Yet the most obvious thing to say about true-crime documentaries is something that, surprisingly often, goes unsaid: they turn people’s private tragedies into public entertainment. If you have lost someone to violent crime, you know that, other than the loss itself, few things are as painful and galling as the daily media coverage, and the license it gives to strangers to weigh in on what happened. That experience is difficult enough when the coverage is local, and unimaginable when a major media production turns your story into a national pastime. “Sorry, I won’t be answering any questions because . . . TO ME ITS REAL LIFE,” the younger brother of Hae Min Lee, the murder victim in “Serial,” wrote on Reddit in 2014. “To you listeners, its another murder mystery, crime drama, another episode of CSI. You weren’t there to see your mom crying every night . . . and going to court almost every day for a year seeing your mom weeping, crying, and fainting. You don’t know what we went through.”
Like the Lee family, the Halbachs and Penny Beerntsen declined to participate in a journalistic investigation into their personal tragedies. But no one in such a situation has any real way to opt out. “Making a Murderer” takes Halbach’s death as its subject (her life is represented by a few photos and video clips, which do not rise above the standard mise en scène of murder shows), and footage of her family appears in almost every episode. Beerntsen, for her part, was dismayed to discover that the filmmakers had obtained a photograph of her battered face from the 1985 attack and used it without her knowledge. “I don’t mind looking at it, but my children should not have to relive that,” she said. “And everything we’re dealing with, the Halbachs are dealing with a thousandfold.”
This is not to suggest that reporting on violence is always morally abhorrent. Crimes themselves vary widely, as does crime coverage, and it is reasonable to hold that at some point the demands of private grief are outweighed by the public good. But neither “Serial” (which is otherwise notable for its thoroughness) nor “Making a Murderer” ever addresses the question of what rights and considerations should be extended to victims of violent crime, and under what circumstances those might justifiably be suspended. Instead, both creators and viewers tacitly dismiss the pain caused by such shows as collateral damage, unfortunate but unavoidable. Here, too, the end is taken to justify the means; someone else’s anguish comes to seem like a trifling price to pay for the greater cause a documentary claims to serve.
But what, exactly, is that cause in “Making a Murderer”? As of January 12th, more than four hundred thousand people had signed a petition to President Obama demanding that “Steven Avery should be exonerated at once by pardon.” That outrage could scarcely have been more misdirected. For one thing, it was addressed to the wrong person: Avery was convicted of state crimes, not federal ones, and the President does not have the power to pardon him. For another, it was the wrong demand. “Making a Murderer” may have presented a compelling case that Avery (and, more convincingly, Dassey) deserved a new trial, but it did not get anywhere close to establishing that either one should be exonerated.
The petition points to another weakness of “Making a Murderer”: it is far more concerned with vindicating wronged individuals than with fixing the system that wronged them. The series presents Avery’s case as a one-off—a preposterous crusade by a grudge-bearing county sheriff’s department to discredit and imprison a nemesis. (Hence the ad-hominem attacks the show has inspired.) But you don’t need to have filed a thirty-six-million-dollar suit against law enforcement to be detained, denied basic rights, and have evidence planted on your person or property. Among other things, simply being black can suffice. While Avery’s story is dramatic, every component of it is sadly common. Seventy-two per cent of wrongful convictions involve a mistaken eyewitness. Twenty-seven per cent involve false confessions. Nearly half involve scientific fraud or junk science. More than a third involve suppression of evidence by police.
Those statistics reflect systemic problems. Eyewitness testimony is dangerously persuasive to juries, yet it remains admissible in courts almost without caveat. Some interrogation methods are more likely than others to produce false confessions, yet there are no national standards; fewer than half of states require interrogations to be videotaped, and all of them allow interrogators to lie to suspects. With the exception of DNA evidence (which emerged from biology, not criminology), forensic tests are laughably unscientific; no independent entity exists to establish that such tests are reliable before their results are admissible as evidence.
It is largely because of these systemic weaknesses in our judicial system that we find ourselves with a court of last resort. While that court cannot directly operate the levers of the law, it has drawn attention to cases that need review, and innocent people have been freed as a result. Yet in the decades since Erle Stanley Gardner launched his column, none of the forces that put those people in prison in the first place have changed for the better. Nor have we evolved a set of standards around extrajudicial investigations of criminal cases. However broken the rules that govern our real courts, the court of last resort is bound by no rules at all.
That does not automatically compromise independent investigations into crime; some remarkable and important work has been done in the tradition of the court of last resort. But it does enable individual journalists to proceed as they choose, and the choices made by Ricciardi and Demos fundamentally undermine “Making a Murderer.” Defense attorneys routinely mount biased arguments on behalf of their clients; indeed, it is their job to make the strongest one-sided case they can. But that mandate is predicated on the existence of a prosecution. We make moral allowances for the behavior of lawyers based on the knowledge that the jury will also hear a strong contrary position. No such structural protection exists in our extrajudicial courts of last resort, and Ricciardi and Demos chose not to impose their own.
Toward the end of the series, Dean Strang, Steven Avery’s defense lawyer, notes that most of the problems in the criminal-justice system stem from “unwarranted certitude”—what he calls “a tragic lack of humility of everyone who participates.” Ultimately, “Making a Murderer” shares that flaw; it does not challenge our yearning for certainty or do the difficult work of helping to foster humility. Instead, it swaps one absolute for another—and, in doing so, comes to resemble the system it seeks to correct. It is easy to express outrage, comforting to have closure, and satisfying to know all the answers. But, as defense lawyers remind people every day, it is reasonable to doubt. ♦
Well, today Apple launched a huge new update to GarageBand for iOS that has a lot more appeal to novices (like myself) and will undoubtedly be much more popular with electronic musicians, thanks to the addition of Audio Units and a new feature for crafting beats. The real star of the show here is a feature called Live Loops, which gives you a brand new way of producing tracks that’s…
Do you remember connecting to the Internet in 1994 or 1995?
Those of you who do probably remember Trumpet Winsock. That little blue-and-gold icon turned your modem from a BBS-ringing machine into an Internet-connecting machine.
What you probably didn’t know is that the author of Trumpet Winsock — Peter Tattam from Tasmania, Australia — didn’t see much money for his efforts. Millions of copies were distributed by ISPs and on magazine covers, but only a fraction of those copies were ever paid for.
Peter’s little program enabled millions of people to get online for the first time ever, right when the web was in its infancy. It made ISPs possible for the vast majority of users running Windows. In short, Peter is an unsung hero of the web revolution.
Well, he missed out on the fame and riches. But you can do your part to reward Peter for his efforts. It’s simple.
Choose how much you want to pay for that copy you got off a CD, off a friend, off your ISP. Some numbers to consider: Trumpet Winsock cost USD $25 back in 1993, or about $38 in today’s dollars.
Go on. Reward a guy who deserves our thanks for helping to open up the internet to the masses.
As of this afternoon, more than 200 people have donated.
Do you work for or own an internet firm that is not an ISP? Consider encouraging your company to make a corporate donation. These will be written up on the Donors page.
Update 22 April 2011
PayPal became anxious about the definition of “donation” and briefly suspended Peter’s account. To keep them happy, the button has been changed to a “pay now” button, and your payments will be treated as “post payments” for bookkeeping purposes.
I’ve noticed a trend lately. Rather than replacing a router when it literally stops working, I’ve needed to act earlier—swapping in new gear because an old router could no longer keep up with increasing Internet speeds available in the area. (Note: I am duly thankful for this problem.) As the latest example, a whole bunch of Netgear ProSafe 318G routers failed me for the last time as small businesses have upgraded from 1.5-9Mbps traditional T1 connections to 50Mbps coax (cable).
Yes, coax—not fiber. Even coax has proved too much for the old ProSafe series. These devices didn’t just fail to keep up, they fell flat on their faces. Frequently, the old routers dropped speed test results from 9Mbps with the old connection to 3Mbps or less with the 50Mbps connection. Obviously, that doesn’t fly.
These days, the answer increasingly seems to be wireless routers. These tend to be long on slick-looking plastic and brightly colored Web interfaces but short on technical features and reliability. What’s a mercenary sysadmin to do? Well, at its core, anything with two physical network interfaces can be a router. And today, there are lots and lots of relatively fast, inexpensive, and (super important!) fully solid-state generic boxes out there.
So, the time had finally come. Faced with aging hardware and new consumer offerings that didn’t meet my needs, I decided to build my own router. And if today’s morphing connectivity landscape leaves you in a similar position, it turns out that both the building and the build are quite fast.
Why do it the hard way
A lot of you are probably muttering, “right, pfSense, sure.” Some of you might even be thinking about Smoothwall or Untangle NG. I played with most of the firewall distros out there, but I decided to go more basic, more old school: a plain, CLI-only install of Ubuntu Server and a few iptables rules.
Admittedly, this likely isn’t the most practical approach for every reader, but it made sense for me. I have quite a bit of experience finessing iptables and the Linux kernel itself for high throughput at Internet scale, and the fewer shiny features and graphics and clicky things that are put between me and the firewall table, the less fluff I have to get out of the way and the fewer new not-applicable-in-the-rest-of-my-work things I have to learn. Any rule I already know how to create in iptables to manage access to my servers, I also know how to apply to my firewall—if my firewall’s running the same distro as my servers are.
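To make that concrete, here is a minimal sketch of the kind of iptables ruleset such a box needs. This is not the author's actual configuration; the interface names (eth0 for WAN, eth1 for LAN) and the SSH rule are illustrative assumptions.

```shell
#!/bin/sh
# Minimal NAT-router ruleset sketch. WAN/LAN interface names are illustrative.
WAN=eth0
LAN=eth1

# Enable IPv4 forwarding (persist it via net.ipv4.ip_forward=1 in /etc/sysctl.conf)
sysctl -w net.ipv4.ip_forward=1

# Start from a clean slate
iptables -F
iptables -t nat -F

# Default policies: drop unsolicited inbound and forwarded traffic, allow outbound
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT

# Allow loopback and replies to connections the router itself initiated
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Let the LAN out to the Internet, and let replies back in
iptables -A FORWARD -i $LAN -o $WAN -j ACCEPT
iptables -A FORWARD -i $WAN -o $LAN -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Hide the LAN behind the WAN address
iptables -t nat -A POSTROUTING -o $WAN -j MASQUERADE

# Manage the router over SSH, but only from the LAN side
iptables -A INPUT -i $LAN -p tcp --dport 22 -j ACCEPT
```

The appeal of this approach is exactly what the author describes: any rule you already know for a server applies unchanged here, because it is the same iptables on the same distro.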
Also, I work pretty heavily with OpenVPN, and I want to be able to continue setting up both its servers and clients in the way I already rely on. Some firewall distros have OpenVPN support built in and some do not, but even the ones with built-ins tend to expect things to run differently than I do. Again, the more the system stays out of my way, the happier I’ll be.
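For reference, the kind of hand-rolled OpenVPN setup described above boils down to a short server config. This is a generic sketch, not the author's configuration; the key paths and the 10.8.0.0/24 tunnel subnet are illustrative assumptions.

```
# /etc/openvpn/server.conf -- minimal sketch; paths and subnet are illustrative
port 1194
proto udp
dev tun
ca   /etc/openvpn/keys/ca.crt
cert /etc/openvpn/keys/server.crt
key  /etc/openvpn/keys/server.key
dh   /etc/openvpn/keys/dh2048.pem
server 10.8.0.0 255.255.255.0
push "redirect-gateway def1 bypass-dhcp"
keepalive 10 120
persist-key
persist-tun
```

Running OpenVPN this way, straight from its own config file, is precisely what prepackaged firewall distros tend to get in the way of.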
As an additional bonus, I know that I can very easily keep everything completely up to date on my new and completely vanilla Ubuntu router. It’s all supported directly by Canonical, and it can (and does) all have automatic updates turned on. Add the occasional cron job to reboot the router (to get new kernels), and I’m golden.
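The update-and-reboot routine the author describes can be set up in a couple of commands. The weekly 4 a.m. schedule below is an illustrative choice, not the author's actual cron entry.

```shell
# Turn on automatic security updates on Ubuntu
sudo apt-get install -y unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# Reboot weekly to pick up new kernels (add to root's crontab via `sudo crontab -e`)
# m h  dom mon dow  command
# 0 4  *   *   1    /sbin/shutdown -r now
```

Unattended-upgrades handles userland packages on its own; the periodic reboot exists only because a new kernel doesn't take effect until the machine restarts.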
Hardware, hardware, hardware
We’ll go through the how-to in a future piece, but today it’s important to establish why a DIY router-build may be the best option. To do that, you first need to understand today’s general landscape.
In the consumer world, routers mostly have itty-bitty little MIPS CPUs under the hood without a whole lot of RAM (to put it mildly). These routers largely differentiate themselves from one another based on the interface: How shiny is it? How many technical features does it have? Can users figure it out easily?
At the higher end of the SOHO market, you start seeing some smartphone-grade ARM CPUs and a lot more RAM. These routers—like the Netgear Nighthawk series, one of which we’ll be hammering on later—feature multiple cores, higher clock speeds, and a whole lot more RAM. They also feature much higher price tags than the cheaper competition. I picked up a Linksys EA2750 for $89, but the Netgear Nighthawk X6 I got with it was nearly three times more expensive (even on holiday sale!) at $249.
Still, I wanted to go a different route. A lot of interesting and reasonably inexpensive little x86-64 fanless machines have started showing up on the market lately. The trick for building a router is finding one with multiple NICs. You can find a couple of fairly safe bets on Amazon, but they’re older Atom-based processors, and I wanted a newer Celeron. After some good old-fashioned Internet scouring and dithering, finally I took the Alibaba plunge and ordered myself a new Partaker Mini PC from Shenzhen Inctel Technology Company. After $240 for the router itself and another $48 for a 120GB Kingston SSD from Newegg, I’d spent about $40 more on the Homebrew Special than I had on the Nighthawk. Would it be worth it?
A challenger appears
Before we get testing started, let’s take a quick visual look at the competitors.
That Nighthawk is, by comparison to the others, HUGE and imposing (even more so than the picture makes it appear). It’s actually significantly larger than my Homebrew Special, which is a fully functional, general purpose PC you could use as a perfectly competent desktop. It’s like DC Comics asked H.R. Giger to lend a hand designing a wireless router for Batman.
The Homebrew Special itself is kinda adorable. It has one blue and one red LED inside the case, and at night, the light from both spills out of its cooling vents indirectly, giving the network stack a festive party look. If there were any fans to cause a flickering it would drive me insane, but since it’s a steady-state soft glow, I actually like it.
The Homebrew Special—looking a bit blurry, because I wanted to take a low-light shot to try to capture the disco glow.
oontz oontz oontz oontz…
The Linksys and the Buffalo, on the other hand, look like exactly what they are—cheap routers. However, it’s worth noting the styling on the Linksys is a big improvement over the brand’s past. It looks more like something professional and less like a children’s toy. (But enough about the styling—it’s time to put these poor routers through the gauntlet.)
The obvious first challenge is a simple bandwidth test. You put one computer on the LAN side and one computer on the WAN side, and you run a nifty little tool called iperf through the middle. Simple, right?
Well, that would make for a short, boring article. The network itself measures gigabit, the three gigabit routers measure gigabit, and the 100 megabit router measures 100 megabit.
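For reference, the shape of such a raw throughput test can be sketched in a few lines of Python. This is purely illustrative and runs over a loopback socket; a real test runs iperf between two machines on opposite sides of the router, and the chunk and byte counts here are arbitrary.

```python
# Minimal throughput probe in the spirit of iperf: one side streams a fixed
# number of bytes over TCP, the other times the transfer and reports Mbit/s.
# Illustrative only -- it runs over loopback, not through a router.
import socket
import threading
import time

PAYLOAD = b"x" * 65536          # 64 KiB chunks
TOTAL_BYTES = 16 * 1024 * 1024  # 16 MiB per run

def serve(srv):
    """Accept one client and stream TOTAL_BYTES at it."""
    conn, _ = srv.accept()
    sent = 0
    while sent < TOTAL_BYTES:
        conn.sendall(PAYLOAD)
        sent += len(PAYLOAD)
    conn.close()
    srv.close()

def measure():
    """Time the receive side and return (bytes received, Mbit/s)."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # grab a free ephemeral port
    srv.listen(1)
    port = srv.getsockname()[1]
    threading.Thread(target=serve, args=(srv,), daemon=True).start()
    cli = socket.create_connection(("127.0.0.1", port))
    received, start = 0, time.monotonic()
    while chunk := cli.recv(65536):
        received += len(chunk)
    elapsed = time.monotonic() - start
    cli.close()
    return received, received * 8 / elapsed / 1e6

total, mbps = measure()
print(f"{total} bytes transferred at {mbps:.0f} Mbit/s")
```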
In actuality, a test this simple doesn’t even begin to tell the story. The only reason to do it may be to show how pointless it is. Router manufacturers are increasingly aware that people actually test their products, and no manufacturer wants its product to be anywhere but the top of something like SmallNetBuilder’s router chart. In light of that, manufacturers are actively chasing stats these days.
The problem is, stats are just stats. Being able to hit a high number on a pure throughput test is better than nothing, but it’s a far cry from the whole story. I learned that lesson the hard way while working for a T-1 vendor in the early 2000s. Their extremely expensive Adtran modems could handle 50 to 100 people’s normal Internet usage just fine, but a single user running Limewire or some other P2P client would bring the whole thing down in a heartbeat. (The fix back then: put an inexpensive-but-awesome $150 Netopia router in front of that expensive Adtran modem. Problem solved.)
Even for relatively simple routing—no deep packet inspection, no streaming malware scanning or intrusion detection, no shaping—the CPU and the RAM available to the router are both important well above and beyond the ability to saturate the Internet link. Peer-to-peer filesharing is about the most brutal activity a network will see these days (whether it’s bittorrent, one of the Gnutella or eDonkey variants, or a game company’s peer-to-peer download system). I was done playing WoW by the time Blizzard’s P2P distribution system was introduced, but my roommate at the time wasn’t. On its launch day, the new WoW peer download system unhelpfully defaulted to no throttling whatsoever. It cheerfully tried to find and maintain connections with literally thousands of clients simultaneously, and my home network went down like Gilbert Gottfried getting tackled by Terry Tate. My roommate and I had words.
Based on such past experience, I don’t just want to minimally “test” my challengers and call it a day; I want to really make them sweat. To do that, I’m going to hit them with workloads that stress three problem areas: saturating the network link, making and breaking individual TCP/IP connections really quickly, and holding massive numbers of individual TCP/IP connections open at the same time.
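The second and third of those stressors can be sketched in a few lines of Python. This is a hypothetical loopback illustration of the two connection-oriented workloads (churn and concurrency), not the actual test harness; against a real router you would aim it at a host on the far side of the NAT, and the counts here are arbitrary.

```python
# Sketch of two connection-oriented stressors: rapid connect/disconnect
# churn, and holding many sockets open simultaneously. Runs against a
# local listener purely to illustrate the shape of the workload.
import socket
import threading

accepted = []  # hold references so server-side sockets stay open

def listener(srv):
    while True:
        conn, _ = srv.accept()
        accepted.append(conn)

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # grab a free ephemeral port
srv.listen(512)
addr = srv.getsockname()
threading.Thread(target=listener, args=(srv,), daemon=True).start()

def churn(n):
    """Stress 1: make and immediately break n connections (state-table churn)."""
    for _ in range(n):
        socket.create_connection(addr).close()

def hold(n):
    """Stress 2: open n connections and keep them all alive at once."""
    return [socket.create_connection(addr) for _ in range(n)]

churn(200)
held = hold(200)
print(len(held), "connections held open")
```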
By the end of February, Google will deploy its Accelerated Mobile Pages project at full scale by directing mobile search results to AMP HTML pages. Because it integrates all the monetization instruments (advertising, analytics, and now subscription systems), Google’s AMP is likely to rally scores of publishers.
Over 5,000 developers have registered for the GitHub repository of Google’s Accelerated Mobile Pages (AMP) program. Some are just sniffing around; others actually work for large or small organizations and are truly committed to building something.
In a few weeks, Google will open the floodgates connecting AMP to its search engine. Twitter and Pinterest will follow. A request from a mobile phone will call an AMP-coded page (when available) that will load at blazing speed. That’s the plan. For a glimpse of what it will look like, try the demo version from your mobile, or add “/amp” at the end of any Guardian page’s URL. Tested over a poor mobile connection, the result is compelling.
How does Google pull this off? AMP redesigns core components of the Internet’s historic Hypertext Markup Language, now re-christened “amp-html”, and supports it with a massive distributed caching system in which Google hosts pages for a few seconds or hours, in multiple caches spread around the world to be closest to the user. All publishers have to do is provide two versions of their pages: one for direct access from a well-connected desktop, and an AMP version for mobile. It goes like this:
When a page is called from a mobile through search, Twitter, or elsewhere, the user is directed to the super-fast cached page. Et voilà. (More about the whole concept in this previous Monday Note and on the official AMP blog, written in plain English.)
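In markup terms, the publisher’s side of that dual-page setup boils down to a pair of discovery links, per the AMP HTML spec: the canonical page advertises its AMP twin, and the AMP page points back. The example.com URLs below are placeholders.

```html
<!-- On the regular (canonical) page: advertise the AMP version -->
<link rel="amphtml" href="https://example.com/article.amp.html">

<!-- On the AMP page: point back to the canonical version -->
<link rel="canonical" href="https://example.com/article.html">
```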
As of today, Google is reluctant to speculate about the number of publishers that will be on board when the AMP program launches in late February. Expect several hundred in Europe and the United States. All the big players are working on AMP, from the New York Times to Vox Media, and it will spread quickly as the specifications cover more web components. As the number of publishers rises, the system will become more visible in Google search pages thanks to a larger corpus of news elements coded in AMP.
A positive effect of page ranking
The snowball effect will also apply for another simple reason: AMP-HTML-coded items will show up better in Google SERPs (Search Engine Results Pages). While Google product managers adamantly insist on the absolute neutrality of Search, it is a known fact that rendering speed is a key contributor to better rankings. In itself, this factor should act as a powerful stimulus to create AMP pages.
Other incentives center on the monetization side of the program. On the advertising front, most ad servers (not just Google-owned DFP) will be able to serve ads into AMP pages, though some work remains to be done on the formats that will be deemed acceptable there.
Privately, Google people make no mystery of their intention to clean up the advertising mess. They want to get rid of the invasive formats that, by ruining the user experience, contributed to the explosion of ad blockers and threatened a large segment of the digital economy. To that end, the AMP ecosystem is their weapon of choice.
Between Google’s AMP engineering team and the advertising community, interests collide. The former is focused on accelerating the rendering of mobile pages and restoring a decent user experience on mobile; the latter prioritizes value extraction above all other considerations.
Even if the company won’t publicly admit it, Google plans to lean on AMP to curb advertising excesses on media sites. Hence the initial idea to constrain the formats allowed in the AMP ecosystem. Google’s argument is hard to dispute: pages that render four times faster on a smartphone will cause users to view more pages per session and, as a result, to see more ads.
For their part, ad buyers face pressure from creative people who want the splashiest possible formats to fit the (presumed) aspirations of their brand clients. From the mobile user’s perspective, fluffy ads translate into pages that take forever to load, and into a strong incentive to jump elsewhere: about half of users will leave a mobile site that takes more than 6 to 10 seconds to load.
Nevertheless, the advertising community now seems on board with the AMP project. Richard Gingras, head of News at Google and project lead, wrote this in a recent blog post:
[Media] Buyers have also been engaged: Annalect (Omnicom Media Group) is currently reviewing the project (…) Advertising companies that have expressed their intention to support AMP include: Outbrain, AOL, Taboola, OpenX, DoubleClick, AdSense, Pubmatic, Integral Ad Science, Moat, Smart AdServer, Krux, Polar, Nativo and Teads.tv.
I’m not so sure that having internet nuisances such as Outbrain or Taboola on board is such good news. With remarkable consistency, these two disfigure thousands of sites across the world by inserting grotesque promotional or editorial recommendations into news pages. As for Teads.tv, it created many never-before-seen video formats that should be rewarded by ad-blocker companies for their ability to trigger user rejection. It will be interesting to watch how Google contains these companies’ propensity to damage the user experience.
For good measure, let’s also mention that AMP places a bet on native ads integration; Google will propose its own product as well as third party solutions such as the Polar platform.
From Google’s standpoint, the priority given to speed justifies a certain inflexibility in its drive to use AMP as the prime lever to get rid of bad ads. Even if some rapprochement is under way, the ad community seems a bit too slow to respond, while publishers don’t feel they are in a position to pressure it…
To me, the surest way to make progress is a decisive move from the creative advertising side. Instead of fighting for the status quo, agencies should devote more resources to coming up with truly creative formats that fit AMP specs. And publishers should take the user’s side.
In building AMP, Google had to deal with the multiple flavors of analytics asked for (and sometimes actually used) by publishers. News media have a strange relationship with analytics. They pile them on as if sheer quantity guaranteed performance but, unlike e-commerce sites, media are not culturally accustomed to making intensive use of data. Instead of embedding just a couple of analytics code segments in their pages, some sites end up using dozens of them. Such non-choice proves effective at slowing down the rendering of those pages. Smartly, the AMP team decided early on to team up with Chartbeat, probably the preferred analytics tool in the news community. Today, multiple web-metrics providers are on board, including Moat (which partners with Chartbeat), Nielsen, ComScore, Parse.ly, ClickTale, Adobe Analytics, etc. It is unclear, though, how AMP will be able to discourage publishers from installing too many of them.
A major break, through the paywalls
One of the most complicated issues the AMP engineering team had to deal with was the implementation of paywalls. These were insistently requested by publishers ranging from The Wall Street Journal and The New York Times to European outlets such as the FT and Les Echos. Due to the widely distributed architecture of AMP, in which a single page can be replicated hundreds of times across multiple caches, reproducing a metered system, the partial display of a story, or a login flow was quite a task.
The good news is: It’s done.
This week, AMP’s engineers will release version 0.1 of the code that allows publishers to implement paywalls in the AMP ecosystem. This is a critical feature for the economy of quality news media, and an advantage that Facebook’s Instant Articles and Apple News are nowhere near offering.
How does AMP paywall support work? Let’s start with a simplistic sketch summing up today’s situation:
The main drawback is obvious: each time users flush their cookies, they start over for free. That’s why the most stringent paywalls require a registration, which carries two advantages: cheating is more difficult, and the user can be followed from one device to another. Then why do so few publishers require registration? Again: the non-choice syndrome. They want to win on both ends, subscription and advertising.
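The mechanics of today’s cookie-based meter, and why flushing cookies resets it, can be reduced to a toy sketch. Everything here is invented for illustration (the five-article limit, the cookie name, the function names); real meters are server-assisted but keep their state in the same fragile place.

```python
# Toy model of a cookie-based metered paywall. The only state is a counter
# stored client-side, which is exactly why clearing cookies resets the meter.
# The limit and the cookie name are invented for the example.
FREE_ARTICLES = 5

def can_read(cookies: dict) -> bool:
    """Is the reader still under the free-article allowance?"""
    return cookies.get("meter", 0) < FREE_ARTICLES

def record_view(cookies: dict) -> None:
    """Bump the client-side counter after each article view."""
    cookies["meter"] = cookies.get("meter", 0) + 1

jar = {}                       # a fresh browser, or freshly flushed cookies
for _ in range(FREE_ARTICLES):
    assert can_read(jar)
    record_view(jar)
assert not can_read(jar)       # the paywall now drops
jar = {}                       # flush cookies...
assert can_read(jar)           # ...and the meter starts over for free
```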
Now, let’s move to the AMP-implemented paywall. It carries two complications: the caching system, and the dual nature of the documents: AMP pages coming from search, social, and so on, versus non-AMP pages, which are not cached. (In the latter case, the system works as described above.)
So let’s focus on paywalls for AMP pages. Publishers involved in the discussions came up with a list of demands. (Not only is the publisher poor and rarely tech-savvy, but also demanding and sometimes arrogant. Just kidding.)
The requirements were:
— a metered system with a number of free articles;
— [or] limited viewing of a document (as the WSJ does when the main part of an article is masked to non-subscribers);
— the ability to customize the user experience for subscribers, such as removing ads.
Naturally, publishers wanted to have full control of the parameters of the metering system, the login/authentication process, the tracking of users, and the possibility of changing all sorts of settings on a per-document level… “Vaste programme,” General de Gaulle would have quipped.
It goes like this (the chart below is based on numerous discussions I had with the engineering team. It is a bit simplistic.)
Again, for the sake of clarity I passed over several features, including many events on the reader side that occur right within the browser’s AMP Runtime. As you might have noticed, AMP introduces several new components, such as the AMP Reader ID, a unique reader identifier issued by AMP. At some point, the AMP Reader ID will be reconciled with the regular cookie issued by the publisher, to flag a returning reader or a known subscriber, for instance. Also not shown are three elements provided by the publisher: the Access Content Markup, which defines which parts of the document are visible under which circumstances; the Authorization endpoint, which sends back a response stating which parts of the document the reader can view; and the Pingback endpoint, used to send the “view” impression of the document. Go to Github today or later in the week for the complete specs.
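For a flavor of what those three elements look like in a page, here is a sketch based on the amp-access draft published on GitHub. The endpoint URLs are placeholders, and since the spec is at version 0.1 the details may well change; the shape, though, is a JSON config naming the publisher’s endpoints, plus sections gated by expressions over the authorization response.

```html
<!-- Hypothetical publisher endpoints; READER_ID and CANONICAL_URL are
     variables the AMP Runtime substitutes at request time -->
<script id="amp-access" type="application/json">
{
  "authorization": "https://publisher.example/authz?rid=READER_ID&url=CANONICAL_URL",
  "pingback": "https://publisher.example/ping?rid=READER_ID",
  "login": "https://publisher.example/login?rid=READER_ID"
}
</script>

<!-- Access Content Markup: show or hide sections based on the
     authorization endpoint's response -->
<section amp-access="access">Full article, visible to subscribers…</section>
<section amp-access="NOT access">Teaser and subscription offer…</section>
```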
Summing up, when it comes to implementing paid-subscription support, the AMP team has gone way further than expected. What is still a preliminary version of the specs already allows numerous paywall fine-tuning possibilities. And there is more to come.