Now I am become DOI, destroyer of gatekeeping worlds

Digital object identifiers (DOIs) are much sought-after commodities in the world of academic publishing. If you’ve never seen one, a DOI is a unique string associated with a particular digital object (most commonly a publication of some kind) that lets the internet know where to find the stuff you’ve written. For example, say you want to know where you can get a hold of an article titled, oh, say, Designing next-generation platforms for evaluating scientific output: what scientists can learn from the social web. In the real world, you’d probably go to Google, type that title in, and within three or four clicks, you’d arrive at the document you’re looking for. As it turns out, the world of formal resource location is fairly similar to the real world, except that instead of using Google, you go to a website called dx.DOI.org, and then you plug in the string ‘10.3389/fncom.2012.00072’, which is the DOI associated with the aforementioned article. And then, poof, you’re automagically linked directly to the original document, upon which you can gaze in great awe for as long as you feel comfortable.
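(If you’re the scripting type, the same lookup can be done programmatically, because the resolver simply answers with an HTTP redirect to wherever the object currently lives. Here’s a minimal sketch, assuming Node 18 or later with its built-in fetch; it’s purely illustrative and not any kind of official DOI tooling:)

// resolve-doi.ts – ask the DOI resolver where a DOI currently points
const doi = "10.3389/fncom.2012.00072";

async function resolveDoi(d: string): Promise<string | null> {
  // The resolver replies with a 30x redirect; the Location header is the publisher's copy
  const res = await fetch(`https://doi.org/${d}`, { redirect: "manual" });
  return res.headers.get("location");
}

resolveDoi(doi).then((url) => console.log(`${doi} -> ${url}`));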

Historically, DOIs have almost exclusively been issued by official-type publishers: Elsevier, Wiley, PLoS and such. Consequently, DOIs have had a reputation as a minor badge of distinction–probably because you’d traditionally only get one if your work was perceived to be important enough for publication in a journal that was (at least nominally) peer-reviewed. And perhaps because of this tendency to view the presence of a DOI as something like an implicit seal of approval from the Great Sky Guild of Academic Publishing, many journals impose official or unofficial commandments to the effect that, when writing a paper, one shalt only citeth that which hath been DOI-ified. For example, here’s a boilerplate Elsevier statement regarding references (in this case, taken from the Neuron author guidelines):

References should include only articles that are published or in press. For references to in press articles, please confirm with the cited journal that the article is in fact accepted and in press and include a DOI number and online publication date. Unpublished data, submitted manuscripts, abstracts, and personal communications should be cited within the text only.

This seems reasonable enough until you realize that citations that occur “within the text only” aren’t very useful, because they’re ignored by virtually all formal citation indices. You want to cite a blog post in your Neuron paper and make sure it counts? Well, you can’t! Blog posts don’t have DOIs! You want to cite a what? A tweet? That’s just crazy talk! Tweets are 140 characters! You can’t possibly cite a tweet; the citation would be longer than the tweet itself!

The injunction against citing DOI-less documents is unfortunate, because people deserve to get credit for the interesting things they say–and it turns out that they have, on rare occasion, been known to say interesting things in formats other than the traditional peer-reviewed journal article. I’m pretty sure if Mark Twain were alive today, he’d write the best tweets EVER. Well, maybe it would be a tie between Mark Twain and the NIH Bear. But Mark Twain would definitely be up there. And he’d probably write some insightful blog posts too. And then, one imagines that other people would probably want to cite this brilliant 21st-century man of letters named @MarkTwain in their work. Only they wouldn’t be allowed to, you see, because 21st-century Mark Twain doesn’t publish all, or even most, of his work in traditional pre-publication peer-reviewed journals. He’s too impatient to rinse-and-repeat his way through the revise-and-resubmit process every time he wants to share a new idea with the world, even when those ideas are valuable. 21st-century @MarkTwain just wants his stuff out there already where people can see it.

Why does Elsevier hate 21st-century Mark Twain, you ask? I don’t know. But in general, I think there are two main reasons for the disdain many people seem to feel at the thought of allowing authors to freely cite DOI-less objects in academic papers. The first reason has to do with permanence—or lack thereof. The concern here is that if we allowed everyone to cite just any old web page, blog post, or tweet in academic articles, there would be no guarantee that those objects would still be around by the time the citing work was published, let alone several years hence. Which means that readers might be faced with a bunch of dead links. And dead links are not very good at backing up scientific arguments. In principle, the DOI requirement is supposed to act like some kind of safety word that protects a citation from the ravages of time—presumably because having a DOI means the cited work is important enough for the watchful eye of Sauron Elsevier to periodically scan across it and verify that it hasn’t yet fallen off of the internet’s cliffside.

The second reason has to do with quality. Here, the worry is that we can’t just have authors citing any old opinion someone else published somewhere on the web, because, well, think of the children! Terrible things would surely happen if we allowed authors to link to unverified and unreviewed works. What would stop me from, say, writing a paper criticizing the idea that human activity is contributing to climate change, and supporting my argument with “citations” to random pages I’ve found via creative Google searches? For that matter, what safeguard would prevent a brazen act of sockpuppetry in which I cite a bunch of pages that I myself have (anonymously) written? Loosening the injunction against formally citing non-peer-reviewed work seems tantamount to inviting every troll on the internet to a formal academic dinner.

To be fair, I think there’s some merit to both of these concerns. Or at least, I think there used to be some merit to these concerns. Back when the internet was a wee nascent flaky thing winking in and out of existence every time a dial-up modem connection went down, it made sense to worry about permanence (I mean, just think: if we had allowed people to cite GeoCities webpages in published articles, every last one of those citation links would now be dead!). And similarly, back in the days when peer review was an elite sort of activity that could only be practiced by dignified gentlepersons at the cordial behest of a right honorable journal editor, it probably made good sense to worry about quality control. But the merits of such concerns have now largely disappeared, because we now live in a world of marvelous technology, where bits of information cost virtually nothing to preserve forever, and a new post-publication platform that allows anyone to review just about any academic work in existence seems to pop up every other week (cf. PubPeer, PubMed Commons, Publons, etc.). In the modern world, nothing ever goes out of print, and if you want to know what a whole bunch of experts think about something, you just have to ask them about it on Twitter.

Which brings me to this blog post. Or paper. Whatever you want to call it. It was first published on my blog. You can find it–or at least, you could find it at one point in time–at the following URL: http://www.talyarkoni.org/blog/2015/03/04/now-i-am-become-doi-destroyer-of-gates.

Unfortunately, there’s a small problem with this URL: it contains nary a DOI in sight. Really. None of the eleventy billion possible substrings in it look anything like a DOI. You can even scramble the characters if you like; I don’t care. You’re still not going to find one. Which means that most journals won’t allow you to officially cite this blog post in your academic writing. Or any other post, for that matter. You can’t cite my post about statistical power and magical sample sizes; you can’t cite Joe Simmons’ Data Colada post about Mturk and effect sizes; you can’t cite Sanjay Srivastava’s discussion of replication and falsifiability; and so on ad infinitum. Which is a shame, because it’s a reasonably safe bet that there are at least one or two citation-worthy nuggets of information trapped in some of those blog posts (or millions of others), and there’s no reason to believe that these nuggets must all have readily-discoverable analogs somewhere in the “formal” scientific literature. As the Elsevier author guidelines would have it, the appropriate course of action in such cases is to acknowledge the source of an idea or finding in the text of the article, but not to grant any other kind of formal credit.

Now, typically, this is where the story would end. The URL can’t be formally cited in an Elsevier article; end of story. BUT! In this case, the story doesn’t quite end there. A strange thing happens! A short time after it appears on my blog, this post also appears–in virtually identical form–on something called The Winnower, which isn’t a blog at all, but rather, a respectable-looking alternative platform for scientific publication and evaluation.

Even more strangely, on The Winnower, a mysterious-looking set of characters appear alongside the text. For technical reasons, I can’t tell you what the set of characters actually is (because it isn’t assigned until this piece is published!). But I can tell you that it starts with “10.15200/winn”. And I can also tell you what it is: It’s a DOI! It’s one bona fide free DOI, courtesy of The Winnower. I didn’t have to pay for it, or barter any of my services for it, or sign away any little pieces of my soul to get it*. I just installed a WordPress plugin, pressed a few buttons, and… poof, instant DOI. So now this is, proudly, one of the world’s first N (where N is some smallish number probably below 1000) blog posts to dress itself up in a nice DOI (Figure 1). Presumably because it’s getting ready for a wild night out on the academic town.

sticks and stones may break my bones, but DOIs make me feel pretty

Figure 1. Effects of assigning DOIs to blog posts: an anthropomorphic depiction. (A) A DOI-less blog post feels exposed and inadequate; it envies its more reputable counterparts and languishes in a state of torpor and existential disarray. (B) Freshly clothed in a newly-minted DOI, the same blog post feels confident, charismatic, and alert. Brimming with energy, it eagerly awaits the opportunity to move mountains and reshape scientific discourse. Also, it has longer arms.

Does the mere fact that my blog post now has a DOI actually change anything, as far as the citation rules go? I don’t know. I have no idea if publishers like Elsevier will let you officially cite this piece in an article in one of their journals. I would guess not, but I strongly encourage you to try it anyway (in fact, I’m willing to let you try to cite this piece in every paper you write for the next year or so—that’s the kind of big-hearted sacrifice I’m willing to make in the name of science). But I do think it solves both the permanence and quality control issues that are, in theory, the whole reason for journals having a no-DOI-no-shoes-no-service policy in the first place.

How? Well, it solves the permanence problem because The Winnower is a participant in the CLOCKSS archive, which means that if The Winnower ever goes out of business (a prospect that, let’s face it, became a little bit more likely the moment this piece appeared on their site), this piece will be immediately, freely, and automatically made available to the worldwide community in perpetuity via the associated DOI. So you don’t need to trust the safety of my blog—or even The Winnower—any more. This piece is here to stay forever! Rejoice in the cheapness of digital information and librarians’ obsession with archiving everything!

As for the quality argument, well, clearly, this here is not what you would call a high-quality academic work. But I still think you should be allowed to cite it wherever and whenever you want. Why? For several reasons. First, it’s not exactly difficult to determine whether or not it’s a high-quality academic work—even if you’re not willing to exercise your own judgment. When you link to a publication on The Winnower, you aren’t just linking to a paper; you’re also linking to a review platform. And the reviews are very prominently associated with the paper. If you dislike this piece, you can use the comment form to indicate exactly why you dislike it (if you like it, you don’t need to write a comment; instead, send an envelope stuffed with money to my home address).

Second, it’s not at all clear that banning citations to non-prepublication-reviewed materials accomplishes anything useful in the way of quality control. The reliability of the peer-review process is sufficiently low that there is simply no way for it to consistently sort the good from the bad. The problem is compounded by the fact that rejected manuscripts are rarely discarded forever; typically, they’re quickly resubmitted to another journal. The bibliometric literature shows that it’s possible to publish almost anything in the peer-reviewed literature given enough persistence.

Third, I suspect—though I have no data to support this claim—that a worldview that treats having passed peer review and/or receiving a DOI as markers of scientific quality is actually counterproductive to scientific progress, because it promotes a lackadaisical attitude on the part of researchers. A reader who believes that a claim is significantly more likely to be true in virtue of having a DOI is a reader who is slightly less likely to take the extra time to directly evaluate the evidence for that claim. The reality, unfortunately, is that most scientific claims are wrong, because the world is complicated and science is hard. Pretending that there is some reasonably accurate mechanism that can sort all possible sources into reliable and unreliable buckets—even to a first order of approximation—is misleading at best and dangerous at worst. Of course, I’m not suggesting that you can’t trust a paper’s conclusions unless you’ve read every work it cites in detail (I don’t believe I’ve ever done that for any paper!). I’m just saying that you can’t abdicate the responsibility of evaluating the evidence to some shapeless, anonymous mass of “reviewers”. If I decide not to chase down the Smith & Smith (2007) paper that Jones & Jones (2008) cite as critical support for their argument, I shouldn’t be able to turn around later and say something like “hey, Smith & Smith (2007) was peer reviewed, so it’s not my fault for not bothering to read it!”

So where does that leave us? Well, if you’ve read this far, and agree with most or all of the above arguments, I hope I can convince you of one more tiny claim. Namely, that this piece represents (a big part of) the future of academic publishing. Not this particular piece, of course; I mean the general practice of (a) assigning unique identifiers to digital objects, (b) preserving those objects for all posterity in a centralized archive, and (c) allowing researchers to cite any and all such objects in their work however they like. (We could perhaps also add (d) working very hard to promote centralized “post-publication” peer review of all of those objects–but that’s a story for another day.)

These are not new ideas, mind you. People have been calling for a long time for a move away from a traditional gatekeeping-oriented model of pre-publication review and towards more open publication and evaluation models. These calls have intensified in recent years; for instance, in 2012, a special topic in Frontiers in Computational Neuroscience featured 18 different papers that all independently advocated for very similar post-publication review models. Even the actual attachment of DOIs to blog posts isn’t new; as a case in point, consider that C. Titus Brown—in typical pioneering form—was already experimenting with ways to automatically DOIfy his blog posts via FigShare way back in the same dark ages of 2012. What is new, though, is the emergence and widespread adoption of platforms like The Winnower, FigShare, or ResearchGate that make it increasingly easy to assign a DOI to academically-relevant works other than traditional journal articles. Thanks to such services, you can now quickly and effortlessly attach a DOI to your open-source software packages, technical manuals and white papers, conference posters, or virtually any other kind of digital document.

Once such efforts really start to pick up steam—perhaps even in the next two or three years—I think there’s a good chance we’ll fall into a positive feedback loop, because it will become increasingly clear that for many kinds of scientific findings or observations, there’s simply nothing to be gained by going through the cumbersome, time-consuming conventional peer review process. To the contrary, there will be all kinds of incentives for researchers to publish their work as soon as they feel it’s ready to share. I mean, look, I can write blog posts a lot faster than I can write traditional academic papers. Which means that if I write, say, one DOI-adorned blog post a month, my Google Scholar profile is going to look a lot bulkier a year from now, at essentially no extra effort or cost (since I’m going to write those blog posts anyway!). In fact, since services like The Winnower and FigShare can assign DOIs to documents retroactively, you might not even have to wait that long. Check back this time next week, and I might have a dozen new indexed publications! And if some of these get cited—whether in “real” journals or on other indexed blog posts—they’ll then be contributing to my citation count and h-index too (at least on Google Scholar). What are you going to do to keep up?

Now, this may all seem a bit off-putting if you’re used to thinking of scientific publication as a relatively formal, laborious process, where two or three experts have to sign off on what you’ve written before it gets to count for anything. If you’ve grown comfortable with the idea that there are “real” scientific contributions on the one hand, and a blooming, buzzing confusion of second-rate opinions on the other, you might find the move to suddenly make everything part of the formal record somewhat disorienting. It might even feel like some people (like, say, me) are actively trying to game the very system that separates science from tabloid news. But I think that’s the wrong perspective. I don’t think anybody—certainly not me—is looking to get rid of peer review. What many people are actively working towards are alternative models of peer review that will almost certainly work better.

The right perspective, I would argue, is to embrace the benefits of technology and seek out new evaluation models that emphasize open, collaborative review by the community as a whole instead of closed pro forma review by two or three semi-randomly selected experts. We now live in an era where new scientific results can be instantly shared at essentially no cost, and where sophisticated collaborative filtering algorithms and carefully constructed reputation systems can potentially support truly community-driven, quantitatively-grounded open peer review on a massive scale. In such an environment, there are few legitimate excuses for sticking with archaic publication and evaluation models—only the familiar, comforting pull of the status quo. Viewed in this light, using technology to get around the limitations of old gatekeeper-based models of scientific publication isn’t gaming the system; it’s actively changing the system—in ways that will ultimately benefit us all. And in that context, the humble self-assigned DOI may ultimately become—to liberally paraphrase Robert Oppenheimer and the Bhagavad Gita—one of the destroyers of the old gatekeeping world.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/o8F2poSLHQM/now-i-am-become-doi-destroyer-of-gatekeeping-worlds

Original article

Warning! Linux Mint hacked — operating system compromised


Linux Mint is one of the best distros around, but if you’ve installed it recently you might have done so using a compromised ISO image.

The Linux Mint team today reveals that hackers made a modified Linux Mint ISO with a backdoor in it, and managed to hack the Mint website so it pointed to this bad version.

There is some good news, however: the Linux Mint team managed to discover the intrusion and take action quickly. The site is currently down.

As far as the team knows, only one edition is affected — Linux Mint 17.3 Cinnamon. If you downloaded a different release or version, or downloaded the OS via torrent or a different direct HTTP link, you should be fine.

Also, the compromised version was only up on the site on 20 February — so if you downloaded Mint before or after that date, you don’t need to worry.

Mint says the hacked ISOs are hosted on 5.104.175.212 and the backdoor connects to absentvodka.com. Both are in Sofia, Bulgaria. The team is investigating the hack, but the reason for it remains a mystery for now.

“What we don’t know is the motivation behind this attack,” Mint admits. “If more efforts are made to attack our project and if the goal is to hurt us, we’ll get in touch with authorities and security firms to confront the people behind this”.

If you’re worried you might have downloaded and installed a compromised version, Mint advises you to check by following these instructions (a scripted version of the checks appears after the steps below):

If you still have the ISO file, check its MD5 signature with the command “md5sum yourfile.iso” (where yourfile.iso is the name of the ISO).

The valid signatures are below:

6e7f7e03500747c6c3bfece2c9c8394f linuxmint-17.3-cinnamon-32bit.iso

e71a2aad8b58605e906dbea444dc4983 linuxmint-17.3-cinnamon-64bit.iso

30fef1aa1134c5f3778c77c4417f7238 linuxmint-17.3-cinnamon-nocodecs-32bit.iso

3406350a87c201cdca0927b1bc7c2ccd linuxmint-17.3-cinnamon-nocodecs-64bit.iso

df38af96e99726bb0a1ef3e5cd47563d linuxmint-17.3-cinnamon-oem-64bit.iso

If you still have the burnt DVD or USB stick, boot a computer or a virtual machine offline (turn off your router if in doubt) with it and let it load the live session.

Once in the live session, if there is a file in /var/lib/man.cy, then this is an infected ISO.
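If you would rather script the first check than eyeball hashes, here is a minimal sketch that compares an ISO’s MD5 against the published list above. It is illustrative only (not part of the Linux Mint advisory) and assumes Node 18+ with its built-in crypto and fs modules:

// verify-mint-iso.ts – compare a downloaded ISO's MD5 against the list above
import { createHash } from "node:crypto";
import { createReadStream } from "node:fs";
import { basename } from "node:path";

// Known-good MD5 sums published by the Linux Mint team (copied from the list above)
const GOOD_MD5: Record<string, string> = {
  "linuxmint-17.3-cinnamon-32bit.iso": "6e7f7e03500747c6c3bfece2c9c8394f",
  "linuxmint-17.3-cinnamon-64bit.iso": "e71a2aad8b58605e906dbea444dc4983",
  "linuxmint-17.3-cinnamon-nocodecs-32bit.iso": "30fef1aa1134c5f3778c77c4417f7238",
  "linuxmint-17.3-cinnamon-nocodecs-64bit.iso": "3406350a87c201cdca0927b1bc7c2ccd",
  "linuxmint-17.3-cinnamon-oem-64bit.iso": "df38af96e99726bb0a1ef3e5cd47563d",
};

// Stream the file through an MD5 hash so large ISOs don't need to fit in memory
function md5Of(path: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const hash = createHash("md5");
    createReadStream(path)
      .on("data", (chunk) => hash.update(chunk))
      .on("end", () => resolve(hash.digest("hex")))
      .on("error", reject);
  });
}

const file = process.argv[2]; // e.g. linuxmint-17.3-cinnamon-64bit.iso
md5Of(file).then((sum) => {
  const expected = GOOD_MD5[basename(file)];
  console.log(expected === sum ? "MD5 matches the published value" : `MISMATCH: got ${sum}`);
});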

You can read more about the hack here.

Photo credit: v.gi / Shutterstock


Original URL: http://feeds.betanews.com/~r/bn/~3/or503JdULh8/

Original article

PeerTweet – Decentralized feeds using BitTorrent’s DHT network

README.md

Decentralized feeds using BitTorrent’s DHT. Idea from Arvid and The_8472 “DHT RSS feeds” http://libtorrent.org/dht_rss.html

Screenshot

PeerTweet

Abstract

BitTorrent’s DHT is probably one of the most resilient and censorship-resistant networks on the internet. PeerTweet uses this network to allow users to broadcast tweets to anyone who is listening. When you start PeerTweet, it generates a hash such as @33cwte8iwWn7uhtj9MKCs4q5Ax7B, which plays the role of your Twitter username (e.g. @lmatteis). The difference is that you have full control over what can be posted, because only you own the private key associated with that address. Furthermore, thanks to the DHT, what you post cannot be stopped by any government or institution.
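For a rough sense of where such an address comes from: in BEP44 (see below), the lookup target for a mutable item is a SHA-1 hash derived from the publisher’s public key, so the @... handle is essentially a fingerprint of a keypair that only you hold. The sketch below is illustrative only; the exact key type and handle encoding PeerTweet uses are assumptions here, not taken from this README:

// identity-sketch.ts – an address as a fingerprint of a public key (illustrative only)
import { generateKeyPairSync, createHash } from "node:crypto";

// Assumption: an ed25519 keypair. The private half never leaves your machine;
// it is what lets only you sign (and therefore publish) items under this address.
const { publicKey } = generateKeyPairSync("ed25519");
const rawPub = publicKey.export({ type: "spki", format: "der" });

// BEP44 addresses mutable items by the SHA-1 of the publisher's public key
// (plus an optional salt); rendering part of that hash as an @handle is an
// assumption made here for illustration, not PeerTweet's documented encoding.
const target = createHash("sha1").update(rawPub).digest("hex");
console.log(`@${target.slice(0, 28)}`);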

Once you find other PeerTweet addresses you trust (and that are not spam), you can follow them. This configures your client to store those users’ tweets and to rebroadcast them to the DHT every once in a while to keep their feeds alive. This cooperation among followers is what keeps feeds alive in the DHT network. The PeerTweet protocol also publishes your actions, such as I just followed @919c.., I just liked @9139.., and I just retweeted @5789... This lets new users find other addresses they can trust: if I trust the user @6749.. and they’re following @9801.., then perhaps I can mark @9801.. as not spam. Publicly tweeting about your actions also enables powerful future crawling and analysis of this social graph.

Alpha quality: you probably only want to use this if you like sending pull requests that fix things 🙂

How does it work?

PeerTweet follows most of the implementation guidelines provided by the DHT RSS feed proposal http://libtorrent.org/dht_rss.html. We implemented it on top of the current BEP44 proposal, which provides get() and put() functionality over the DHT network. This means that, rather than only using the DHT to announce which torrents one is currently downloading, we can also use it to put and get small amounts of data (roughly 1000 bytes).
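As a rough illustration of what that buys us, here is a sketch of publishing and fetching a feed head. It is not PeerTweet’s actual code: the put and get parameters stand in for whatever BEP44-capable DHT client is used, and only the ~1000-byte limit and the JSON shapes described below come from this README.

// bep44-sketch.ts – publishing and fetching a small JSON item over a BEP44 DHT.
// `put` and `get` are placeholders for the DHT client's mutable-put / get calls.
type Put = (value: Buffer, seq: number) => Promise<Buffer>; // returns the target hash
type Get = (target: Buffer) => Promise<Buffer>;

async function publishHead(put: Put, head: object, seq: number): Promise<Buffer> {
  const value = Buffer.from(JSON.stringify(head));
  // BEP44 caps stored values at roughly 1000 bytes, so items must stay tiny
  if (value.length > 1000) throw new Error("item too large for BEP44");
  return put(value, seq); // followers look up the returned hash to find updates
}

async function fetchHead(get: Get, target: Buffer): Promise<object> {
  return JSON.parse((await get(target)).toString());
}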

PeerTweet differentiates between two types of items:

  1. Your feed head. This is the only mutable item of your feed, and it is what your followers use to download your items and find updates. Your head’s hash is what your followers use to learn about updates – it’s your identity and can be used to let others know about your feed (similar to your @lmatteis handle). The feed head is roughly structured as follows:

    {
      "d": ,
      "next": ,
      "n": ,
      "a": ,
      "i": 
    }
    
  2. Your feed items. These are immutable items which contain your actual tweets and are structured:

    {
      "d": ,
      "next": ,
      "t": 
    }
    

Skip lists

The reason items have multiple pointers to other items in the list is to allow for parallel lookups. Our skip list implementation differs from regular implementations in that it is targeted at network lookups: each item contains 4 pointers, so that when we receive an item we can issue 4 get() requests in parallel for other items in the list. This is crucial for accessing users’ feeds in a timely manner, because DHT lookups have unpredictable response times.
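A sketch of that lookup pattern (illustrative only; the item shape follows the structures above, and the get parameter stands in for the client’s BEP44 lookup):

// parallel-walk.ts – fetch a feed by following the "next" pointers of each item,
// issuing each layer of lookups in parallel (illustrative sketch, not real code).
interface FeedItem { next?: string[]; t?: string } // pointers to further items, tweet text

async function fetchFeed(
  get: (hash: string) => Promise<FeedItem>,
  headNext: string[],
  limit = 50,
): Promise<FeedItem[]> {
  const seen = new Set<string>();
  const items: FeedItem[] = [];
  let frontier = headNext;
  while (frontier.length > 0 && items.length < limit) {
    const batch = frontier.filter((h) => !seen.has(h));
    batch.forEach((h) => seen.add(h));
    const results = await Promise.all(batch.map(get)); // parallel gets for this layer
    items.push(...results);
    frontier = results.flatMap((item) => item.next ?? []);
  }
  return items;
}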

Following

When you follow someone, you’re essentially telling your client to download their feed and republish it every so often. The DHT is not a persistent network: items drop out of it after roughly 30 minutes. To keep things alive, having many followers is crucial for the uptime of your feed. Alternatively, you can run a server somewhere 24/7 that keeps your feed alive by republishing its items every 30 minutes.
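A trivial sketch of the keep-alive side of following (republishAll is a hypothetical routine that re-puts a followed feed’s stored items into the DHT; it is not part of this codebase):

// keepalive-sketch.ts – re-put a followed feed's items before they expire (~30 min)
const REPUBLISH_INTERVAL_MS = 30 * 60 * 1000;

function keepAlive(republishAll: () => Promise<void>): NodeJS.Timeout {
  return setInterval(() => {
    republishAll().catch((err) => console.error("republish failed:", err));
  }, REPUBLISH_INTERVAL_MS);
}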

Install dependencies.

Installing native modules

The app comes with some native bindings. I used this code to make it run on my computer:

Source: https://github.com/atom/electron/blob/master/docs/tutorial/using-native-node-modules.md

npm install --save-dev electron-rebuild

# Every time you run "npm install", run this
./node_modules/.bin/electron-rebuild

# On Windows if you have trouble, try:
.\node_modules\.bin\electron-rebuild.cmd

Run

Run these two commands simultaneously in different console tabs.

$ npm run hot-server
$ npm run start-hot

Note: requires a node version >= 4 and an npm version >= 2.

Toggle Chrome DevTools

  • OS X: Cmd Alt I or F12
  • Linux: Ctrl Shift I or F12
  • Windows: Ctrl Shift I or F12

See electron-debug for more information.

Toggle Redux DevTools

See redux-devtools-dock-monitor for more information.

Externals

If you use any 3rd party libraries which can’t be built with webpack, you must list them in your webpack.config.base.js

externals: [
  // put your node 3rd party libraries which can't be built with webpack here (mysql, mongodb, and so on..)
]

You can find those lines in the file.

CSS Modules support

Import CSS files as CSS modules using the .module.css extension.

Package

To package apps for all platforms:

Options

  • --name, -n: Application name (default: ElectronReact)
  • --version, -v: Electron version (default: latest version)
  • --asar, -a: asar support (default: false)
  • --icon, -i: Application icon
  • --all: pack for all platforms

Use electron-packager to pack your app with the --all option for the darwin (OS X), linux and win32 (Windows) platforms. After the build, you will find the packages in the release folder. Otherwise, you will only find the one for your OS.

The test, tools, and release folders, as well as devDependencies in package.json, will be ignored by default.

Default Ignore modules

Some modules’ peerDependencies are added to the ignore option by default to reduce application size.

  • babel-core is required by babel-loader and its size is ~19 MB
  • node-libs-browser is required by webpack and its size is ~3MB.

Note: If you want to use any of the above modules at runtime, for example require('babel/register'), you should move them from devDependencies to dependencies.

Building windows apps from non-windows platforms

Please check out Building windows apps from non-windows platforms.

Native-like UI

If you want a native-like user interface (OS X El Capitan and Windows 10), react-desktop may be a perfect fit for you.

Maintainers

This is a fork of the https://github.com/chentsulin/electron-react-boilerplate project.

License

MIT


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/NsE3hHeUdM4/peer-tweet

Original article

Archery company sues LARPer over patents, then files gag motion to silence him

LARP archers accused of patent infringement.

When Jordan Gwyther started Larping.org, a website that promotes his favorite hobby, he didn’t expect it would lead to him being sued for patent infringement over foam arrows. And when he spoke out about the lawsuit, neither he nor his attorney saw what was coming next: the patent-owner filed papers in court last week asking for a temporary restraining order (TRO) that would keep Gwyther quiet.

Live-action role-playing (aka LARPing) is an increasingly popular pastime in which ordinary folks transform into medieval weekend warriors, donning armor and using foam weapons to duke it out in local fields and parks.

Gwyther founded Larping.org five years ago as a community hub where LARPers could talk to each other and find local events. Over time, he also started selling certain items useful to LARPers, like leather and metal armor, latex weapons, and foam arrows. “It’s a hobby that grew into a side business,” Gwyther told Ars in a phone interview this morning.

He doesn’t make arrows; he simply buys them from a German company, imports them to the US, and sells them to LARPers. The arrows, sold through two websites, amount to a bit more than $2,000 per month, according to an affidavit—hardly enough to allow Gwyther, a father of two young kids, to quit his day job as a youth pastor at a Seattle church. But the sales apparently infuriated a competitor, Indiana-based Global Archery, which sells its own foam arrows under its trademarked brand name Archery Tag.

In October, Global Archery sued Gwyther for patent and trademark infringement. At first, Gwyther tried to fight the lawsuit (PDF) on his own.

But when Global Archery’s lawyer boasted to Gwyther’s attorney that the company has a $150,000 budget for its litigation, Gwyther knew he’d need help. Global Archery’s budget “will ensure their victory unless I can raise funds to defend myself,” he said in a YouTube video published last week. “As you can guess, I don’t have $150,000.” Gwyther created a GoFundMe campaign called “Save LARP archery,” and in a video, he laid out his view that the lawsuit could threaten LARP archers everywhere.

Global Archery has asked the judge overseeing the case to shut down Gwyther’s plea for help. In a motion (PDF) filed last week, Global says Gwyther should be slapped with a temporary restraining order that would force him to “cease issuing any press releases, advertisements, letters, promotional materials, articles, and oral or other written statements including posts on social media sites such as Gofundme, YouTube, Facebook, and Twitter, that falsely… imply that this action was initiated and is being prosecuted to interfere with the general public’s ability to engage in live action role playing (LARP).”

The motion also demands that Gwyther cough up all the cash he’s raised from GoFundMe, just over $4,600 so far (he’s seeking $100,000 to fight the case).

Global Archery’s attempt to get a gag order has led the Electronic Frontier Foundation to file an amicus brief (PDF) in the case. “The First Amendment guarantees that even patent owners are subject to the slings and arrows of public criticism,” writes EFF lawyer Daniel Nazer in a blog post explaining the EFF’s decision to get involved.

Reached by telephone this morning, a Global Archery representative wouldn’t comment on the EFF brief. “On the advice of legal counsel, we’re just not commenting on that,” said the man, who identified himself as John but declined to give his full name. “That’s good enough,” he said before hanging up.

In a press release, Global Archery says it actively supports the LARP community and that Gwyther’s statements “were specifically calculated to serve as a rallying call for the LARPing community to turn against Global.” The release states:

The term ‘Larp Arrows’ used in Global’s complaint was a term of art used to describe the specific arrows involved… and certainly was not used to include any and all arrows used by persons in the LARPing community…

One must ask what is the true intention behind the Gwyther Statements. Is it alerting the LARPing community about a megalomaniac corporation/ “Patent Troll” trying to crush an individual who is simply engaging in his “hobby”? Or is it about playing on the LARPing community’s emotions so that Mr. Gwyther and his business can obtain a financial benefit at Global’s expense? Global believes it is the later.

Gwyther stands by his belief that Global could be planning to launch more lawsuits and that the company’s broad patent on foam arrows threatens the whole LARPing community.

“I didn’t know a patent for something like that even existed,” he told Ars. “Foam-tipped arrows have been used and sold in our hobby for a very long time.”

“Ruin LARP as we know it”

Global Archery’s lawsuit throws several claims at Gwyther, including allegations of patent and trademark infringement, tortious interference (for making sales calls to Global Archery customers), and false advertising (for buying Google ads claiming his arrows are “Better than Archery Tag!”)

The earlier of Global’s two patents on arrows, numbered 8,449,413 and 8,932,159, has a filing date of 2011—well after German arrow-maker iDV, Gwyther’s supplier, started manufacturing its product.

On his GoFundMe page, Gwyther says that his use of the trademark “archery tag” to buy Google ads is perfectly legal. (Use of trademarked keywords in advertising has been heavily litigated over the years, and it’s now a common practice.) As to the statements about his arrows being superior, “I also believe Coke is better than Pepsi,” Gwyther writes. “This is a statement of opinion and is used all the time in marketing.”

The lawsuit also accuses Gwyther of selling his arrows using the “Archery Tag” trademark on Amazon. Gwyther says those claims are straight-up wrong and based on screenshots of other sellers. He doesn’t sell on Amazon at all, he said.

Gwyther says he gets calls from summer camps and other groups seeking to buy arrows for non-LARP purposes, and he’s happy to fulfill their orders. They play games similar to laser tag, which they describe with terms like “archery tag,” “foam tag,” or “dodge bow,” he says.

In his video, Gwyther says he’s reaching out for help because the lawsuit could “ruin LARP in North America as we know it.” If Global succeeds, the company “will have created a legal precedent to enforce his patent against other distributors and resellers of LARP arrows in North America… this would mean if we lose this case, archery in LARP in North America might end, and at the very least be changed forever.”

Jordan Gwyther’s plea to “save LARP archery in North America.”

Gwyther thinks he’s been singled out as a small business person who could be overwhelmed in court before Global goes on to pursue larger targets.

“Reckless behavior”

Gwyther’s plea for assistance has clearly gotten under the skin of Global Archery and its attorney. Two days after Gwyther’s video went up, the company filed a motion (PDF) for a temporary restraining order and injunction that would force him to keep quiet about the case.

“Gwyther has deliberately mislead consumers regarding the nature and scope of this action” causing “irreparable damage to Global,” the company’s lawyer claims. The judge should issue a restraining order and injunction “preventing Gwyther from continuing this reckless behavior.”

“Gwyther’s conduct has created havoc and confusion in the marketplace,” Global’s lawyer states. “This was not an ‘accident’ or an ‘innocent’ mistake, but rather, [it] was a calculated maneuver done by Gwyther to unfairly damage Global’s reputation, divert sales of Global’s products to Gwyther, and to dupe the general public into funding Gwyther’s defense of this action.”

The company says it has been “inundated with hateful phone calls, e-mails, and posts on its social media site,” including one expressing hope that people at Global are “brutally murdered.”

Gwyther said the threats are “deplorable,” and he hopes it wasn’t LARPers that did it.

“It deeply saddens me that anyone has done that,” he told Ars. “I don’t want that for them. In a hobby where we are often ridiculed and made fun of, we know what it’s like for people to treat us poorly.”


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/7yh7WNA10Uw/

Original article

Self-hosted Git service built on Go

README.md

Current version: 0.8.39

NOTICES

简体中文 (Simplified Chinese)

Purpose

The goal of this project is to make the easiest, fastest, and most painless way of setting up a self-hosted Git service. With Go, this can be done with an independent binary distribution across ALL platforms that Go supports, including Linux, Mac OS X, Windows and ARM.

Overview

  • Please see the Documentation for common usages and change log.
  • See the Trello Board to follow the development team.
  • Want to try it before doing anything else? Do it online!
  • Having trouble? Get help with Troubleshooting.
  • Want to help with localization? Check out the guide!

Features

  • Activity timeline
  • SSH and HTTP/HTTPS protocols
  • SMTP/LDAP/Reverse proxy authentication
  • Reverse proxy with sub-path
  • Account/Organization/Repository management
  • Repository/Organization webhooks (including Slack)
  • Repository Git hooks/deploy keys
  • Repository issues, pull requests and wiki
  • Add/Remove repository collaborators
  • Gravatar and custom source
  • Mail service
  • Administration panel
  • Supports MySQL, PostgreSQL, SQLite3 and TiDB (experimental)
  • Multi-language support (14 languages)

System Requirements

  • A cheap Raspberry Pi is powerful enough for basic functionality.
  • 2 CPU cores and 1GB RAM would be the baseline for teamwork.

Browser Support

  • Please see Semantic UI for specific versions of supported browsers.
  • The officially supported minimum resolution is 1024*768; the UI may still look right at smaller sizes, but there are no promises and no fixes.

Installation

Make sure you install the prerequisites first.

There are 5 ways to install Gogs:

Tutorials

Screencasts

Deploy to Cloud

Software and Service Support

Product Support

Acknowledgments

  • Router and middleware mechanism of Macaron.
  • System Monitor Status is inspired by GoBlog.
  • Thanks to lavachen and Rocker for designing the logo.
  • Thanks to Crowdin for providing an open source translation plan.
  • Thanks to DigitalOcean for hosting the home and demo sites.
  • Thanks to KeyCDN for providing CDN service.

Contributors

License

This project is under the MIT License. See the LICENSE file for the full license text.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/tp_vxoxZNF0/gogs

Original article

Timeline Of Events: Linux Mint Website Hack That Distributed Malicious ISOs

An anonymous reader writes: The Linux Mint website was hacked last night and was pointing to malicious ISOs that contained an IRC bot known as TSUNAMI, used as part of an IRC DDoSing botnet. While the Linux Mint team says they were hacked via their WordPress site, security experts have discovered that their phpBB forum database was put up for sale on the Dark Web at around the same time as the hack. Also, it seems that after the Linux Mint team cleaned their website, the hackers reinfected it, which caused the developers to take it down altogether.




Original URL: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/EoQNe6LgRvA/timeline-of-events-linux-mint-website-hack-that-distributed-malicious-isos

Original article

Universities and the open web

I mentioned in an earlier post that I visited the MIT Media Lab on the 11th of Feb. It was a great trip, just one day back and forth. I wanted to see the Media Lab with my own eyes, and reconnect with two longtime friends who are working there now.

Ethan Zuckerman got involved in the web very early as the lead developer at Tripod. I worked with him at Berkman, where he, along with Rebecca MacKinnon, started the incredible Global Voices project. Ethan is now the director of the Center for Civic Media at MIT. 

And Joi Ito, one of the earliest bloggers, a good friend, discussion leader at BloggerCon, has been the director of the Media Lab since 2011. 

I wanted to reconnect because the Media Lab is in an incredible position to help the open web, especially because these two pioneers, Ethan and Joi, are there.

Lots of emails, back and forth

Our meeting was a whirlwind, at least partially because my train from NYC was 45 minutes late (!), but there’s only so much you can get done in a face-to-face meeting in one day.

So we’ve been going back and forth via email since the meeting, and it’s been getting pretty interesting! I want to now surface at least part of what we’ve been talking about.

First, Joi wrote a post about our meeting and the open web. Please read.

It’s not only personally flattering, but you can see how thoroughly the blogging ethic flows through Joi. Err on the side of disclosure, saying what you really see, knowing that it will be received at face value. That’s blogging at its best, imho. 

Where to go?

In one of the follow-up emails I listed three things we could do to help the open web reboot. I had written about all these ideas before, in some cases, a number of times. 

  1. Every university should host at least one open source project.
  2. Every news org should build a community of bloggers, starting with a river of sources. 
  3. Every student journalist should learn how to set up and run a server.

These ideas came out of my work in booting up blogging and podcasting, and working successfully at Berkman to get the first academic blogging community going. Had I continued that work, this is where we would go.


Original URL: http://scripting.com/liveblog/users/davewiner/2016/02/21/1045.html

Original article
