Multiple assignees on Issues and Pull requests



Issues and pull requests often need to be assigned to more than one person. Multiple Assignees are now supported, allowing up to 10 people to be added to a given Issue or Pull request.

Using multiple assignees

Assignees can be added and removed on the web UI by clicking on the assignees dropdown in the sidebar and adding multiple users.


Check out the documentation for more information on this feature.

Need help or found a bug? Contact us.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/gb_o9HoZSdA/2178-multiple-assignees-on-issues-and-pull-requests


All European scientific publicly funded articles to be freely accessible by 2020

News item | 27-05-2016 | 13:57

All scientific articles in Europe must be freely accessible as of 2020. EU member states want to achieve optimal reuse of research data. They are also looking into a European visa for foreign start-up founders.

Photo: Tineke Dijkstra 

And, according to the new Innovation Principle, new European legislation must take account of its impact on innovation. These are the main outcomes of the meeting of the Competitiveness Council in Brussels on 27 May.

Sharing knowledge freely

Under the presidency of Netherlands State Secretary for Education, Culture and Science Sander Dekker, the EU ministers responsible for research and innovation decided unanimously to take these significant steps. Mr Dekker is pleased that these ambitions have been translated into clear agreements to maximise the impact of research. ‘Research and innovation generate economic growth and more jobs and provide solutions to societal challenges,’ the state secretary said. ‘And that means a stronger Europe. To achieve that, Europe must be as attractive as possible for researchers and start-ups to locate here and for companies to invest. That calls for knowledge to be freely shared. The time for talking about open access is now past. With these agreements, we are going to achieve it in practice.’   

Open access

Open access means that scientific publications on the results of research supported by public and public-private funds must be freely accessible to everyone. That is not yet the case. The results of publicly funded research are currently not accessible to people outside universities and knowledge institutions. As a result, teachers, doctors and entrepreneurs do not have access to the latest scientific insights that are so relevant to their work, and universities have to take out expensive subscriptions with publishers to gain access to publications.

Reusing research data

From 2020, all scientific publications on the results of publicly funded research must be freely available. It must also be possible to reuse research data optimally. To achieve that, the data must be made accessible, unless there are well-founded reasons for not doing so, for example intellectual property rights or security or privacy issues.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/06hsa7IbbJk/all-european-scientific-articles-to-be-freely-accessible-by-2020


How the ArXiv Decides What’s Science

Where do we draw the boundary between science and pseudoscience? It’s a question philosophers have debated for as long as there’s been science – and last time I looked they hadn’t made much progress. When you ask a sociologist, the answer is normally a variant of: Science is what scientists do. So what do scientists do?

You might have heard that scientists use what’s called the scientific method, a virtuous cycle of generating and testing hypotheses which supposedly separates the good ideas from the bad ones. But that’s only part of the story because it doesn’t tell you where the hypotheses come from to begin with.

Science doesn’t operate with randomly generated hypotheses for the same reason natural selection doesn’t work with randomly generated genetic codes: it would be highly inefficient and any attempt to optimize the outcome would be doomed to fail. What we do instead is heavily filter hypotheses, considering only those that are small mutations of ideas that have previously worked. Scientists like to be surprised, but not too much.

Indeed, if you look at the scientific enterprise today, almost all of its institutionalized procedures are methods not for testing hypotheses, but for filtering hypotheses: Degrees, peer reviews, scientific guidelines, reproduction studies, measures for statistical significance, and community quality standards. Even the use of personal recommendations works to that end. In theoretical physics in particular the prevailing quality standard is that theories need to be formulated in mathematical terms. All these are requirements which have evolved over the last two centuries – and they have proved to work very well. It’s only smart to use them.

But the business of hypothesis filtering is a tricky one and it doesn’t proceed by written rules. It is a method that has developed through social demarcation, and as such it has its pitfalls. Humans are prone to social biases and every once in a while an idea gets dismissed not because it’s bad, but because it lacks community support. And there is no telling how often this happens because these are the stories we never get to hear.

It isn’t news that scientists lock shoulders to defend their territory and use technical terms like fraternities use secret handshakes. It thus shouldn’t come as a surprise that an electronic archive which caters to the scientific community would develop software to emulate the community’s filters. And that, in a nutshell, is what the arXiv is doing.

In an interesting recent paper, Luis Reyes-Galindo had a look at the arXiv moderators and their reliance on automated filters:

In the attempt to develop an algorithm that would sort papers into arXiv categories automatically, thereby helping arXiv moderators decide when a submission needs to be reclassified, it turned out that papers which scientists would mark down as “crackpottery” showed up as not classifiable or stood out for language significantly different from that in the published literature. According to Paul Ginsparg, who developed the arXiv more than 20 years ago:

“The first thing I noticed was that every once in a while the classifier would spit something out as ‘I don’t know what category this is’ and you’d look at it and it would be what we’re calling this fringe stuff. That quite surprised me. How can this classifier that was tuned to figure out category be seemingly detecting quality?

“[Outliers] also show up in the stop word distribution, even if the stop words are just catching the style and not the content! They’re writing in a style which is deviating, in a way. […]

“What it’s saying is that people who go through a certain training and who read these articles and who write these articles learn to write in a very specific language. This language, this mode of writing and the frequency with which they use terms and in conjunctions and all of the rest is very characteristic to people who have a certain training. The people from outside that community are just not emulating that. They don’t come from the same training and so this thing shows up in ways you wouldn’t necessarily guess. They’re combining two willy-nilly subjects from different fields and so that gets spit out.”

It doesn’t surprise me much – you can see this happening in comment sections all over the place: The “insiders” can immediately tell who is an “outsider.” Often it doesn’t take more than a sentence or two, an odd expression, a term used in the wrong context, a phrase that nobody in the field would ever use. It follows that with smart software you can tell insiders from outsiders even more efficiently than humans can. According to Ginsparg:

“We’ve actually had submissions to arXiv that are not spotted by the moderators but are spotted by the automated programme […] All I was trying to do is build a simple text classifier and inadvertently I built what I call The Holy Grail of Crackpot Filtering.”
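Ginsparg’s stop-word observation can be illustrated with a toy version of such a filter: represent each document by the relative frequencies of a fixed stop-word list, and flag submissions whose profile sits far from the corpus average. Everything here (the stop-word list, the cosine threshold, the scoring) is an illustrative assumption; the actual arXiv classifier is more sophisticated and not public.

```python
from collections import Counter
import math

# Illustrative stop-word list; the real classifier's features are not public.
STOP_WORDS = ["the", "of", "a", "in", "is", "we", "that", "to"]

def stopword_profile(text):
    """Relative frequency of each stop word in a document."""
    counts = Counter(w for w in text.lower().split() if w in STOP_WORDS)
    total = sum(counts.values()) or 1
    return [counts[w] / total for w in STOP_WORDS]

def cosine(u, v):
    """Cosine similarity between two frequency vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def is_outlier(corpus, candidate, threshold=0.8):
    """Flag a submission whose stop-word style deviates from the corpus."""
    profiles = [stopword_profile(doc) for doc in corpus]
    centroid = [sum(col) / len(profiles) for col in zip(*profiles)]
    return cosine(stopword_profile(candidate), centroid) < threshold
```

Run against a handful of in-field abstracts, a submission that avoids the community’s style entirely scores near zero similarity and gets flagged, which is exactly the “stop words are just catching the style” effect Ginsparg describes.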

Trying to speak in the code of a group you haven’t been part of for at least some time is pretty much impossible, much like it’s impossible to fake the accent of a city you haven’t lived in for a while. Such in-group and out-group demarcation is the subject of much study in sociology, not specifically the sociology of science, but generally. Scientists are human and of course in-group and out-group behavior also shapes their profession, even though they like to deny it as if they were superhuman think-machines.

What is interesting about this paper is that, for the first time, it openly discusses how the process of filtering happens. It’s software that literally encodes the hidden rules that physicists use to sort out cranks. From what I can tell, the arXiv filters work reasonably well, otherwise there would be much complaint in the community. Indeed, the vast majority of researchers in the field are quite satisfied with what the arXiv is doing, meaning the arXiv filters match their own judgement.

There are exceptions of course. I have heard some stories of people who were working on new approaches that fell between the stools and were flagged as potential crackpottery. The cases that I know of could eventually be resolved, but that might tell you more about the people I know than about the way such issues typically end.

Personally, I have never had a problem with the arXiv moderation. I had a paper reclassified from gen-ph to gr-qc once by a well-meaning moderator, which is how I learned that gen-ph is the dump for borderline crackpottery. (How would I have known? I don’t read gen-ph. I was just assuming someone reads it.)

I don’t so much have an issue with what gets filtered on the arXiv; what bothers me much more is what does not get filtered and hence, implicitly, gets approval by the community. I am very sympathetic to the concerns of John The-End-Of-Science Horgan that scientists don’t do enough cleaning on their own doorstep. There is no “invisible hand” that corrects scientists if they go astray. We have to do this ourselves. In-group behavior can greatly misdirect science because, given sufficiently many people, even fruitless research can become self-supportive. No filter that is derived from the community’s own judgement will do anything about this.

It’s about time that scientists start paying attention to social behavior in their community. It can, and sometimes does, affect objective judgement. Ignoring or flagging what doesn’t fit into pre-existing categories is one such social problem that can stand in the way of progress.

In a 2013 paper published in Science, a group of researchers quantified the likelihood of combinations of topics in citation lists and studied the cross-correlation with the probability of the paper becoming a “hit” (meaning in the upper 5th percentile of citation scores). They found that having previously unlikely combinations in the quoted literature is positively correlated with the later impact of a paper. They also note that the fraction of papers with such ‘unconventional’ combinations has decreased from 3.54% in the 1980s to 2.67% in the 1990s, “indicating a persistent and prominent tendency for high conventionality.”

Conventional science isn’t bad science. But we also need unconventional science, and we should be careful to not assign the label “crackpottery” too quickly. If science is what scientists do, scientists should pay some attention to the science of what they do.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/mrHuGUrTbXU/the-holy-grail-of-crackpot-filtering.html


EU mandates open access for all publicly funded research by 2020

European Union officials announced today that, starting in 2020, any research that owes its existence in some way to public funding must be freely accessible and reusable. Recommendations were also made to encourage investment and ease the passage of startup founders between various European states. The decision arose from a meeting of EU ministers at the Competitiveness Council in Brussels; in… Read More


Original URL: http://feedproxy.google.com/~r/Techcrunch/~3/VDtG99Z-KVA/


Call for a Temporary Moratorium on “The DAO”

DRAFT (v0.2)

Dino Mark, Vlad Zamfir, Emin Gün Sirer

dino at smartwallet dot org, vlad@ethereum.org, egs@cs.cornell.edu
May 26, 2016


Over the past 3 weeks, a Distributed Autonomous Organization (DAO) known simply as ‘The DAO’, implemented as a smart contract on the Ethereum blockchain, has raised 11.5 million Ether, valued at $150 million at the time of writing. This is the largest crowd-funding event in history. The DAO now controls 16% of the total supply of Ether. It is arguably the most visible project in the Ethereum ecosystem.

In this paper, we analyze the rules of The DAO and identify problems with its mechanism design that incentivize investors to behave strategically; that is, at odds with truthfully voting to reveal their preferences. We then outline potential attacks against The DAO made possible by these behaviors.

In particular, we identify seven causes for concern that can lead DAO participants to engage in strategic behaviors. Some of these behaviors can cause honest DAO investors to have their investments hijacked or committed to proposals against their interest and intent.

We discuss these attacks, and provide concrete and simple suggestions that will mitigate the attacks, or in some cases make them completely impossible.

We would like to call for a moratorium on proposals to prevent losses to the DAO caused by unintended consequences of its mechanism design. A moratorium would give The DAO time to make security upgrades, and should be lifted only once the DAO is updated.

Introduction

Smart contracts enable the collection and disbursement of funds according to immutable computer programs. Built on a Turing-complete platform, such contracts have the capacity to create constrained and predictable financial constructs without a trusted entity. Distributed autonomous organizations are one such class of contracts that can carry out corporate functions in accordance with the will of their shareholders, while being constrained by programmatic bylaws. These programmatic bylaws, if written with sufficient care, can obviate the need for a management team in certain constrained domains.

Perhaps one of the most suitable such domains is crowd-funding. In traditional crowd-funding, a corporation such as Kickstarter connects investors with individuals or organizations who propose ventures. When the proposal gathers sufficient opt-in from the investors, it can proceed. If it succeeds, it returns financial rewards to its investors. The crowd-funding platform extracts some overhead for the matchmaking service it provides in the middle.

Another potential domain is investment funds or venture capital firms. In traditional venture capital firms, the managers collect funds from investors, evaluate proposals for various ventures, and determine a subset of ventures to fund. Successful ventures bring returns to the fund, from which the fund managers extract some, often substantial, overhead for the decision-making service they provide in the middle.

Over the last month, we witnessed the emergence of a distributed autonomous organization, known as The DAO, that is a cross between these two domains and seeks to completely eliminate the middlemen. The DAO operates somewhat like a venture capital firm, in that it collects a pool of funds to invest in worthy proposals, but it differs in that all the individual investors are able to vote, in proportion to the size of their investment, on each investment proposal put forward to the fund. The aspirational goals for The DAO are to utilize the wisdom of the crowds for this decision-making process, and to eliminate the risks posed by middlemen using a programmatic approach to corporate management.

The DAO is unique in many ways. It was funded through a crowd-funding effort that quickly raised  11.6M ether (worth approximately $150M at the time of writing), making it the largest crowd-funded project in history. At this funding level, The DAO commands approximately 15% of the total ether in existence. Because The DAO is so large, and because it is one of the first smart contracts of its kind, it has garnered much attention. Consequently, public opinion about decentralized autonomous organizations rides on its success.

Yet smart contracts pose unique technical challenges. Recall that computer programs can and most often do contain bugs. When a desktop application has a bug, it may crash; when a smart contract has a bug, it may render funds irrecoverable. Moreover, a smart contract cannot be easily updated, unlike desktop apps and other traditional software. Thus, careful thought and consideration must go into constructing a smart contract that carries out the intended operations of a complex decision-making investment fund, especially in the presence of potentially malicious participants.

In this paper, we focus specifically on The DAO and examine the operational details of The DAO’s smart contract with an emphasis on its mechanism design. We then identify seven causes for concern, where the mechanisms encoded into the current implementation of The DAO can give rise to unwanted strategic behaviors for the participants that are at odds with the primary function of the organization. In the case of The DAO, we show that attacks with severe consequences are possible in the current implementation. We identify an attack that can indefinitely tie up investor funds and lead to ransom demands; an attack that enables a large cartel to usurp funds; and another attack that can enable an attacker to depress the value of the native fund tokens, among others.

At a fundamental level, these attacks all stem from unintended consequences of the mechanisms built into The DAO. Some are facilitated by an inherent bias towards voting to fund proposals; the current system discourages people from voting when they perceive a proposal to have negative expected value. A second fundamental problem stems from the structure of the withdrawal process: investors wanting to  exit from the fund by “splitting” are vulnerable to attack. Combined, these problems can give rise to complex strategic behaviors, all resulting in a corruption of the intended, honest debate and voting process to select the most deserving proposals.

In the rest of this paper, we describe the operation of The DAO, the voting bias, potential attacks, and then discuss some potential mitigations and solutions. The central take-away from our analysis and discussion is that it would be prudent to call for a temporary moratorium on whitelisting proposals so that reasonable measures can be taken to improve the mechanisms of The DAO.  Therefore, we call on the curators to put a moratorium in effect.

There are two alternatives to a curator-imposed moratorium. One is to ask The DAO token holders to place a self-imposed moratorium by voting down every proposal with an overwhelming majority. Due to the flaws involving negative votes outlined in this paper, it would be a mistake to depend on this mechanism to protect against attacks targeting the same mechanism. The second alternative is to ask the DAO token holders to opt in to the security measures by holding a vote for a new curator set who will implement a moratorium. We believe that The DAO’s default behavior should favor security, and since no one knows the percentage of non-voting, non-active token holders, the threshold required for curator changes may be too high for the voting process to meet. For these reasons, we believe that the safest and most immediate course of action would be for the curators to impose a moratorium, and to allow the DAO token holders to opt out by means of a curator change vote.


The Structure of The DAO

The primary function of The DAO is to serve as a crowd-funding investment vehicle. To this end, The DAO API is structured around an initial creation phase that collects funds and an operational phase which consists of collecting proposals, voting on them, optionally funding them, and performing administrative functions such as paying out rewards and withdrawing funds. In the following discussion, we cover the operation of the DAO in each of these phases, and discuss the main abstractions behind the DAO to provide a context for game-theoretic analysis of the operation of this smart contract.

The DAO was created on April 30, 2016 at 10:00 UTC, based on a specific instantiation[1] of The DAO contract[2]. This paper describes the operation of this smart contract.

Creation and Funding Phase

The DAO creation phase started with the initial creation of the smart contract and lasted for 27 days. During this period, The DAO issued tokens, called The DAO Tokens (TDT), in exchange for Ether sent to a designated funding address[3].

The buy-in price of TDT varies during the creation phase: it is 1.00 ether per 100 TDT for the first 14 days, rises by 0.05 ether per 100 TDT each day over the following 10 days, and ends with a final 3-day period at 1.50 ether per 100 TDT.

Late investors who paid more than 1.00 ether per 100 TDT have their surplus ether above 1.00 placed in a special account called extraBalance. Individual token holders cannot withdraw their funds from the extraBalance account; this money can only be moved after an amount equal to the extraBalance has been spent on proposals. In effect, the extraBalance represents additional money made available to the fund for spending on proposals, money earned by the DAO through additional fees paid by late joiners.  For example, if a token holder paid 1.05 Ether for 100 TDT, and if no Ether had been committed to any proposals, the token holder could still only withdraw 1.00 Ether. The extra 0.05 Ether will stay locked in until The DAO has funded proposals that, in aggregate, exceed the amount of the extraBalance. Only then is the extraBalance folded into the main balance of the DAO, where it is distributed proportionally to TDT holders.  At the time of writing the extraBalance is approximately 275,000 Ether.
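The price schedule and the extraBalance rule above can be captured in a short sketch. The exact day boundaries are our assumption (the source gives only the phase lengths); the numbers follow the stated schedule, with the final price of 1.50 ether consistent with ten daily increments of 0.05.

```python
def tdt_price(day):
    """Ether per 100 TDT on a given day (0-26) of the 27-day creation phase."""
    if day < 14:
        return 1.00                      # flat opening price for 14 days
    if day < 24:
        return 1.00 + 0.05 * (day - 13)  # rises 0.05 per day for 10 days
    return 1.50                          # final 3-day period

def extra_balance_contribution(price, hundreds_of_tdt):
    """Surplus above 1.00 ether per 100 TDT is locked in the extraBalance account."""
    return (price - 1.00) * hundreds_of_tdt
```

A buyer on day 14 pays 1.05 ether per 100 TDT; the 0.05 surplus goes to extraBalance and, per the rules above, stays locked until The DAO has spent an equal aggregate amount on funded proposals.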

The DAO follows a pattern[4] where the main contract acts as a factory for sub-contracts that split off from the main DAO. In what follows, we will refer to the initial contract simply as The DAO, its children as child-DAOs, and collectively to any contract that implements the ‘Standard DAO Framework’ as a DAO[5]. The process of generating child-DAOs can continue recursively, until a depth limit is reached.

The Curator

Every instance of The DAO has a designated curator that is responsible for adding addresses to and removing addresses from the proposal payment address whitelist. The ‘Curator’ account for the current instance of The DAO is a 5 out of 11 multi-signature address (note that one of the curators has announced that they will not participate, although his key technically still has the right to sign in the multisig).

Only addresses on the whitelist can submit proposals to, and be funded by, The DAO. Proposals that want funding from the DAO must ask the curator to add their address to the whitelist. Thus, the curator ensures that some human supervision is involved in the selection of proposals to be funded for the DAO. In an effort to shield curators from legal liability, their responsibilities are limited strictly to deterring “malicious proposals.” The main motivation for the curator abstraction[6] is to defend against a majority takeover attack, in which a large (53%) voting bloc votes to commit 100% of The DAO’s funds to a proposal that benefits solely that bloc. The curator concept was introduced mainly to weed out such proposals, either by refusing to whitelist their payment addresses or by un-whitelisting their addresses; curators are expected not to take profitability or business sense into account while making whitelisting decisions. The task of exercising business judgment over the proposals is left up to the wisdom of the crowds through the proposal and voting process.

Proposals and Voting

Once a proposal has its address whitelisted by the curator, token holders can then vote on whether or not they want to fund that proposal. All TDT holders are allowed to vote either YES or NO, and their votes are weighted by the amount of their TDT holdings. The voting commences for a minimum voting period of 14 days, at the end of which the weighted votes are tallied. A simple majority of YES votes is required for a proposal to be successfully funded, and a minimum quorum of voters is required in order for the voting phase to be closed. The minimum quorum varies between 20% and 53% depending on the size of the proposal. Very large proposals require a 53% quorum, while small ones need only 20%. There is no limit to how many proposals can be going through the voting process simultaneously. In order to prevent proposal spam, there is a non-refundable listing fee for each proposal.
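The tally rule just described, a size-dependent quorum followed by a weighted simple majority, can be sketched as follows. The contract computes the quorum from the proposal size; here it is simplified to a parameter.

```python
def proposal_passes(yes_weight, no_weight, total_tdt, quorum_fraction):
    """End-of-period tally: quorum check first, then weighted simple majority.

    quorum_fraction ranges from 0.20 (small proposals) to 0.53 (very large ones).
    """
    turnout = (yes_weight + no_weight) / total_tdt
    if turnout < quorum_fraction:
        return False  # quorum not met: the proposal cannot be funded
    return yes_weight > no_weight
```

With 100 TDT outstanding, 30 weighted YES against 10 NO passes a small proposal (40% turnout beats the 20% quorum) but fails a very large one, where 53% turnout would be required.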


Voting is an activity that limits future actions available to a TDT holder. Critically, if a token holder votes either YES or NO on a proposal, they cannot change their vote, nor can they withdraw from the DAO through a split until the voting period has ended, nor can they transfer their TDT. Voting on any proposal places a TDT holder on a list of ‘blocked’ addresses that cannot perform splits or transfers. For a TDT holder who votes on multiple proposals, the block remains in effect until the latest of the voting deadlines. If the proposal on which a TDT holder voted succeeds, the holder can only withdraw their share of the Ether balance that is left after the winning proposal has been funded.

In contrast, token holders that do not vote can withdraw from the DAO by initiating a split. Splits take 7 days to fork off the funds; consequently, a split initiated by a user 7 days ahead of a proposal’s voting deadline can operate without any risk that her funds will be spent on that proposal.

Splitting and Withdrawals

The DAO does not permit funds to be withdrawn as Ether directly. Instead, token holders can take their TDT out through a process known as a ‘split’, which takes 34 days in total to complete and involves creating a new DAO.

The split process begins by having a token holder initiate a special proposal with a new curator address and a funding amount of 0 ether. The voting period on a split proposal lasts a minimum of 7 days. The outcome of the vote on a split proposal is inconsequential, as the proposal cannot be executed. Instead, the presence of a split proposal whose voting period has ended confers the right to split from The DAO to the party who initiated the proposal, as well as those who voted YES on it. This takes place when these parties call a function called ‘splitDAO’ to move their funds from The DAO into a newly formed child-DAO contract. This provides a way to withdraw one’s funds from The DAO; namely, individuals who wish to withdraw from The DAO initiate a new curator proposal, where they themselves are the new curator, wait for the voting period to expire, and then transfer their holdings to a newly created DAO.

When a token holder splits from The DAO through the above mechanism, the usual 27-day creation period for a new DAO still applies. The whole process thus takes 34 days in total: initiating a split proposal (day 0), gathering votes (for 7 days), splitting from The DAO, and then waiting for the new DAO to be formed (for 27 days). The actual transfer takes place on the 7th day, and the funds are tied down for the following 27 days.
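The timeline works out as follows, a sketch assuming the minimum 7-day debate period on the split proposal:

```python
def split_timeline(start_day=0):
    """Days at which the stages of a split complete, per the process above."""
    vote_ends = start_day + 7        # minimum voting period on the split proposal
    transfer_day = vote_ends         # splitDAO can be called once voting ends
    child_ready = transfer_day + 27  # child-DAO's own 27-day creation period
    return transfer_day, child_ready
```

The funds move on day 7 but remain tied up in the forming child-DAO until day 34.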

When a token holder has successfully split into their own new DAO, they can create a proposal to pay themselves out the full balance of all the Ether left in the new DAO.  

Transferability of TDT

TDTs that are not blocked due to voting are fully transferable to any valid Ethereum address, and therefore can be sold immediately on exchanges or over the counter. Thus, if a token holder does not want to wait 34 days to split from The DAO and withdraw their ether, they can simply sell their TDT directly on exchanges for ether, or perhaps even for other cryptocurrencies such as Bitcoin.

Attacks and Concerns

Analyzing an investment vehicle such as The DAO is difficult. This is partly because game theoretic treatments typically require a full characterization of the actors, the potential moves available to them within the game, and the various payoffs associated as a result of each move.  In an interconnected financial system involving convertible assets with a large number of complex actors, there are many potential payoffs, not all of which can be expressed within the narrow confines of a game. That is, not all actors try to maximize their returns in ether, and instead may have exogenous payoffs in dollar terms that are difficult to capture. For instance, an actor who has purchased put options on ether and damages the system’s reputation via an attack on The DAO may well lose tokens in the game but come out ahead financially, and modeling their profit requires quantifying social factors and market effects. Many previous attempts to apply game theory to distributed systems or complex agent systems have suffered from simple-minded modeling that has, at times, led to incorrect conclusions. Consequently, we do not attempt to provide a full game theoretic treatment of The DAO in this paper. Instead, we discuss the guiding principles for good mechanism design that ought to apply to crowd-funding investment vehicles such as The DAO, and identify several weaknesses in the current structure of The DAO that violate these principles and open the shareholders to attack.

Guiding Principles

The central point of the DAO is to enable token holders to vote on proposals. A rational actor will cast her vote in a manner that is informed by the net present value she perceives for each proposal. Every proposal has a clear present cost, specified in the proposal itself. It returns value to the shareholders either through an expected profit denominated in ether and paid back to The DAO, or through the implicit appreciation of the TDTs. As with every investment, proposals to the DAO have a probability of success that depends on the nature of the venture and its business plan. For instance, a proposal may ask for 1000 Ether to make 1000 T-Shirts, and may estimate that they will sell 1000 T-Shirts at a profit of 5 Ether each over a time frame, and thus estimate they will return 5000 Ether to The DAO. It is expected that vigorous debate and discussion during the voting phase will enable each voter to independently estimate the chances of success, and thus, the expected value (EV).

A DAO is considered to have good mechanism design if actors are incentivized to vote truthfully in accordance with their estimates of the expected value of each proposal. For the wisdom of the crowd to manifest itself, we would like a TDT holder to vote YES on a proposal that they believe has positive expected value (+EV), and NO on a proposal they believe has a negative expected value (-EV); alternatively, they may abstain if their vote is not going to change the outcome. We now describe why the current implementation of The DAO fails to uphold this principle.
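Under this principle, a truthful vote reduces to the sign of the expected value. Using the hypothetical T-shirt proposal above (1000 ether requested, an estimated 5000 ether returned on success), with a success probability that each voter estimates for themselves:

```python
def expected_value(cost, payoff_if_success, p_success):
    """Expected net return to The DAO from funding a proposal, in ether."""
    return p_success * payoff_if_success - cost

def truthful_vote(cost, payoff_if_success, p_success):
    """Vote YES on +EV proposals and NO on -EV ones, per the stated principle."""
    return "YES" if expected_value(cost, payoff_if_success, p_success) > 0 else "NO"
```

A voter who gives the venture even a 50% chance sees an EV of 0.5 x 5000 - 1000 = +1500 ether and votes YES; one who estimates a 10% chance sees -500 ether and, absent the biases discussed next, would vote NO.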

The Affirmative Bias, and the Disincentive to Vote No

The current DAO has a strong positive bias to vote YES on proposals and to suppress NO votes as a side-effect of the way in which it restricts users’ range of options following the casting of a vote. Specifically, the current DAO restricts the ability of a token holder to split from the DAO or to sell their TDT once they have voted on a proposal until the outcome of the vote is determined. Thus, a voter who believes a proposal has a negative expected value is in a quandary: they can split from The DAO immediately without taking any risk, or else they can vote NO and hope that the proposal fails to be funded. A NO vote is therefore inherently risky for an investor who perceives the proposal to be -EV, in a way that voting YES is not for a +EV  voter. As a consequence, The DAO voting is likely to exhibit a bias: YES votes will arrive throughout the voting period, while a strategic token holder will want to cast their NO vote only when they have some assurance of success. Because strategic NO voters will cast their votes only after gaining information on others’ negative perception of the same proposal, the voting process itself will not yield uniform information about the token holders’ preferences over time. Preferences of the positive voters will be visible early on, but the negative sentiment will be suppressed during the voting process — a problematic outcome for a crowd-funding organization based on measuring the sentiment of the crowd through votes.


The Stalking Attack

Splitting from The DAO (the only existing method of extracting one’s Ether holdings from the main DAO contract) is currently open to a “stalking attack.” Recall that a user who splits from The DAO initiates a new DAO contract in which they are the sole investor and curator. The intent is that the user can extract their funds by whitelisting a proposal to pay themselves the entire contents of this sub-contract, voting on it with 100% support, and then extracting the funds by executing the approved proposal. However, recall that the split and the resulting sub-contract creation take place on a public blockchain. Consequently, an attacker can pursue a targeted individual by buying tokens during the creation phase. Since the splitting user is the curator of the nascent sub-contract, a stalker cannot actually steal funds; the victim can refuse to whitelist proposals by the stalker (though note that, due to the potential for confusion and human error, the expected outcome from such attacks is still positive). If the stalker commits funds that correspond to 53% or more of the sub-contract, they can effectively block the victim from withdrawing their funds out of the contract back into ether. Subsequent attempts by the victim to split from the sub-contract (to create a sub-sub-contract) can be followed recursively, effectively trapping the victim’s funds and preventing conversion back to ether. The attacker places no funds at risk, because they can split from the child-DAO at any time before the depth limit is reached. This creates the possibility for ransom and blackmail. While some remedies[7] have been suggested for preventing and counterattacking during a stalking attack, they require unusual technical sophistication and diligence on behalf of the token holders.

The Ambush Attack

In an ambush, a large investor takes advantage of the bias for DAO users to avoid voting NO by adding a large percent of YES votes at the last minute to fund a self-serving proposal. Recall that under the current DAO structure, a rational actor who believes a proposal is -EV is likely to refrain from voting, since doing so would restrict his ability to split his funds in the case that the proposal succeeds. This is especially true when the investor observes that sufficiently many NO votes already exist to reject the proposal. Consequently, even proposals that provide absurdly low returns to The DAO may garner NO votes that are barely sufficient to defeat them.

This kind of behavior opens the door to potential attack: a sufficiently large voting bloc can take advantage of this reticence by voting YES at the last possible moment to fund the proposal. Such attacks are very difficult to detect and defend against because they leave little to no time for The DAO token holders to withdraw their funds. Among the current DAO investors, there is already a whale who invested 888,888 Ether[8]. This investor currently commands 7.7% of all outstanding votes in The DAO. For a proposal that requires only a 20% quorum, this investor already holds 77% of the required YES votes and needs to conspire with only 3.3% of the remaining token holders, paying the conspirators out of the stolen funds.
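The arithmetic behind this scenario can be checked with a short sketch. The simple-majority-of-quorum model below is our reading of the rules; the percentages come from the text above:

```python
# Back-of-the-envelope check of the whale scenario (percentages of all
# outstanding tokens; the simple-majority quorum model is our assumption).

QUORUM = 20.0          # minimum participation required, in percent
WHALE = 7.7            # the whale's share of all outstanding votes
CONSPIRATORS = 3.3     # additional YES votes the whale recruits

yes_needed = QUORUM / 2                    # majority when the quorum is barely met
print(round(WHALE / yes_needed, 2))        # 0.77 -> 77% of the required YES votes

# With 3.3% of conspirators, YES = 11%. Even if the rest of the quorum is
# filled entirely with NO votes (20% - 11% = 9%), YES still wins.
yes = WHALE + CONSPIRATORS
no = QUORUM - yes
print(yes > no)                            # True
```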

The Token-Value Attack

In a token-value attack, a large investor stands to benefit by driving the value of TDT lower, either to profit from such price motion directly (e.g. via shorts or put options), or to purchase TDT back in the open market in order to acquire a larger share of The DAO. A token-value attack is most successful if the attacker can (i) incentivize a large portion of token holders not to split, but instead to sell their TDT directly on exchanges, and (ii) incentivize a large portion of the public not to purchase TDT on exchanges. An attacker can achieve (i) by implementing the stalking attack on anyone who splits and then publicizing the attack on social media. Worse, since the existence of the stalking attack is now well known, the attacker need not attack any real entity, but can instead create fictitious entities who post stories of being stalked in order to sow panic among The DAO investors.


An attacker can achieve (ii) by creating a self-serving proposal widely understood to be -EV, waiting until the 6th day before voting ends, and then voting YES on it with a large bloc of votes. This action discourages rational market actors from buying TDT because (a) if the attacker’s proposal succeeds, they will lose their money, and (b) they do not have enough time to buy TDT on an exchange and convert them back into Ether before the attacker’s proposal ends, eliminating any chance of risk-free arbitrage profits. The combined result of (i) and (ii) is net selling pressure on TDT, leading to lower prices. The attacker can then buy up cheap TDT on exchanges for a risk-free profit, because they are the only TDT buyer who bears no risk if the attacking proposal actually passes.

The extraBalance Attack

In the extraBalance attack, an attacker tries to scare token holders into splitting from The DAO so that the book value of TDT increases. The book value increases because token holders who split cannot recover any of the extraBalance; as more holders split, the extraBalance becomes a larger percentage of the total balance, raising the book value of the remaining TDT. Currently the extraBalance is 275,000 Ether, which implies a TDT book value of 1.02. If the attacker can scare away half the token holders, the book value rises to 1.04; if the attacker can scare away ~95% of the token holders, the book value of the remaining TDT will be roughly 2.00. In this attack, the attacking whale would do the opposite of the token-value attack: create a self-serving proposal with a negative return, immediately vote YES on it with a large voting bloc of TDT to scare the token holders, and leave the full 14 days until the end of the voting period so that holders have more than enough time to split safely. In this scenario, splitting is risk-free (assuming it is not coupled with a stalking attack), whereas voting NO could result in losses if the attackers end up with enough YES votes.
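The mechanism can be sketched with a simplified model, in which splitters withdraw their proportional share of the main balance but forfeit the extraBalance; the main-balance figure below is inferred from the 1.02 book value quoted above and is our assumption:

```python
# Book value of a remaining token as holders split away (simplified model:
# splitters withdraw their proportional share of the main balance but forfeit
# the extraBalance, which stays constant).

EXTRA = 275_000.0        # current extraBalance, in Ether
MAIN = EXTRA / 0.02      # main balance implied by the 1.02 book value (assumed)

def book_value(fraction_remaining):
    remaining = MAIN * fraction_remaining
    return (remaining + EXTRA) / remaining

print(round(book_value(1.0), 2))   # 1.02 -> today's book value
print(round(book_value(0.5), 2))   # 1.04 -> after half the holders split
```

Because the extraBalance stays fixed while the main balance shrinks, the book value of every remaining token rises as more holders split.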

The Split Majority Takeover Attack

Even though the DAO white paper specifically identifies the majority takeover attack and introduces the concept of curators to deter it, it is not clear that the deterrence mechanism is sufficient. Recall that in the majority takeover attack outlined in the DAO whitepaper, a large voting bloc, of size 53% or more, votes to award 100% of the funds to a proposal that benefits solely that bloc. Curators are expected to detect such instances by tracking identities of the beneficiaries. Yet it is not clear how a curator can detect such an attack if the voting bloc, made up of a cartel of multiple entities, proposes not just a single proposal for 100% of the funds, but multiple different proposals. The constituents of the voting bloc can achieve their goal of emptying out the fund piecemeal. Fundamentally, this attack is indistinguishable “on the wire” from a number of investment opportunities that seem appealing to a majority. The key distinguishing factor here is the conflict of interest: the direct beneficiaries of the proposals are also token holders of The DAO.

The Concurrent Tie-Down Attack

The structure of The DAO can create undesirable dynamics in the presence of concurrent proposals. In particular, recall that a TDT holder who votes YES on a proposal is blocked from splitting or transferring until the end of the voting period on that proposal. This provides an attack amplification vector, where an attacker collects votes on a proposal with a long voting period, in effect trapping the voters’ shares in The DAO. She can then issue an attacking proposal with a much shorter voting period. The attack, if successful, is guaranteed to impact the funds from the voters who were trapped. Trapped voters are forced to take active measures to defend their investments.

Independence Assumption

A critical implicit assumption in the discussion so far was that the proposals are independent. That is, their chances of success, and their returns, are not interlinked or dependent on each other. It is quite possible for simultaneous proposals to The DAO to be synergistic, or even antagonistic; for instance, a cluster of competing projects in the same space may affect each others’ chances of success and thus, collective returns. Similarly, cooperating projects, if funded together, might create sufficient excitement to yield excess returns; evidence from social science indicates that social processes are driven by non-linear systems.

Yet the nature of voting on proposals in The DAO provides no way for investors to express complex, dependent preferences. For instance, an investor cannot indicate a conditional preference (e.g. “vote YES on this proposal only if this other proposal is not funded,” or “only if it is also funded”). In general, constructing market mechanisms to elicit such preferences, and appropriate programmatic APIs for expressing them, requires a more detailed and nuanced contract. This does not constitute an attack vector, but it does indicate that we might see strategic voting behavior even in the absence of any ill will by participants.

Potential Mitigations and Solutions

There exist partial and complete remedies to some of the attacks outlined above. Discussion of these solutions is ongoing. They require either technical changes to The DAO, a social agreement among The DAO’s curators, or both.

Supporting Withdrawals

A function that any token holder can call to instantly and directly withdraw their share of The DAO’s Ether to a regular address (while still allowing them to claim future rewards from proposals on which they have already spent Ether) would make the stalking attack impossible. It would also significantly mitigate the token-value attack.

Many token holders currently seem to believe that they can withdraw from The DAO at any time. Guaranteeing that this can happen, without having to resort to complex defense mechanisms, will ensure that the token holders’ expectations are met.

Post-voting Grace Periods

Adding a grace period after the end of the voting periods, but before the proposals can be funded/executed, would give token holders time to move TDT or to split from The DAO after seeing voting results but before their money is spent. Voting periods and grace periods would not be allowed to happen concurrently, because voting tokens must remain locked until all of the voting periods for the proposals those tokens voted on have ended.

The addition of a grace period definitively solves the voting bias by allowing token holders to vote “no” without forfeiting their right to sell or split in response to the outcome. It also gives the curators time to defend the DAO against ambush attacks by un-whitelisting payment addresses after seeing the voting results. It significantly mitigates the majority takeover attack and the ambush attack, by letting token holders withdraw after the vote passes.

Shorter voting periods

Shortening the voting period on a proposal so that the voting period only occurs in the last 1-2 days of a 14-day or longer debating period would shorten the time for which tokens are locked. This mitigates the Token-Value attack, and also reduces the propensity for voters to wait until the last minute to vote so that their TDT are not locked up.

Vote “no” and withdraw on an affirmative decision

Having a special vote whose semantics are ‘NO_AND_WITHDRAW_IF_VOTE_SUCCEEDS’ allows token holders to signal that they will leave The DAO if a proposal passes. “NAW” votes publicly indicate that the voter believes the proposal would damage the value of TDT and that they no longer want to be part of The DAO if it succeeds.

Waiting for Quiet

A potential defense to deter ambush attacks is to extend the voting deadline in response to last minute changes in the direction of the vote. While last minute votes are to be expected in a fair voting system, mechanism biases that incentivize token holders to sit on the sidelines can be countered by extending the voting period and giving people time to observe the direction of the vote and to participate.
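A waiting-for-quiet rule might look like the following sketch; the window and extension lengths are illustrative parameters, not values proposed for The DAO:

```python
# Sketch of a "waiting for quiet" rule: if the leading side flips close to
# the deadline, push the deadline back. The 24-hour window and extension are
# illustrative parameters, not values proposed for The DAO.

def maybe_extend(deadline, now, prev_leader, curr_leader,
                 quiet_window=24.0, extension=24.0):
    """All times in hours. Extend the deadline when the vote direction
    changes inside the final quiet_window before the deadline."""
    if curr_leader != prev_leader and deadline - now <= quiet_window:
        return deadline + extension
    return deadline

# A last-minute swing from NO to YES extends the vote by a day, giving
# sidelined holders time to observe the change and participate.
print(maybe_extend(100.0, 90.0, "NO", "YES"))   # 124.0
print(maybe_extend(100.0, 40.0, "NO", "YES"))   # 100.0 (flip happened early)
```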

Commit/Reveal Voting

A generally applicable technique is to have the TDT holders first commit to their (blinded) votes, and then to remove the blinding in a revelation phase at the end of the voting period. This has the downside that the voting clients now need to be stateful in order to remember their blinding factor. Further, they can share their blinding factors with others in order to reveal, and even prove, the disposition of their vote. Most importantly, blinding the votes diminishes the value of The DAO’s voting process: by design, the votes can no longer act as a signal to other TDT holders about the holder’s financial preferences. The preference discovery process will thus end up shifting out of the smart contract into exogenous mechanisms, such as message boards and the like.
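The commit/reveal scheme can be sketched as follows. This is a simplified model using hash commitments; a real implementation would live in the contract itself, and the salt is exactly the per-voter state mentioned above:

```python
# Minimal commit/reveal voting sketch (hash commitments with a random
# blinding factor). A real implementation would enforce commit and reveal
# phases inside the contract; this only illustrates the cryptographic core.
import hashlib
import os

def commit(vote: str, salt: bytes) -> str:
    """Commitment = H(vote || salt). The voter must store the salt locally;
    this is the statefulness the text mentions. Sharing the salt lets the
    voter reveal (and prove) their vote to anyone."""
    return hashlib.sha256(vote.encode() + salt).hexdigest()

def reveal_ok(commitment: str, vote: str, salt: bytes) -> bool:
    """Check a claimed (vote, salt) pair against a prior commitment."""
    return commit(vote, salt) == commitment

salt = os.urandom(32)
c = commit("NO", salt)
print(reveal_ok(c, "NO", salt))    # True
print(reveal_ok(c, "YES", salt))   # False -> a mismatched reveal is rejected
```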

Vote Delegation

TDT holders who do not participate in the voting process reduce the security of the system. One can improve participation, and thus improve security, by enabling TDT holders to delegate their vote to proxies. This delegation feature necessitates significant modifications and sufficient complexity to render it unsuitable as a short-term fix.

Curator-enforced independence of proposals

The independence assumption may be maintained voluntarily by the curators by ensuring that the proposals that are eligible for voting at any given time are indeed independent from each other.

Upgrading the DAO

The DAO (1.0) has a built-in upgrade mechanism called “newContract” that moves all the funds into a new DAO (1.1). While this mechanism is available, it might be prudent to save it for dire emergencies. A softer upgrade path might be to place a moratorium on new proposals, to create new DAOs, and then to provide proposals to shift funds from the 1.0 version to the 1.1 version (or versions).

Summary and Suggestions

This paper outlined the operation of The DAO contract, which currently holds a substantial portion of the Ether supply and has generated much excitement about decentralized autonomous organizations and smart contracts. It also identified seven causes for concern, which might cause The DAO voters to deviate from a truthful strategy. Some of these behaviors have the potential to lead to financial manipulation and even loss. It finally identified some potential mitigations and solutions to some of these biases and vulnerabilities.

Given the concerns outlined above, we believe it would be wise for the curators to not whitelist any proposals until the DAO is upgraded to mitigate the potential attacks described in this paper.


NOTE: THIS DOCUMENT DOES NOT CONSTITUTE FINANCIAL ADVICE. WE ARE NOT, AND WILL NOT BE HELD, RESPONSIBLE FOR YOUR FINANCIAL DECISIONS.

Acknowledgments

Many thanks to Rick Dudley, Christoph Jentzsch, Andrew Miller, Gustav Simonsson, and Alex Van de Sande for their comments and feedback on this draft.

[1] https://etherscan.io/tx/0xe9ebfecc2fa10100db51a4408d18193b3ac504584b51a4e55bdef1318f0a30f9

[3] The ether address for The DAO is 0xbb9bc244d798123fde783fcc1c72d3bb8c189413

[4] Vitalik Buterin, Bootstrapping A Decentralized Autonomous Corporation: Part I. https://bitcoinmagazine.com/articles/bootstrapping-a-decentralized-autonomous-corporation-part-i-1379644274

[5] Though the term DAO is more broad and can refer to any decentralized organization governed by a smart contract, in this paper, it is used solely to refer to Slock.it’s specific implementation https://github.com/slockit/DAO.

[8] From address 0x04c973aff06f64b880524f16ae8c821928233ee5



Scaling Mercurial at Facebook

With thousands of commits a week across hundreds of thousands of files, Facebook’s main source repository is enormous: many times larger than even the Linux kernel, which checked in at 17 million lines of code and 44,000 files in 2013. Given our size and complexity, and Facebook’s practice of shipping code twice a day, improving our source control is one way we help our engineers move fast.

Choosing a source control system

Two years ago, as we saw our repository continue to grow at a staggering rate, we sat down and extrapolated our growth forward a few years. Based on those projections, it appeared likely that our then-current technology, a Subversion server with a Git mirror, would become a productivity bottleneck very soon. We looked at the available options and found none that were both fast and easy to use at scale.

Our code base has grown organically and its internal dependencies are very complex. We could have spent a lot of time making it more modular in a way that would be friendly to a source control tool, but there are a number of benefits to using a single repository. Even at our current scale, we often make large changes throughout our code base, and having a single repository is useful for continuous modernization. Splitting it up would make large, atomic refactorings more difficult. On top of that, the idea that the scaling constraints of our source control system should dictate our code structure just doesn’t sit well with us.

We realized that we’d have to solve this ourselves. But instead of building a new system from scratch, we decided to take an existing one and make it scale. Our engineers were comfortable with Git and we preferred to stay with a familiar tool, so we took a long, hard look at improving it to work at scale. After much deliberation, we concluded that Git’s internals would be difficult to work with for an ambitious scaling project.

Instead, we chose to improve Mercurial. Mercurial is a distributed source control system similar to Git, with many equivalent features. Importantly, it’s written mostly in clean, modular Python (with some native code for hot paths), making it deeply extensible. Just as importantly, the Mercurial developer community is actively helping us address our scaling problems by reviewing our patches and keeping our scale in mind when designing new features.

When we first started working on Mercurial, we found that it was slower than Git in several notable areas. To narrow this performance gap, we’ve contributed over 500 patches to Mercurial over the last year and a half. These range from new graph algorithms to rewrites of tight loops in native code. These helped, but we also wanted to make more fundamental changes to address the problem of scale.

Speeding up file status operations

For a repository as large as ours, a major bottleneck is simply finding out what files have changed. Git examines every file and naturally becomes slower and slower as the number of files increases, while Perforce “cheats” by forcing users to tell it which files they are going to edit. The Git approach doesn’t scale, and the Perforce approach isn’t friendly.

We solved this by monitoring the file system for changes. This has been tried before, even for Mercurial, but making it work reliably is surprisingly challenging. We decided to query our build system’s file monitor, Watchman, to see which files have changed. Mercurial’s design made integrating with Watchman straightforward, but we expected Watchman to have bugs, so we developed a strategy to address them safely.

Through heavy stress testing and internal dogfooding, we identified and fixed many of the issues and race conditions that are common in file system monitoring. In particular, we ran a beta test on all our engineers’ machines, comparing Watchman’s answers for real user queries with the actual file system results and logging any differences. After a couple months of monitoring and fixing discrepancies in usage, we got the rate low enough that we were comfortable enabling Watchman by default for our engineers.
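The beta-test strategy described above can be sketched as follows; the function names here are illustrative, not Mercurial's or Watchman's actual API:

```python
# Sketch of the beta-test strategy: answer status queries from the fast file
# monitor, cross-check against a full filesystem walk, and log any
# discrepancy. Function names are illustrative, not Mercurial's API.
import logging

def changed_files_monitor(monitor_state):
    """Stand-in for a Watchman query returning the set of dirty files."""
    return set(monitor_state)

def changed_files_full_scan(fs_state, last_snapshot):
    """Ground truth: compare every file's mtime against the last snapshot."""
    return {f for f, mtime in fs_state.items()
            if last_snapshot.get(f) != mtime}

def verified_status(monitor_state, fs_state, last_snapshot):
    fast = changed_files_monitor(monitor_state)
    slow = changed_files_full_scan(fs_state, last_snapshot)
    if fast != slow:
        # Symmetric difference = files the monitor got wrong.
        logging.warning("monitor mismatch: %s", fast ^ slow)
    return slow   # during the beta, trust the full scan

snapshot = {"a.c": 1, "b.c": 1}
fs = {"a.c": 2, "b.c": 1}          # a.c was edited since the snapshot
print(verified_status({"a.c"}, fs, snapshot))   # {'a.c'}
```

Once the logged mismatch rate is low enough, the full scan can be dropped and the monitor's answer used directly, which is where the speedup comes from.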

For our repository, enabling Watchman integration has made Mercurial’s status command more than 5x faster than Git’s status command. Other commands that look for changed files, like diff, update, and commit, also became faster.

Working with large histories

The rate of commits and the sheer size of our history also pose challenges. We have thousands of commits being made every day, and as the repository gets larger, it becomes increasingly painful to clone and pull all of it. Centralized source control systems like Subversion avoid this by only checking out a single commit, leaving all of the history on the server. This saves space on the client but leaves you unable to work if the server goes down. More recent distributed source control systems, like Git and Mercurial, copy all of the history to the client which takes more time and space, but allows you to browse and commit entirely locally. We wanted a happy medium between the speed and space of a centralized system and the robustness and flexibility of a distributed one.

Improving clone and pull

Normally when you run a pull, Mercurial figures out what has changed on the server since the last pull and downloads any new commit metadata and file contents. With tens of thousands of files changing every day, downloading all of this history to the client every day is slow. To solve this problem we created the remotefilelog extension for Mercurial. This extension changes the clone and pull commands to download only the commit metadata, while omitting all file changes that account for the bulk of the download. When a user performs an operation that needs the contents of files (such as checkout), we download the file contents on demand using Facebook’s existing memcache infrastructure. This allows clone and pull to be fast no matter how much history has changed, while only adding a slight overhead to checkout.

But what if the central Mercurial server goes down? A big benefit of distributed source control is the ability to work without interacting with the server. The remotefilelog extension intelligently caches the file revisions needed for your local commits so you can checkout, rebase, and commit to any of your existing bookmarks without needing to access the server. Since we still download all of the commit metadata, operations that don’t require file contents (such as log) are completely local as well. Lastly, we use Facebook’s memcache infrastructure as a caching layer in front of the central Mercurial server, so that even if the central repository goes down, memcache will continue to serve many of the file content requests.

This type of setup is of course not for everyone—it’s optimized for work environments that have a reliable Mercurial server and that are always connected to a fast, low-latency network. For work environments that don’t have a fast, reliable Internet connection, this extension could result in Mercurial commands being slow and failing unexpectedly when the server is congested or unreachable.

Clone and pull performance gains

Enabling the remotefilelog extension for employees at Facebook has made Mercurial clones and pulls 10x faster, bringing them down from minutes to seconds. In addition, because of the way remotefilelog stores its local data on disk, large rebases are 2x faster. When compared with our previous Git infrastructure, the numbers remain impressive. Achieving these types of performance gains through extensions is one of the big reasons we chose Mercurial.

Finally, the remotefilelog extension allowed us to shift most of the request traffic to memcache, which reduced the Mercurial server’s network load by more than 10x. This will make it easier for our Mercurial infrastructure to keep scaling to meet growing demand.

How it works

Mercurial has several nice abstractions that made this extension possible. The most notable is the filelog class. The filelog is a data structure for representing every revision of a particular file. Each version of a file is identified by a unique hash. Given a hash, the filelog can reconstruct the requested version of a file. The remotefilelog extension replaces the filelog with an alternative implementation that has the same interface. It accepts a hash, but instead of reconstructing the version of the file from local data, it fetches that version from either a local cache or the remote server. When we need to request a large number of files from the server, we do it in large batches to avoid the overhead of many requests.
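The on-demand, batched fetching described here can be sketched as follows; the class and method names are illustrative, not Mercurial's actual filelog API:

```python
# Sketch of the remotefilelog idea: a filelog-shaped object that resolves
# file hashes from a local cache first and falls back to a batched server
# fetch. Names are illustrative, not Mercurial's actual API.

class RemoteFilelog:
    def __init__(self, server_fetch):
        self.cache = {}                   # hash -> file contents
        self.server_fetch = server_fetch  # callable: [hashes] -> {hash: data}

    def read(self, node):
        """Return one revision, fetching from the server on a cache miss."""
        if node not in self.cache:
            self.cache.update(self.server_fetch([node]))
        return self.cache[node]

    def prefetch(self, nodes):
        """Batch-fetch many revisions in one round trip (e.g. for checkout)."""
        missing = [n for n in nodes if n not in self.cache]
        if missing:
            self.cache.update(self.server_fetch(missing))

# Fake server for illustration; counts round trips.
store = {"h1": b"rev 1", "h2": b"rev 2"}
calls = []
def fetch(nodes):
    calls.append(list(nodes))
    return {n: store[n] for n in nodes}

fl = RemoteFilelog(fetch)
fl.prefetch(["h1", "h2"])              # one batched round trip
print(fl.read("h1"), len(calls))       # b'rev 1' 1
```

Batching the prefetch is what keeps checkout overhead low: many file revisions cost a single request instead of one request per file.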

Open Source

Together, the hgwatchman and remotefilelog extensions have improved source control performance for our developers, allowing them to spend more time getting stuff done instead of waiting for their tools. If you have a large deployment of a distributed revision control system, we encourage you to take a look at them. They’ve made a difference for our developers, and we hope they will prove valuable to yours, too.



Drag and drop task list items on GitHub



You can now move checklist items around just by dragging and dropping them. Reorder items quickly and easily without editing the original comment’s Markdown.


How to re-order task list items

Create a task list item using - [ ] at the start of a new line. When you hover over the left-hand side of an item’s checkbox, you’ll see the option to drag and drop it into a new location.

Learn more from our documentation.

Need help or found a bug? Contact us.


