Learn about the various legal education applications at CALIcon16

Welcome! The Conference for Law School Computing® – aka CALIcon – is the only conference that brings together law professors, IT professionals, law librarians, and law school administrators to discuss technology and its impact on legal education. For more information about CALIcon, including registration, hotel, and program news, please read on. EDUCATION: The conference will […]

Original URL: http://spotlight.classcaster.net/2016/05/06/1366/  

Original article

Capti Narrator brings free text-to-speech to nearly any DRM-free e-book or document

I just learned about a Capti-vating new development in text-to-speech. PRWeb is carrying a press release about Capti Narrator, a free cloud-based text-to-speech app for 64-bit Windows and iOS (Android coming soon) that will effectively read aloud just about any DRM-free document.

The press release pitches it as useful with Project Gutenberg (“50,000+ eBooks by Project Gutenberg are Now Available as Free Audiobooks” reads the headline) but perhaps the greater use for it is reading aloud pretty much any DRM-free e-book or other document you have on hand (such as titles by Baen, O’Reilly, etc.), especially if you already have it in the cloud. It will also read aloud news stories or sites you’ve saved to Instapaper—a very useful feature that goes most e-readers one better.

Not only will this be great for people with visual or reading difficulties; the press release also touts it as a way to help learn English by hearing words spoken as they’re read.

Apparently this project has been in development for a while—at least since before the 2011 death of Project Gutenberg founder Michael Hart. The press release proclaims:

“I corresponded with Michael Hart when we were just starting with Capti; he told me that he saw a great purpose in our mission of enabling everyone to listen to all they want to read” – said Dr. Yevgen Borodin, the CEO of Charmtech Labs LLC. “And, today, I am thrilled to finally deliver on my promise and make Project Gutenberg eBooks available as free audiobooks to everyone!”

The app is currently available as a download for 64-bit Windows and a freemium app for iOS. (An Android version is promised soon.) The way it works is that you load your document into it and press play, and it starts reading it aloud to you using one of your system default speech-synthesizer voices. (I found two on Windows, and five on iOS.) If you want other voices, you can add them for prices ranging from $4.99 to $29.99 each. The Windows app launches in a web browser window (though it also runs in your system tray), and the iOS version is its own standalone app.

On the Windows version, you can add files to your playlist from a local file, Dropbox, Google Drive, Instapaper, OneDrive, Bookshare, or Project Gutenberg. The iOS app offers the same sources, plus Pocket, the clipboard, and a built-in web browser for picking out particular pages. File types can be PDF, Word, EPUB, DAISY, HTML, and “many other digital text formats.” I tried it with my e-book Joe & Julius from my Dropbox account and it worked just fine. The ease of uploading titles from the cloud rather reminds me of the way the iOS e-reader apps Marvin and Gerty will read and parse your Dropbox for e-books.

Once you’ve added titles to your playlist, when you hit play, the app reads along with your book, showing the text on the screen and highlighting it word by word. As expected, the result is somewhat mechanical, and awkward with cadence and the pronunciation of some words (contractions get short shrift: “we’re” gets pronounced as “we-ree”), but probably no better or worse than the text-to-speech functions of the Kindle or any other e-reader or app that supports read-aloud. If you’re used to those functions, you’ll have no problems with this one either.

Furthermore, after I originally posted this story, Dr. Borodin contacted me and noted that Capti’s premium voices sound a lot better than the system default voices, as the Capti-narrated YouTube video above this article demonstrates. I didn’t actually watch the whole video before writing this, but now that I have, I have to admit it’s pretty impressive. It might actually be worth shelling out some money for one of those voices if you plan to use this system often.

One other noteworthy aspect of the app is that it incorporates cloud playlist file and position sync. If you tell it to sync the playlist from your desktop, then sync from the mobile app (or vice versa), your Capti playlist has all the same titles in it, and it picks up right where you left off playing them.

The free version of the app will be extremely useful to the majority of readers, but there is also a premium plan available at a cost of $1.99 per month, or $9.99 for six months. It includes a number of extra features such as the ability to view any images that were incorporated in the original document as it is read aloud, the ability to translate words in your texts into any of 28 different languages, a full-text playlist search, and a linguistic game called “Word Challenge.” Though I don’t particularly need those features, I could see they would be very useful, especially to English-as-second-language students. The app strikes a pretty good balance between being useful enough for free and more useful for a slight extra cost—it’s not one of those apps where you have to pay something to get any use out of it at all.

The app could stand to be a little more user-friendly—it was a little tricky for me to find the functions to add titles to my playlist at first, and they’re in different places in the Windows and iOS versions. But once you start reading, it works surprisingly well. The voice is loud and clear, even if I don’t like the artificial way the system version sounds.

I can’t see using this program too often myself—I just can’t get past the artificiality of the computer voice. Though then again, if I buy one of the premium voices like the one that narrated that video I might change my mind. But I know that’s not a problem for many people who swear by it, and were disappointed the latest Kindles dropped the feature altogether. I predict that this free text-to-speech app will find a place on many, many computers and mobile devices.

The post Capti Narrator brings free text-to-speech to nearly any DRM-free e-book or document appeared first on TeleRead News: E-books, publishing, tech and beyond.

Original URL: http://www.teleread.com/capti-narrator-brings-free-text-to-speech-to-nearly-any-drm-free-e-book-or-document/  

Original article

Lessons the Federal Courts Might Learn from Westlaw’s Prolonged Data Processing Error

The Thomson Reuters Errata Notice

On April 15, 2016, Thomson Reuters notified subscribers to its online and print case law services that a significant number of U.S. decisions it had published since November 2014 contained errors.


Here and there words had been dropped.  The company explained that the errors had been introduced by software run on the electronic texts it collected from the authoring courts.  Thomson posted a list of the affected cases.  The initial list contained some 600 cases.  A week later it had grown to over 2,500 through the addition of cases loaded on Westlaw but not published in the National Reporter System (NRS).  Two weeks out the list included links to corrected versions of the affected cases with the restored language highlighted.  The process of making the corrections led Thomson to revise the number of casualties downward (see the list’s entry for U.S. v. Ganias, for example), but only slightly.

Thomson Reuters sought to minimize the importance of this event, asserting that none of the errors “changed the meaning of the law in the case.”  Commendably, Thomson apologized, acknowledging and detailing the errata.  It spun its handling of the processing error’s discovery as a demonstration of the company’s commitment to transparency.  On closer analysis the episode reveals major defects in the current system for disseminating federal case law (and the case law of those states that, like the lower federal courts, leave key elements of the process to Thomson Reuters).

Failure to View Case Law Publication as a Public Function

Neither the U.S. Courts of Appeals nor the U.S. District Courts have an “official publisher.”  No reporter’s office or similar public agency produces and stamps its seal on consistently formatted, final, citable versions of the judicial opinions rendered by those courts in the way the Reporter of Decisions of the U.S. Supreme Court does for the nation’s highest court.  By default, cemented in by over a century of market dominance and professional practice, the job of gathering and publishing the decisions of those courts in canonical form has fallen to a single commercial firm (originally the West Publishing Company, now, by acquisition and merger, Thomson Reuters).  Although that situation arose during the years in which print was the sole or principal medium of distribution, it has carried over into the digital era.  The federal judiciary’s failure to adopt and implement a system of non-proprietary, medium-neutral citation has allowed it to happen.

With varying degrees of effectiveness, individual court web sites do as they were mandated by Congress in the E-Government Act of 2002.  They provide electronic access to the court’s decisions as they are released.  The online decision files, spread across over one hundred sites, present opinion texts in a diversity of formats.  Crucially, all lack the citation data needed by any legal professional wishing to refer to a particular opinion or passage within it.  Nearly twenty years ago the American Bar Association called upon the nation’s courts to assume the task of assigning citations.  By now the judiciaries in close to one-third of the states have done so.  The federal courts have not.

Major Failings of the Federal Courts’ Existing Approach

Delivery of Decisions with PDF Pagination to Systems that Must Remove It

Several states, including a number that produce large volumes of appellate decisions, placed no cases on the Thomson Reuters errata list.  Conspicuous by their absence, for example, are decisions from the courts of California and New York.  The company’s identification of the software bug, combined with inspection of the corrected documents, explains why.  As Thomson wrote, it all began with an “upgrade to our PDF conversion process.”

The lower federal courts, like those of many states, release their decisions to Thomson Reuters, other redistributors, and the public as PDF files.  The page breaks in these “slip opinion” PDFs have absolutely no enduring value.  Thomson (like Lexis, Bloomberg Law, Casemaker, Fastcase, Google Scholar, Ravel Law, and the rest) must remove opinion texts from this electronic delivery package and pull together paragraphs and footnotes that straddle PDF pages.  All the words dropped by Thomson’s “PDF conversion process” were proximate to slip opinion page breaks.  Why are there no California and New York cases on the list?  Those states release appellate decisions in less rigid document formats.  California decisions are available in Microsoft Word format as well as PDF.  The New York Law Reporting Bureau releases decisions in HTML.  So does Oklahoma; no Oklahoma decisions appear on the Thomson errata list.
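The stitching those publishers must perform can be sketched in a few lines. This is a hypothetical illustration of the failure mode, not Thomson’s actual pipeline: per-page text fragments, including words hyphenated across a page break, have to be rejoined without losing anything at the seam.

```python
def stitch_pages(pages):
    """Join per-page text fragments extracted from a slip-opinion PDF.

    A paragraph that straddles a page break arrives split across two
    entries; joining with a space, and rejoining words hyphenated at
    the break, preserves every word near the seam.
    """
    text = ""
    for page in pages:
        fragment = page.strip()
        if text.endswith("-"):
            # A word was hyphenated across the page break: drop the
            # hyphen and join the two halves directly.
            text = text[:-1] + fragment
        elif text:
            text = text + " " + fragment
        else:
            text = fragment
    return text


pages = ["The defendant was charged in a three-count indict-",
         "ment with murder and armed robbery."]
print(stitch_pages(pages))
# → The defendant was charged in a three-count indictment with murder and armed robbery.
```

A converter that instead discarded “dangling” fragments at each page boundary would lose exactly the words nearest the break, which is the pattern visible in the corrected Westlaw documents.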

Failure to Employ One Consistent Format

The lower federal courts compound the PDF extraction challenge by employing no single consistent format.  Leaving individual judges of the ninety-four district courts to one side, the U.S. Courts of Appeals inflict a range of remarkably different styles on the commercial entities and non-profits that must process their decisions so that they scroll and present text, footnotes, and interior divisions on the screens of computers, tablets, and phones with reasonable efficiency and consistency.  The Second Circuit’s format features double-spaced texts, numbered lines, and bifurcated footnotes; the Seventh Circuit’s has single-spaced, unnumbered lines and very few footnotes (none in opinions by Judge Posner).

In contrast the decisions released by the Michigan Supreme Court, although embedded in PDF, reflect a cleanly consistent template.  The same is true of those coming from the supreme courts of Florida, Texas, and Wisconsin.  Decisions from these states do not appear on the Thomson list.

Lack of a Readily Accessible, Authenticated Archive of the Official Version

By its own account it took Thomson Reuters over a year to discover this data processing problem.  With human proofreaders it would not have taken so long.  Patently, they are no longer part of the company’s publication process.  Some of the omitted words would have been invisible to anyone, or any software, not performing a word-for-word comparison between the decision released by the court and the Westlaw/National Reporter System version.  Dropping “So ordered” from the end of an opinion or the word “Plaintiff” before the party’s name at its beginning falls in this category.  However, the vast majority of the omissions rendered the affected sentence or sentences unintelligible.  At least one removed part of a web site URL.  Others dropped citations.  In the case of a number of state courts, a reader perplexed by a commercial service’s version of a decision can readily retrieve an official copy of the opinion text from a public site and compare its language.  That is true, for example, in Illinois.  Anyone reading the 2015 Illinois Supreme Court decision in People v. Smith on Westlaw, puzzled by the sentence “¶ 3 The defendant, Mickey D. Smith, was charged in a three-count indictment lawful justification and with intent to cause great bodily harm, shot White in the back with a handgun thereby causing his death.”, could have pulled the original, official opinion from the judiciary web site simply by employing a Google search and the decision’s court-attached citation (2015 IL 116572), scrolled directly to paragraph 3, and discovered the Westlaw error.  The same holds for the other six published Illinois decisions on the Thomson list.  Since New Mexico also posts final, official versions of its decisions outfitted with public domain citations, it, too, provides a straightforward way for users of Westlaw or any other commercial service to check the accuracy of dubious case data.
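The word-for-word comparison needed to catch such omissions is mechanically simple, which makes the year-long discovery delay the more striking. A minimal sketch using Python’s standard difflib; the “official” and “published” texts here are hypothetical fragments modeled on the Smith error:

```python
import difflib

def dropped_words(official, published):
    """Return the words that appear in the official opinion text
    but are missing from the published version."""
    diff = difflib.ndiff(official.split(), published.split())
    return [token[2:] for token in diff if token.startswith("- ")]

official = ("was charged in a three-count indictment with murder "
            "without lawful justification")
published = "was charged in a three-count indictment lawful justification"
print(dropped_words(official, published))
# → ['with', 'murder', 'without']
```

Run against the court-released slip opinion and the database version, any non-empty result would flag a case for human review.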

The growing digital repository of federal court decisions on the GPO’s FDsys site falls short of the standard set by these state examples.  To begin, it is seriously incomplete.  Over fifty of the entries on the Thomson Reuters list are decisions from the Southern District of New York, a court not yet included in FDsys.  Moreover, since the federal courts employ no system of court-applied citation, there is no simple way to retrieve a specific decision from FDsys or to move directly to a puzzling passage within it.  With an unusual party name or docket number the FDsys search utility may prove effective, but with a case name like “U.S. v. White” retrieval is a challenge.  A unique citation would make the process far less cumbersome.  However, since the lower federal courts rely on Thomson Reuters to attach enduring citations to their cases (in the form of volume and page numbers in its commercial publications), the texts flow into FDsys without them.

The Ripple of the Thomson Reuters Errors into Other Database Systems

Because the federal courts have allowed the citation data assigned by Thomson Reuters, including the location of interior page breaks, to remain the de facto citation standard for U.S. lawyers and judges, all other publishers are compelled in some degree to draw upon the National Reporter System.  They cannot simply work from the texts released by their deciding courts, but must, once a case has received Thomson editorial treatment and citation assignment, secure at least some of what Thomson has added.  That introduces both unnecessary expense and a second point of data vulnerability to case law dissemination.  Possible approaches range from: (a) extracting only the volume and pagination from the Thomson reports (print or electronic) and inserting that data in the version of the decision released by the court to (b) replacing the court’s original version with a full digital copy of the NRS version.  Whether the other publisher acquires the Thomson Reuters data in electronic form under license or by redigitizing the NRS print reports, the second approach will inevitably pick up errors injected by Thomson Reuters editors and software.  For that reason the recent episode illuminates how the various online research services assemble case data.

Services Unaffected by the Thomson Reuters Glitch

Lexis was not affected by the Thomson Reuters errors because it does not draw decision texts from the National Reporter System.  (That is not to say that Lexis is not capable of committing similar processing errors of its own.  See the first paragraph in the Lexis version of U.S. v. Ravensberg, 776 F.3d 587 (7th Cir. 2015).)  So that Lexis subscribers can cite opinions using the volume and page numbers assigned by Thomson, Lexis extracts them from the NRS reports and inserts them in the original text.  In other respects, however, it does not conform decision data to that found in Westlaw.  As explained elsewhere, its approach is revealed in how the service treats cases that contain internal cross-references.  In the federal courts and other jurisdictions still using print-based citation, a dissenting judge referring to a portion of the majority opinion must use “slip opinion” pagination.  Later, when the case is published by Thomson Reuters, these “ante at” references are converted by the company’s editors, software, or some combination of the two to the pagination of the volume in which the case appears.  Search recent U.S. Court of Appeals decisions on Lexis for the phrase “ante at” and you will discover that in its system they remain in their original “slip opinion” form.  For a single example, compare Judge Garza’s dissenting opinion in In re Deepwater Horizon, 739 F.3d 790 (5th Cir. 2014) as it appears on Lexis with the version on Westlaw or in the pages of the Federal Reporter.

Bloomberg Law appears to draw more extensively on the NRS version of a decision.  Its version of the Garza dissent in In re Deepwater Horizon expresses the cross references in Federal Reporter pagination.  However, like Lexis it does not replace the original “slip opinions” with the versions appearing in the pages of the Federal Reporter.  Examination of a sample of the cases Thomson Reuters has identified as flawed finds that Bloomberg Law, like Lexis, has the dropped language.  Casemaker does as well.

Services that Copy Directly from Thomson’s Reports, Errors and All

In contrast, Fastcase, Google Scholar, and Ravel Law all appear to replace “slip opinions” with digitized texts drawn from the National Reporter System.  As a consequence when Thomson Reuters drops words or makes other changes in an original opinion text so do they.  The Westlaw errors are still to be found in the case data of these other services.

Might FDsys Provide a Solution?


Since 2011 decisions from a growing number of federal courts have been collected, authenticated, and digitally stored in their original format as part of the GPO’s FDsys program.  As noted earlier, that data gathering is still seriously incomplete.  Furthermore, the GPO role is currently limited to authenticating decision files and adding a very modest set of metadata.  Adding decision identifiers designed to facilitate retrieval of individual cases, ideally designations consistent with emerging norms of medium-neutral citation, would be an enormously useful extension of that role.  So would the assignment of paragraph numbers throughout decision texts, although that task properly belongs at the source.  It is time for the Judicial Conference of the United States to revisit vendor- and medium-neutral citation.

Original URL: http://citeblog.access-to-law.com/?p=598  

Original article

Atom: 2 Years Open Source

Two years ago today, Atom was released as open source software. One year ago, we wanted to share with the community a vision of how far Atom had come in that time.

We are impressed and delighted with the progress that Atom has made in the past year. The number of users and contributors continues to grow. The editor and the ecosystem around it improve every day. So we wanted to share some milestones with you again this year:

Infographic: 2 Years Open Source

We’re living in the future today thanks to your passion, commitment and belief in the ideas behind the Atom editor. We look forward to many more anniversaries with all of you.

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/xB0hoyDhtOE/two-years-open-source.html  

Original article

Cipherli.st – Strong Ciphers for Apache, Nginx and Lighttpd


# PostgreSQL (postgresql.conf):
# replication:
ssl = on
ssl_ciphers = 'AES128+EECDH:AES128+EDH'
ssl_renegotiation_limit = 512MB
password_encryption = on

# OpenSSH server (sshd_config):
Protocol 2
HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key
KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,umac-128@openssh.com

# OpenSSH client (ssh_config):
HashKnownHosts yes
Host github.com
    MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512
Host *
  ConnectTimeout 30
  KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
  MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,umac-128@openssh.com
  Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
  ServerAliveInterval 10
  ControlMaster auto
  ControlPersist yes
  ControlPath ~/.ssh/socket-%r@%h:%p

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/cgAiIYBs7RA/  

Original article

Hackers are the new lawyers

Hackers are the new Lawyers are the new Knights.

Back in the feudal days, Lords had Knights to protect them and their possessions from enemies (or to attack their neighbors and plunder their wealth).

In the business world today, lawyers are much like the knights of old. Companies can attack each other with the law, and lawyers are both the attackers and the defenders.

In the increasingly computerized world, businesses hack each other to steal intellectual property, understand each other’s strategy, and sometimes take each other down altogether (like what happened to HBGary, though that wasn’t caused by another company). Today these tactics are employed mostly by Chinese companies hacking competitors in other countries, but as multinational corporations continue to grow in power, the tactic might become more ubiquitous. Imagine a world where corporations grow more powerful than governments: cyber attacks might become as common as legal attacks if there is no government with enough power to catch or stop the attackers.

In today’s world, and very likely even more so in the future, businesses with no cyber-firepower will be analogous to the peaceful farming villages of the feudal days that had no military capacity, just waiting to be attacked by their militarily superior neighbors.

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/aX7swtnP4iA/  

Original article

Flexbox-layout: Flexbox for Android


FlexboxLayout is a library project that brings capabilities similar to the
CSS Flexible Box Layout Module to Android.


Add the following dependency to your build.gradle file.

dependencies {
    compile 'com.google.android:flexbox:0.1.1'
}

FlexboxLayout extends ViewGroup, like LinearLayout and RelativeLayout.
You can specify its attributes in a layout XML like:

<com.google.android.flexbox.FlexboxLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:flexWrap="wrap"
    app:alignItems="stretch"
    app:alignContent="stretch" >

    <TextView
        android:id="@+id/textview1"
        android:layout_width="120dp"
        android:layout_height="80dp"
        app:layout_flexBasisPercent="50%" />

    <!-- further children omitted -->
</com.google.android.flexbox.FlexboxLayout>

Or from code like:

FlexboxLayout flexboxLayout = (FlexboxLayout) findViewById(R.id.flexbox_layout);

View view = flexboxLayout.getChildAt(0);
FlexboxLayout.LayoutParams lp = (FlexboxLayout.LayoutParams) view.getLayoutParams();
lp.order = -1;    // lay this child out before siblings with higher order values
lp.flexGrow = 2;  // take twice a unit share of any positive free space
view.setLayoutParams(lp);

Supported attributes

You can specify the following attributes for the FlexboxLayout.

  • flexDirection

    • The direction in which child items are placed inside the Flexbox layout; it determines the
      direction of the main axis (and the cross axis, perpendicular to the main axis).
      Possible values are:

      • row (default)
      • row_reverse
      • column
      • column_reverse

      Flex Direction explanation

  • flexWrap

    • This attribute controls whether the flex container is single-line or multi-line, and the
      direction of the cross axis. Possible values are:

      • nowrap (default)
      • wrap
      • wrap_reverse

      Flex Wrap explanation

  • justifyContent

    • This attribute controls the alignment along the main axis. Possible values are:

      • flex_start (default)
      • flex_end
      • center
      • space_between
      • space_around

      Justify Content explanation

  • alignItems

    • This attribute controls the alignment along the cross axis. Possible values are:

      • stretch (default)
      • flex_start
      • flex_end
      • center
      • baseline

      Align Items explanation

  • alignContent

    • This attribute controls the alignment of the flex lines in the flex container. Possible
      values are:

      • stretch (default)
      • flex_start
      • flex_end
      • center
      • space_between
      • space_around

      Align Content explanation

You can also specify the following attributes for the children of a FlexboxLayout.

  • layout_order

    • This attribute changes the order in which the child views are laid out.
      By default, children are displayed and laid out in the same order as they appear in the
      layout XML. If not specified, 1 is used as the default value.
  • layout_flexGrow

    • This attribute determines how much this child will grow, relative to the other flex
      items in the same flex line, when positive free space is distributed.
      If not specified, 0 is used as the default value.
  • layout_flexShrink

    • This attribute determines how much this child will shrink, relative to the other flex
      items in the same flex line, when negative free space is distributed.
      If not specified, 1 is used as the default value.
  • layout_alignSelf

    • This attribute determines the alignment along the cross axis (perpendicular to the
      main axis). The alignment in the same direction can be determined by the
      alignItems in the parent, but if this is set to other than
      auto, the cross axis alignment is overridden for this child. Possible values are:

      • auto (default)
      • flex_start
      • flex_end
      • center
      • baseline
      • stretch
  • layout_flexBasisPercent

    • The initial length of the flex item, expressed as a fraction of the parent’s main size.
      If this value is set, the length specified by layout_width
      (or layout_height) is overridden by the value calculated from this attribute.
      This attribute is only effective when the parent’s length is definite (MeasureSpec mode is
      MeasureSpec.EXACTLY). The default value is -1, which means not set.
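Taken together, layout_flexBasisPercent and layout_flexGrow reduce to simple proportional arithmetic when a flex line is measured. A rough sketch of that distribution (an illustrative model only, not the library’s actual measurement code; the function name is invented):

```python
def distribute_main_sizes(container, items):
    """Compute final main-axis sizes for the items on one flex line.

    Each item is a (basis_percent, flex_grow) pair: basis_percent sets
    the initial size as a fraction of the container (like
    layout_flexBasisPercent), and any positive free space left over is
    shared in proportion to flex_grow (like layout_flexGrow).
    """
    sizes = [container * basis for basis, _ in items]
    free_space = container - sum(sizes)
    total_grow = sum(grow for _, grow in items)
    if free_space > 0 and total_grow > 0:
        sizes = [size + free_space * grow / total_grow
                 for size, (_, grow) in zip(sizes, items)]
    return sizes


# Two children at 25% basis in a 400-unit-wide container; the second
# has flexGrow = 2, so it receives two thirds of the 200 free units.
print(distribute_main_sizes(400, [(0.25, 1), (0.25, 2)]))
```

Negative free space is shared analogously through layout_flexShrink, with 1 as each child’s default weight.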

Known differences from the original CSS specification

This library tries to match the capabilities of the original
CSS Flexible Box specification as much as possible,
but because attributes can’t be specified the same way in
CSS and Android XML, there are some known differences from the original specification.

(1) There is no flex-flow equivalent attribute

  • Because flex-flow is a shorthand for setting the flex-direction and flex-wrap properties,
    setting two properties through a single attribute is not practical in Android.

(2) There is no flex equivalent attribute

  • Likewise, flex is a shorthand for setting flex-grow, flex-shrink, and flex-basis;
    setting those three properties through a single attribute is not practical.

(3) layout_flexBasisPercent is introduced instead of flex-basis

  • Both layout_flexBasisPercent in this library and the flex-basis property in CSS are used to
    determine the initial length of an individual flex item. The flex-basis property accepts width
    values such as 1em, 10px, and content as strings, as well as percentage values such as
    10% and 30%, whereas layout_flexBasisPercent only accepts percentage values.
    An initial fixed length can still be set by specifying width (or height) values in
    layout_width (or layout_height, depending on the flexDirection), and the same effect as
    content can be achieved by specifying wrap_content there. Thus, layout_flexBasisPercent
    covers only the percentage case, which can’t be expressed through layout_width
    (or layout_height).

(4) min-width and min-height can’t be specified

  • These simply aren’t implemented yet.

How to make contributions

Please read and follow the steps in CONTRIBUTING.md


Please see the LICENSE file.

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/gdRxMc2x5ak/flexbox-layout  

Original article

.NET Core RC2 – Improvements, Schedule, and Roadmap

This post was written by Scott Hunter.

You all want .NET Core 1.0 RC2, you want a schedule, and you want to go live. Today we’ve got a schedule to share plus some changes that will improve things for everyone in the .NET Community going forward.

The Core Schedule

  • .NET Core and ASP.NET Core 1.0 RC2 runtime and libraries will be available in mid-May.
    • Tooling will be Preview 1 and bundled with this release.
  • .NET Core and ASP.NET Core 1.0 RTM (release) runtime and libraries will be available by the end of June.
    • Tooling will be Preview 2 and bundled with this release.
  • We will continue to make changes and stabilize the tooling until it RTMs with Visual Studio “15”.

How We Got Here

The ASP.NET team started two and a half years ago, building a new version of ASP.NET that was modular, cross platform, and high-performance. This new version of ASP.NET was built on a new .NET Execution Environment (DNX) that was optimized for modern cloud-focused workloads (websites, microservices, etc.). We shipped an RC1 of those bits in November.

After shipping ASP.NET Core 1.0 RC1, it was very important to broaden .NET Core to also support building native console applications. So we started the process of reworking the tool chain so it could be used to build .NET console applications, class libraries, and server applications. This process has proved to be harder than we anticipated and led to us removing the RC2/RTM dates from our schedule in February.

Unifying the frameworks and the tooling

Now that Xamarin is a part of Microsoft, more than ever we want to make it easy to share code between desktop, server and mobile applications.

We announced the .NET Standard at Build as part of our plan for making it easy to share code across .NET application models.

We also need to make it easy to work with projects across these application models. In order to do this we are working to merge the capabilities of .xproj/project.json and .csproj project systems into a single project system based on MSBuild. This transition will be automatic and will not require you to change your existing projects. This work will happen during the VS 15 release schedule and we will release another blog post with more details.

What does Preview mean?

Remember that .NET Core has two main parts:

  • The Runtime/Libraries – This is the CLR, libraries, compilers, etc.
  • The Tooling – This is all the support in the .NET Core command line tools, Visual Studio and Visual Studio Code that enable you to work with .NET Core projects.

We’re splitting the .NET Core “release train” so that those of you who are waiting can go live on .NET Core 1.0 RC2 with confidence, while we continue to deliver on our plans for the tooling:

  • The .NET Core 1.0 RC2 runtime is a true Release Candidate. It’s solid and stable, we feel good about it, and it won’t change for RTM unless something critical happens. It will have a “go-live” license, meaning you can get official support from Microsoft.
  • The tooling that supports .NET Core and ASP.NET Core, including the new command line tools and the bits that plug into Visual Studio and Visual Studio Code, isn’t there yet. It’s going to change before it stabilizes. We’re going to call this tooling release Preview 1.


If you have questions, please share them below and the team will answer them here. We will also discuss this delivery schedule change on our next ASP.NET Community Standup with a full question and answer segment.

Scott Hunter – .NET Team

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/o5G4Op83K70/  

Original article

Why Google App Engine Rocks: A Google Engineer’s Take

Posted by Luke Stone, Director of Technical Support

In December 2011, I had been working for Google for nine years and was leading a team of 10 software developers, supporting the AdSense business. Our portfolio consisted of over 30 software systems, mostly web apps for business intelligence that had been built over the past decade, each on a stack that seemed like a good idea at the time. Some were state-of-the-art custom servers built on the (then) latest Google web server libraries and running directly on Borg. Some were a LAMP stack on a managed hosting service. Some were running as a cron job on someone’s workstation. Some were weird monsters, like a LAMP stack running on Borg with Apache customized to work with production load balancers and encryption. Things were breaking in new and wonderful ways every day. It was all we could do to keep the systems running, just barely.

The team was stressed out. The Product Managers and engineers were frustrated. A typical conversation went like this:

          PM: “You thought it would be easy to add the foobar feature, but it’s been four…”
          Eng: “I know, I know, but I had to upgrade the package manager version first, and then migrate off some deprecated APIs. I’m almost done with that stuff. I’m eager to start on the foobar, too.”
          PM: “Well, now, that’s disappointing.”

I surveyed the team to find the root cause of our inefficiency: we were spending 60% of our time on maintenance. I asked how much time would be appropriate, and the answer was a grudging 25%. We made a goal to reduce our maintenance to that point, which would free up the time equivalent of three and a half of our 10 developers.
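The arithmetic behind that figure is worth spelling out, using the numbers from the survey above:

```python
# Capacity reclaimed by cutting maintenance from 60% to 25% of team time.
team_size = 10
maintenance_before = 0.60
maintenance_after = 0.25

freed = round((maintenance_before - maintenance_after) * team_size, 2)
print(freed)  # 3.5 developer-equivalents freed up
```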

Google App Engine had just come out of preview in September 2011. A friend recommended it heartily (he’d been using it for a personal site) and raved that it was low-maintenance, auto-scaling and had built-in features like Google Cloud Datastore and user management. Another friend, Alex Martelli, was using it for several personal projects. I myself had used it for a charity website since 2010. We decided to use it for all of our web serving. It was the team’s first step into PaaS.
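Part of what made App Engine low-maintenance is that its Python runtime served standard WSGI callables, with scaling and process management handled by the platform. A minimal sketch of that handler shape, using only the standard library’s WSGI plumbing rather than the actual App Engine SDK:

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    # A minimal WSGI callable: the shape of handler a PaaS like App
    # Engine routes requests to, with serving and scaling managed for you.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from a managed runtime"]

# Drive the app with a synthetic request; no server process needed.
environ = {}
setup_testing_defaults(environ)
captured = []
body = app(environ, lambda status, headers: captured.append(status))
print(captured[0], body[0].decode())  # 200 OK Hello from a managed runtime
```

The appeal was that everything outside this function (servers, load balancing, logging, scaling) stopped being the team’s problem.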

Around the same time, we started using Dremel, Google’s internal version of BigQuery. It was incredibly fast compared to MapReduce, and it scaled almost as well. We decided to re-write all of our data processing to use it, even though there were still a few functional gaps between it and App Engine at the time, for example visualization and data pipelines. We whipped up solutions that are still in use by hundreds of projects at Google. Now Google Cloud Platform users can access similar functionality using Google Cloud Datalab.

What we saw next was an amazing transformation in the way that software developers worked. Yes, we had to re-write 30 systems, but they needed to be re-written anyway. With that finished, developing on the cloud was so much faster: looking at the App Engine logs, I was astonished to see that I had done 100 code, test, and deploy cycles in a single coding session. Once things were working, they kept working for a long time. We stopped debating what stack to choose for the next project. We just grabbed the most obvious one from Google Cloud Platform and started building. If we found a bug in the cloud infrastructure, it was promptly fixed by an expert. What a change from spending hours troubleshooting library compatibility!

Best of all, we quickly got the time we spent on maintenance down to 25%, and it kept going down. At the end of two years I repeated the survey; the team reported that they now only spent 5% of their time on maintenance.

We started having good and different problems. The business wasn’t generating ideas fast enough to keep us busy, and we had no backlog. We started to take two weeks at the end of every quarter for a “hackathon” to see what we could dream up. We transferred half of the developers to another, busier team outside of Cloud. We tackled larger projects and started out-pacing much larger development teams.

After seeing how using PaaS changed things for my team, I want everyone to experience it. Thankfully, these technologies are available not only to Google engineers, but to developers the world over. This is the most transformational technology I’ve seen since I first used Google Search in 1999: it lets developers stop doing dumb things and get on with developing the applications that add value to our lives.

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/YZ6YStHtCG0/why-Google-App-Engine-rocks-a-Google-engineers-take.html  

Original article
