CERN Releases 300TB of Large Hadron Collider Data Into Open Access

An anonymous reader writes: The European Organization for Nuclear Research, known as CERN, has released 300 terabytes of collider data to the public. “Once we’ve exhausted our exploration of the data, we see no reason not to make them available publicly,” said Kati Lassila-Perini, a physicist who works on the Compact Muon Solenoid detector. “The benefits are numerous, from inspiring high school students to the training of the particle physicists of tomorrow. And personally, as CMS’s data preservation coordinator, this is a crucial part of ensuring the long-term availability of our research data,” she said in a news release accompanying the data. Much of the data comes from 2011 runs in which protons collided at 7 TeV (teraelectronvolts). The 300 terabytes include both raw data from the detectors and “derived” datasets, and CERN is providing tools to work with them.




Original URL: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/L0u0loQINQM/cern-releases-300tb-of-large-hadron-collider-data-into-open-access

Original article

Apple tells developers watchOS apps must work without an iPhone

Apple has announced to developers that, starting June 1, all watchOS apps submitted for inclusion in the App Store must be native apps built with the watchOS 2 SDK. In practice, this means Apple Watch apps must function without an iPhone. Dependence on a paired phone is something that has plagued wearables from other manufacturers, including Samsung, and the new rules will almost certainly go down well with consumers. Ultimately this should lead to an improvement in the quality of Apple Watch apps, as developers will be forced to build in more functionality. It’s not, however, all good news. The… [Continue Reading]


Original URL: http://feeds.betanews.com/~r/bn/~3/1M7Sp43PtQc/

Original article

Windows Subsystem for Linux Architectural Overview

We recently announced Bash on Ubuntu on Windows, which enables native Linux ELF64 binaries to run on Windows via the Windows Subsystem for Linux (WSL). This subsystem was created by the Microsoft Windows Kernel team and has generated a lot of excitement. One of the most frequent questions we are asked is how this approach differs from a traditional virtual machine. In this first of a series of blog posts, we will provide an overview of WSL that answers that and other questions. In future posts we will dive deep into the component areas introduced.

History of Windows Subsystems

Since its inception, Microsoft Windows NT was designed to allow environment subsystems like Win32 to present a programmatic interface to applications without being tied to implementation details inside the kernel. This allowed the NT kernel to support POSIX, OS/2 and Win32 subsystems at its initial release.

Early subsystems were implemented as user mode modules that issued appropriate NT system calls based on the API they presented to applications for that subsystem. Each application was a PE/COFF executable, accompanied by a set of libraries and services that implemented the subsystem API and by NTDLL to perform the NT system calls. When a user mode application was launched, the loader invoked the right subsystem to satisfy the application’s dependencies based on the executable header.

Later versions of subsystems replaced the POSIX layer to provide the Subsystem for Unix-based Applications (SUA). It was composed of user mode components to satisfy:

  1. Process and signal management
  2. Terminal management
  3. System service requests and inter process communication

The primary role of SUA was to encourage applications to be ported to Windows without significant rewrites. This was achieved by implementing the POSIX user mode APIs using NT constructs. Because these components were constructed in user mode, it was difficult to achieve semantic and performance parity for kernel mode system calls like fork(). And because this model relied on programs being recompiled, it required ongoing feature porting and was a maintenance burden.

Over time these initial subsystems were retired.

Since the Windows NT Kernel was architected to allow new subsystem environments, we were able to use the initial investments made in this area and broaden them to develop the Windows Subsystem for Linux.

Windows Subsystem for Linux

WSL is a collection of components that enables native Linux ELF64 binaries to run on Windows. It contains both user mode and kernel mode components. It is primarily composed of:

  1. User mode session manager service that handles the Linux instance life cycle
  2. Pico provider drivers (lxss.sys, lxcore.sys) that emulate a Linux kernel by translating Linux syscalls
  3. Pico processes that host the unmodified user mode Linux (e.g. /bin/bash)

It is the space between the user mode Linux binaries and the Windows kernel components where the magic happens. By placing unmodified Linux binaries in Pico processes we enable Linux system calls to be directed into the Windows kernel. The lxss.sys and lxcore.sys drivers translate the Linux system calls into NT APIs and emulate the Linux kernel.

LXSS diagram

Figure 1: WSL Components

LXSS Manager Service

The LXSS Manager Service is a broker to the Linux subsystem driver and is the way Bash.exe invokes Linux binaries. The service is also used for synchronization around install and uninstall, allowing only one process to do those operations at a time and blocking Linux binaries from being launched while the operation is pending.

All Linux processes launched by a particular user go into a Linux instance. That instance is a data structure that keeps track of all LX processes, threads, and runtime state. The first time an NT process requests the launch of a Linux binary, an instance is created.

Once the last NT client closes, the Linux instance is terminated. This includes any processes that were launched inside of the instance including daemons (e.g. the git credential cache).

Pico Process

As part of Project Drawbridge, the Windows kernel introduced the concept of Pico processes and Pico drivers. Pico processes are OS processes without the trappings of OS services associated with subsystems, such as a Win32 Process Environment Block (PEB). Furthermore, for a Pico process, system calls and user mode exceptions are dispatched to a paired driver.

Pico processes and drivers provide the foundation for the Windows Subsystem for Linux.  The subsystem is able to run native unmodified Linux code by loading a binary executable into the process’s address space and emulating the underlying Linux kernel.

System Calls

WSL executes unmodified Linux ELF64 binaries by virtualizing a Linux kernel interface on top of the Windows NT kernel. One of the kernel interfaces it exposes is the system call (syscall) interface. A syscall is a service provided by the kernel that can be called from user mode. Both the Linux kernel and the Windows NT kernel expose several hundred syscalls to user mode, but they have different semantics and are generally not directly compatible. For example, the Linux kernel includes calls like fork, open, and kill, while the Windows NT kernel has the comparable NtCreateProcess, NtOpenFile, and NtTerminateProcess.

The Windows Subsystem for Linux includes kernel mode drivers (lxss.sys and lxcore.sys) that are responsible for handling Linux system call requests in coordination with the Windows NT kernel. The drivers do not contain code from the Linux kernel but are instead a clean-room implementation of Linux-compatible kernel interfaces. On native Linux, when a syscall is made from a user mode executable, it is handled by the Linux kernel. On WSL, when a syscall is made from the same executable, the Windows NT kernel forwards the request to lxcore.sys. Where possible, lxcore.sys translates the Linux syscall to the equivalent Windows NT call, which in turn does the heavy lifting. Where there is no reasonable mapping, the Windows kernel mode driver must service the request directly.

As an example, the Linux fork() syscall has no direct equivalent call documented for Windows. When a fork system call is made to the Windows Subsystem for Linux, lxcore.sys does some of the initial work to prepare for copying the process. It then calls internal Windows NT kernel APIs to create the process with the correct semantics, and completes copying additional data for the new process.
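To make the flow concrete, here is a minimal, hedged sketch. It assumes Node.js has been installed inside the WSL environment (any unmodified Linux binary behaves the same way, and the paths are just illustrations); each call below bottoms out in Linux syscalls that lxcore.sys must service:

// syscall-demo.js, run inside Bash on Ubuntu on Windows. Assumes Node.js has
// been installed in the Linux environment (e.g. via apt-get).
var fs = require('fs');
var spawn = require('child_process').spawn;

// Reading a file issues open/read/close syscalls, which lxcore.sys maps to
// NT calls such as NtOpenFile and NtReadFile.
var hostname = fs.readFileSync('/etc/hostname', 'utf8');
console.log('hostname:', hostname.trim());

// Spawning a child ultimately relies on fork()/exec() on Linux. Since there is
// no documented fork() equivalent in Windows, lxcore.sys builds the new
// process itself using internal NT kernel APIs, as described above.
var child = spawn('/bin/ls', ['-l', '/usr']);
child.stdout.pipe(process.stdout);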

File system

File system support in WSL was designed to meet two goals.

  1. Provide an environment that supports the full fidelity of Linux file systems
  2. Allow interoperability with drives and files in Windows

The Windows Subsystem for Linux provides virtual file system support similar to the real Linux kernel. Two file systems are used to provide access to files on the user’s system: VolFs and DriveFs.

VolFs

VolFs is a file system that provides full support for Linux file system features, including:

  • Linux permissions that can be modified through operations such as chmod and chown
  • Symbolic links to other files
  • File names with characters that are not normally legal in Windows file names
  • Case sensitivity

Directories containing the Linux system and application files (/etc, /bin, /usr, etc.), as well as the user’s Linux home folder, all use VolFs.

Interoperability between Windows applications and files in VolFs is not supported.
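As a hedged illustration of the features listed above, here is a short Node.js sketch (again assuming Node.js is installed inside WSL; the file names are made up), run from the Linux home folder, which lives on VolFs:

// volfs-demo.js, run from the Linux home directory on VolFs.
var fs = require('fs');

// Linux permission bits are preserved; chmod works as expected.
fs.writeFileSync('script.sh', '#!/bin/sh\necho hello\n');
fs.chmodSync('script.sh', 0o755);

// Symbolic links to other files are supported.
fs.symlinkSync('script.sh', 'script-link');

// Case sensitivity: these are two distinct files on VolFs.
fs.writeFileSync('Notes.txt', 'first');
fs.writeFileSync('notes.txt', 'second');

// Characters that are illegal in Windows file names are fine here.
fs.writeFileSync('draft:v2?.txt', 'ok');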

DriveFs

DriveFs is the file system used for interoperability with Windows. It requires all file names to be legal Windows file names, uses Windows security, and does not support all the features of Linux file systems. File names are case insensitive, and users cannot create files whose names differ only by case.

All fixed Windows volumes are mounted under /mnt/c, /mnt/d, etc., using DriveFs. This is where users can access all Windows files. This allows users to edit files with their favorite Windows editors such as Visual Studio Code, and manipulate them with open source tools in Bash using WSL at the same time.
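For example, a sketch along the same lines (the path under /mnt/c is hypothetical; substitute one that exists on your machine):

// drivefs-demo.js, reading and writing a Windows file through DriveFs.
var fs = require('fs');

// C:\Users\me\project\readme.md appears to Linux as the path below.
var winPath = '/mnt/c/Users/me/project/readme.md';
var contents = fs.readFileSync(winPath, 'utf8');

// Changes made here are immediately visible to Windows editors, and vice versa.
fs.writeFileSync(winPath, contents + '\nEdited from Bash using WSL.\n');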

In future blog posts we will provide additional information on the inner workings of these component areas. The next post will cover more details on the Pico Process which is a foundational building block of WSL.

Deepu Thomas and Seth Juarez discuss the underlying architecture that enables the Windows Subsystem for Linux.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/5GKZ2NEkV-s/

Original article

TIL target=_blank is insecure

What problems does rel=noopener solve?

You’re currently viewing index.html.

Imagine the following is user-generated content on your website:

Click me!!1 (same-origin)

Clicking the above link opens malicious.html in a new tab (using target=_blank). By itself, that’s not very exciting.

However, the malicious.html document in this new tab has a window.opener which points to the window of the HTML document you’re viewing right now, i.e. index.html.

This means that once the user clicks the link, malicious.html has full control over this document’s window object!

Note that this also works when index.html and malicious.html are on different origins: window.opener.location is accessible across origins! (Reads of things like window.opener.document are blocked by the same-origin policy, though.) Here’s an example with a cross-origin link:

Click me!!1 (cross-origin)

In this proof of concept, malicious.html replaces the tab containing index.html with index.html#hax, which displays a hidden message. This is a relatively harmless example, but it could just as easily have redirected to a phishing page designed to look like the real index.html, asking for login credentials. The user likely wouldn’t notice, because the focus is on the malicious page in the new window while the redirect happens in the background. The attack could be made even more subtle by adding a delay before redirecting to the phishing page in the background (see tab nabbing).
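For illustration, here is a sketch of the kind of script malicious.html could run; the phishing URL is a placeholder:

// Inside malicious.html, which was opened from index.html via target=_blank.
if (window.opener) {
  // Navigating the opener works even across origins; only reads such as
  // window.opener.document are blocked. The user is focused on this new tab,
  // so the swap happening in the background tab is easy to miss.
  window.opener.location = 'https://phishing.example/fake-index.html';
}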

TL;DR If window.opener is set, a page can trigger a navigation in the opener regardless of security origin.

Recommendations

To prevent pages from abusing window.opener, use rel=noopener. This ensures window.opener is null in browsers that support it, at the time of writing Chrome 49 and Opera 36.

Click me!!1 (now with rel=noopener)

For older browsers, you could use rel=noreferrer, which also disables the Referer HTTP header, or the following JavaScript workaround, which potentially triggers the popup blocker:

var otherWindow = window.open(); // open a blank window first
otherWindow.opener = null;       // sever its reference back to this page
otherWindow.location = url;      // then navigate it to the destination

Don’t use target=_blank (or any other target that opens a new navigation context), especially for links in user-generated content, unless you have a good reason to.

Bug tickets to follow

Questions? Feedback? Tweet @mathias.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/1UAqs7dzr2w/

Original article

When to Rewrite from Scratch – Autopsy of a Failed Software

21 Apr 2016

It was winter of 2012. I was working as a software developer in a small team at a start-up. We had just released the first version of our software to a real corporate customer. The development finished right on schedule. When we launched, I was over the moon and very proud. It was extremely satisfying to watch the system process a couple of million unique users a day and send out tens of millions of SMS messages. By summer, the company had real revenue. I got promoted to software manager. We hired new guys. The company was poised for growth. Life was great. And then we made a huge blunder and decided to rewrite the software. From scratch.

Why We Felt That Rewrite from Scratch Was Needed?

We had written the original system with a gun to our heads. We had to race to the finish line. We weren’t having long design discussions or review meetings; we didn’t have time for such things. We would finish up a feature, get it tested quickly, and move on to the next. We had a shared office, and I remember software developers at other companies getting into lengthy design and architecture debates and arguing for weeks over design patterns.

Despite the agile-on-steroids pace, the original system wasn’t badly written and was generally well structured. There was some spaghetti code carried over from the company’s previous proof-of-concept attempts that we left untouched because it was working and we had no time. But instead of thinking about incremental improvements, we convinced ourselves that we needed to rewrite from scratch because:

  • the old code was bad and hard to maintain.
  • the “monolith java architecture” was inadequate for our future need of supporting a very large operator with 60 million mobile users and multi-site deployments.
  • I wanted to try out new, shiny technologies like Apache Cassandra, virtualization, binary protocols, service-oriented architecture, etc.

We convinced the entire organization and the board, and sadly, we got our wish.

The Rewrite Journey

The development officially began in spring of 2012, and we set the end of January 2013 as the release date. Because the vision was so grand, we needed even more people. I hired consultants and a couple of remote developers in India. However, we didn’t fully anticipate the need to maintain the original system in parallel with the new development, and we underestimated customer demands. Remember I said in the beginning we had a real customer? The customer was one of the biggest mobile operators in South America, and once our system had gained adoption among its users, they started making demands for changes and new features. So we had to continue updating the original system, albeit half-heartedly, because we were digging its grave. We dodged new feature requests from the customer as much as we could because we were going to throw the old system away anyway. This contributed to delays, and we missed our January deadline. In fact, we missed it by 8 whole months!

But let’s skip to the end. When the project was finally finished, it looked great and met all the requirements. Load tests showed that it could easily support over 100 million users. The configuration was centralized, and it had a beautiful UI tool for charts and graphs. It was time to kill the old system and replace it with the new one… until the customer said “no” to the upgrade. It turned out that the original system had gained wide adoption, and their users had come to rely on it. They wanted absolutely no risks. Long story short, after months of back and forth, we got nowhere. The project was officially doomed.

Lessons Learnt

  • You should almost never, ever rewrite from scratch. We rewrote for all the wrong reasons. While parts of the code were bad, we could easily have fixed them with refactoring if we had taken the time to read and understand the source code written by other people. We had genuine concerns about the scalability and performance of the architecture to support more sophisticated business logic, but we could have introduced those changes incrementally.
  • Systems rewritten from scratch offer no new value to the user. To the engineering team, new technology and buzzwords may sound cool but they are meaningless to customers if they don’t offer new features that the customers need.
  • We missed real opportunities while we were focused on the rewrite. We had a very basic ‘Web Tool’ that the customer used to look at charts and reports. As they became more involved, they started asking for additional features such as real-time charts, access levels, etc. Because we weren’t interested in the old code and had no time anyway, we either rejected new requests or did a bad job on them. As a result, the customer stopped using the tool and insisted on reports by email. Another loss was the chance to build a robust analytics platform that was badly needed.
  • I underestimated the effort of maintaining the old system while the new one was in development. We estimated 3-5 requests a month and got three times as many.
  • We thought our code was hard to read and maintain because we hadn’t used the proper design patterns and practices that other developers spent days discussing. It turned out that most professional code I have seen in larger organizations since is twice as bad as what we had. So we were dead wrong about that.

When Is Rewrite the Answer?

Joel Spolsky made strong arguments against rewrites and suggests that one should never do them. I’m not so sure. Sometimes incremental improvements and refactoring are very difficult, and the only way to understand the code is to rewrite it. Plus, software developers love to write code and create new things; it’s boring to read someone else’s code and try to understand their ‘mental abstractions’. But good programmers are also good maintainers.

If you want to rewrite, do it for the right reasons and plan properly for the following:

  • The old code will still need to be maintained, in some cases long after you release the new version. Maintaining two versions of the code will require huge effort, and you need to ask yourself whether you have enough time and resources to justify that, given the size of the project.
  • Think about losing other opportunities and prioritize.
  • Rewriting a big system is riskier than rewriting a smaller one. Ask yourself whether you can rewrite incrementally. We switched to a new database, became a ‘Service Oriented Architecture’, and changed our protocols to binary, all at the same time. We could have introduced each of these changes incrementally.
  • Consider the developers’ bias. When developers want to learn a new technology or language, they want to write some code in it. While I’m not against it and it’s a sign of a good environment and culture, you should take this into consideration and weigh it against risks and opportunities.

Michael Meadows made excellent observations on when a “BIG” rewrite becomes necessary:

Technical

  • The coupling of components is so high that changes to a single component cannot be isolated from other components. A redesign of a single component results in a cascade of changes not only to adjacent components, but indirectly to all components.
  • The technology stack is so complicated that future state design necessitates multiple infrastructure changes. This would be necessary in a complete rewrite as well, but if it’s required in an incremental redesign, then you lose that advantage.
  • Redesigning a component results in a complete rewrite of that component anyway, because the existing design is so fubar that there’s nothing worth saving. Again, you lose the advantage if this is the case.

Political

  • The sponsors cannot be made to understand that an incremental redesign requires a long-term commitment to the project. Inevitably, most organizations lose the appetite for the continuing budget drain that an incremental redesign creates. This loss of appetite is inevitable for a rewrite as well, but the sponsors will be more inclined to continue, because they don’t want to be split between a partially complete new system and a partially obsolete old system.
  • The users of the system are too attached to their “current screens.” If this is the case, you won’t have the license to improve a vital part of the system (the front-end). A redesign lets you circumvent this problem, since they’re starting with something new. They’ll still insist on getting “the same screens,” but you have a little more ammunition to push back.
    Keep in mind that the total cost of redesigning incrementally is always higher than doing a complete rewrite, but the impact to the organization is usually smaller. In my opinion, if you can justify a rewrite, and you have superstar developers, then do it.

Abandoning working projects is dangerous. We wasted an enormous amount of money and time duplicating functionality we already had, rejected new features, irritated the customer, and delayed ourselves by years. If you are embarking on a rewrite journey, more power to you, but make sure you do it for the right reasons, understand the risks, and plan for it.

This article was written by Umer Mansoor.



Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/ShQUvXYGG9g/

Original article

Microsoft Announces Windows 10 Build 14328 With Windows Ink, New UI

An anonymous reader writes: Windows Ink is one of the many new features rolling out to beta testers as part of Windows 10 Build 14328. The build includes the new Windows Ink Workspace, providing access to new and improved sticky notes, a sketchpad, and a new screen sketch feature. There’s also a new digital ruler you can use to create shapes and draw objects freely. The UI of the Start menu and Start screen has also been tweaked. The most-used-apps list and the all-apps UI have been merged into a single view, creating a less cluttered Start menu. Microsoft also moved the power, settings, and File Explorer shortcuts so they’re always visible. You can now bring back the full-screen all-apps list on the Start screen, and you can toggle between the all-apps view and your regular pinned apps. If you want things to feel less like a desktop PC, you can auto-hide the taskbar in tablet mode. Microsoft has detailed all of the new features found in Build 14328 in its blog post.




Original URL: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/w7656UW_OME/microsoft-announces-windows-10-build-14328-with-windows-ink-new-ui

Original article

12-inch MacBook’s three flaws that Apple could’ve fixed but didn’t

Earlier this week, Apple finally updated its svelte laptop, which launched 13 months ago. I am awe-struck by the company’s design audacity, not for brash innovation but for bumbling compromises that make me wonder who needs this thing. The 12-inch MacBook offers much with respect to thinness, lightness, and typing experience (the keyboard is clever tech). But baffling is the decision to keep the crappy 480p webcam. These days, 720p is the least a pricey computer should come with (480p is late-1990s state of the art), and is it too much to ask for 1080p or 4K when modern smartphones can shoot just that? This shortcoming, and two… [Continue Reading]


Original URL: http://feeds.betanews.com/~r/bn/~3/3JCuHCa4-sI/

Original article
