Anant Agarwal is a professor of Electrical Engineering and Computer Science at MIT and the CEO of edX. A massive open online course platform founded by MIT and Harvard, edX offers numerous courses on a wide variety of subjects. As of 2014 edX had more than 4 million students taking more than 500 courses online. The organization has developed open-source software called Open edX that powers edX courses and is freely available online. Mr. Agarwal has agreed to take some time out of his schedule and answer your questions about edX and the future of learning. As usual, ask as many as you’d like, but please, one question per post.
My colleague Jon Fritz wrote the blog post below to introduce you to some new features of Amazon EMR.
Today we are announcing Amazon EMR release 4.3.0, which adds support for Apache Hadoop 2.7.1, Apache Spark 1.6.0, Ganglia 3.7.2, and a new sandbox release for Presto (0.130). We have also enhanced our maximizeResourceAllocation setting for Spark and added an AWS CLI Export feature to generate a create-cluster command from the Cluster Details page in the AWS Management Console.
New Applications in Release 4.3.0

Amazon EMR provides an easy way to install and configure distributed big data applications in the Hadoop and Spark ecosystems on managed clusters of Amazon EC2 instances. You can create Amazon EMR clusters from the Create Cluster page in the AWS Management Console, the AWS Command Line Interface (CLI), or an SDK with the EMR API. In the latest release, we added support for new versions of the following applications:
Spark 1.6.0 – Spark 1.6.0 was released on January 4th by the Apache Software Foundation, and we’re excited to include it in Amazon EMR within four weeks of its open-source GA. This release includes several new features, such as compile-time type safety using the Dataset API (SPARK-9999), machine learning pipeline persistence using the Spark ML Pipeline API (SPARK-6725), a variety of new machine learning algorithms in Spark ML, and automatic memory management between execution and cache memory in executors (SPARK-10000). View the release notes or learn more about Spark on Amazon EMR.
Presto 0.130 – Presto is an open-source, distributed SQL query engine designed for low-latency queries on large datasets in Amazon S3 and HDFS. This is a minor version release, with optimizations to SQL operations and support for S3 server-side and client-side encryption in the PrestoS3Filesystem. View the release notes or learn more about Presto on Amazon EMR.
Enhancements to the maximizeResourceAllocation Setting for Spark

Currently, Spark on your Amazon EMR cluster uses the Apache defaults for Spark executor settings, which are 2 executors with 1 core and 1 GB of RAM each. Amazon EMR provides two easy ways to instruct Spark to utilize more resources across your cluster. First, you can enable dynamic allocation of executors, which allows YARN to programmatically scale the number of executors used by each Spark application, and adjust the number of cores and RAM per executor in your Spark configuration. Second, you can specify maximizeResourceAllocation, which automatically sets the executor size to consume all of the resources YARN allocates on a node and the number of executors to the number of nodes in your cluster (at creation time). These settings allow a single Spark application to consume all of the available resources on a cluster. In release 4.3.0, we have enhanced this setting by automatically increasing the Apache defaults for driver program memory based on the number of nodes and node types in your cluster (more information about configuring Spark).
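For reference, maximizeResourceAllocation is enabled through a configuration classification supplied at cluster creation time. A minimal sketch of the JSON:

```json
[
  {
    "Classification": "spark",
    "Properties": {
      "maximizeResourceAllocation": "true"
    }
  }
]
```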
AWS CLI Export in the EMR Console

You can now generate an EMR create-cluster command representative of an existing cluster with a 4.x release using the AWS CLI Export option on the Cluster Details page in the AWS Management Console. This allows you to quickly create a cluster using the Create Cluster experience in the console, and easily generate the AWS CLI script to recreate that cluster from the AWS CLI.
Launch an Amazon EMR Cluster with Release 4.3.0 Today

To create an Amazon EMR cluster with release 4.3.0, select release 4.3.0 on the Create Cluster page in the AWS Management Console, or use the release label emr-4.3.0 when creating your cluster from the AWS CLI or an SDK with the EMR API.
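For example, a create-cluster invocation might look along these lines (the cluster name, instance type, and instance count here are illustrative, not prescriptive):

```shell
aws emr create-cluster --name "My 4.3.0 cluster" \
  --release-label emr-4.3.0 \
  --applications Name=Hadoop Name=Spark Name=Presto-Sandbox Name=Ganglia \
  --instance-type m3.xlarge \
  --instance-count 3 \
  --use-default-roles
```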
Perceptions of Microsoft among some home users can be quite negative at the moment, likely due to privacy concerns with Windows 10, which is a legitimate issue. With that said, the company is still the darling of the enterprise. After all, Windows 7 and Office are integral tools for many successful businesses.
Windows and Office aside, another wildly popular business tool from Microsoft is Azure. This cloud platform is great, but some companies wisely prefer an on-premises solution. Enter Azure Stack. Today, Microsoft announces that the first Technical Preview of its hybrid cloud/datacenter product is coming this week. Bigger news, arguably, is that Canonical’s operating system, Ubuntu Linux, will play a key role. Once again, Microsoft is leveraging open source — noticing a trend here, folks?
“Today, Microsoft announced the first Technical Preview of Microsoft Azure Stack with Ubuntu. Azure Stack is based on Microsoft’s Azure public cloud model and allows organizations to deliver Azure services from their own datacenter. By including Ubuntu, Azure Stack supplies developers and customers the same great Ubuntu experience they are used to on Azure. Canonical is working with Microsoft to bring more choice and portability to the cloud, by having Ubuntu as a part of Azure Stack”, says John Zannos, Canonical.
Mike Neil, Corporate Vice President, Enterprise Cloud, Microsoft explains, “through a series of Technical Previews, Microsoft will add services and content such as OS images and Azure Resource Manager templates to help customers start taking advantage of Azure Stack. Also, Azure has 100s of such applications and components on GitHub and as the corresponding services come to Azure Stack, users can take advantage of those as well. In this context, we are already seeing early excitement from partners — especially open source partners — like Canonical, who are contributing validated Ubuntu Linux images that enable open source applications to work well in Azure Stack environments”.
Since Linux-based operating systems, like Ubuntu, are already quite popular on the traditional Azure platform, they should see continued success on the Stack variant as well. Canonical’s operating system in particular is very robust and stable, making it a smart choice. Of course, other Linux-based OS images will be available too.
Ultimately, it will be interesting to see how Azure Stack will be received by businesses. With that said, its performance cannot truly be evaluated until a final version is released. We will have a better glimpse into the future this Friday, however, when the Technical Preview is released.
Do you think Azure Stack will prove popular in the enterprise? Tell me in the comments.
The concept is to follow the MVC pattern, where Models are observed by Views, which are in turn observed by Controllers. A user interacts with a View; that event is handled by the Controller, which updates the Model accordingly. The View observes the Model and handles those events by rendering itself accordingly. In the aforementioned post, there is no controller, nor events, so I’m going to keep my example as simplistic as the one provided. In a future post, I plan on expanding this example to utilize Models and Controllers.
Below is my parallel universe version of Per Harald Borgen’s post:
Why so simple? Because I’ve found that when I’m trying to learn a new technology, even the simplest features can add unnecessary complexity.
If you’re overwhelmed with the wide variety of frameworks and libraries being published, this tutorial is for you.
To claim you’ll be building an app is actually an exaggeration. It’s only a profile page, as you can see below. (The image is taken randomly from http://lorempixel.com/)
Step 1: Splitting the page into components
An application is built around components; everything you see on the screen is a part of a View component. Before we start coding, it’s a good idea to create a sketch of the views, as we’ve done above.
The main component, which wraps all other components, is marked in red. Since we’re targeting this application to run in a browser, we’ll call this one document.body or Body.
Once we’ve figured out that Body is our main viewport, we’ll need to ask ourselves: which view is a direct child of the Body?
I’d argue that the name and the profile image can be grouped into one View, which we’ll call Profile (green rectangle), and the Hobbies section can be another View (blue rectangle).
The structure of our components can also be visualized like this:
We could split the Views further, like ProfileImage and HobbyItem, though we’ll stop here for the sake of simplicity.
Step 2: Hello World
Before you begin coding, you’ll need to download the source file. It’s available at this GitLab repo. Simply copy or clone it and open the index.html file in the browser. (The full code is available in the finished_project.html file.)
I’ve set up the file properly, so you’ll see no links to any unnecessary libraries in the head section of the file. Your code will start at line 9.
Each View object can have as many members as you want, though the most important one is the render method. In the render method, you’ll pass in a reference to the DOM element that will contain the output of this view and then append the contents of the view to that container. In our case, we simply want a div tag with the text “Hello World”.
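A minimal sketch of what that Hello view might look like (the exact markup in the original post is my guess):

```javascript
// A minimal "Hello" view: a plain object whose render method receives the
// DOM element that should contain the view's output.
var Hello = {
  render: function (container) {
    var div = document.createElement('div');
    div.textContent = 'Hello World';
    container.appendChild(div);
  }
};
```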
Then follow up with this inside the App object:
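One way the App object might look (a sketch; the Hello view is repeated so the snippet stands alone):

```javascript
// The Hello view from the previous step, repeated for completeness.
var Hello = {
  render: function (container) {
    var div = document.createElement('div');
    div.textContent = 'Hello World';
    container.appendChild(div);
  }
};

// App.main is where we decide which views render where on the page.
var App = {
  main: function () {
    Hello.render(document.body);
  }
};

// Run main once the page has loaded. (The typeof guard only matters when
// the snippet is loaded outside a browser.)
if (typeof window !== 'undefined') {
  window.addEventListener('load', App.main);
}
```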
This is how we specify where on the page we want the Hello view to be rendered. This is done by adding a load event handler to the window that calls Hello.render, passing in the document.body as the container parameter. I throw this into a function called main, but that is just a convention that I use due to familiarity with C-based programming languages.
The syntax shouldn’t look too weird; we’re just declaring an object that uses the DOM API and adhering to a couple of tried-and-tested software development patterns.
Load the page in a browser and you’ll see ‘Hello World’ printed out on the screen.
Step 3: More components
Let’s add some more views. Looking back at our application overview, we see that the App component has two views called Profile and Hobbies.
Let’s write out these two views. We’ll begin with Profile:
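A sketch of the Profile view (the name and image URL are placeholders of my own; the image is just a lorempixel link like the one mentioned earlier):

```javascript
// The Profile view: same shape as Hello, just more content in render.
var Profile = {
  render: function (container) {
    var heading = document.createElement('h1');
    heading.textContent = 'John Doe';
    container.appendChild(heading);

    var image = document.createElement('img');
    image.src = 'http://lorempixel.com/150/150';
    container.appendChild(image);
  }
};
```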
There is actually nothing new here. Just a bit more content in the render function than there was in the Hello view.
Let’s write the Hobbies component:
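Something like this (the hobby names are hard-coded placeholders for now; real data arrives in a later step):

```javascript
// The Hobbies view: a heading plus a hard-coded list for now.
var Hobbies = {
  render: function (container) {
    var heading = document.createElement('h2');
    heading.textContent = 'Hobbies';
    container.appendChild(heading);

    var list = document.createElement('ul');
    ['Skiing', 'Cooking', 'Reading'].forEach(function (hobby) {
      var item = document.createElement('li');
      item.textContent = hobby;
      list.appendChild(item);
    });
    container.appendChild(list);
  }
};
```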
If you refresh the page again though, you won’t see any of these components.
This is because nothing has told these views to render to the screen. We need to update our main function to render the Profile and Hobbies views instead of the Hello view.
This is what we’ll need to do:
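A sketch of the updated wiring, assuming the Profile and Hobbies views defined above are in scope:

```javascript
// Updated main: render Profile and Hobbies instead of Hello.
var App = {
  main: function () {
    Profile.render(document.body);
    Hobbies.render(document.body);
  }
};

// Run main once the page has loaded. (The typeof guard only matters when
// the snippet is loaded outside a browser.)
if (typeof window !== 'undefined') {
  window.addEventListener('load', App.main);
}
```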
If you refresh the page again you’ll see that all the content appears on the page. (Though the image won’t appear, as we’ve only added a dummy link to it.)
Step 4: Get the data
Now that we have the basic structure set up, we’re ready to add the correct data to our project.
A good practice when implementing the MVC pattern is something called a one-directional data flow, meaning that the data is passed down from parent to child components.
Above all the components, paste in the following code:
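The member names below match the ones used later in the post; the values themselves are placeholders of my own:

```javascript
// A plain data object standing in for an API response.
var data = {
  name: 'John Doe',
  profileImage: 'http://lorempixel.com/150/150',
  hobbyList: ['Skiing', 'Cooking', 'Reading']
};
```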
You can imagine this data being fetched from an API or something.
The next thing you’ll need to do is add this data to the App component as its model.
Data in the MVC pattern is maintained in the Model objects and mutated by the Controller objects. In this simplistic example, I’m going to overlook that aspect and just deal with the single object. In a future post, I’ll explore a more complex data model for the application.
Below, you’ll see how you pass the data into the views, by simply changing the constructor method of the views to accept a model, we’ll initialize our view objects and then render them.
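One way the updated App wiring might look, assuming Profile is now a constructor function that accepts the model (a sketch, not the original post’s exact code):

```javascript
// main constructs the Profile view with the data object as its model,
// then renders it into the body.
var App = {
  main: function () {
    var profile = new Profile(data);
    profile.render(document.body);
  }
};

// Run main once the page has loaded. (The typeof guard only matters when
// the snippet is loaded outside a browser.)
if (typeof window !== 'undefined') {
  window.addEventListener('load', App.main);
}
```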
Now we’re able to access this data from within the View objects through model.[member-name]. We’ll also restructure the views a bit to make Hobbies a child of the Profile view. That way, the application only needs to initialize an instance of the Profile view, and the data ends up where it needs to be.
We use the profileImage and name in the Profile view while only the hobbyList array is passed into the Hobbies view. This is because the Hobbies component doesn’t need the rest of the data; it’s simply going to display a list of hobbies.
Let’s look at how we’ll need to rewrite the Profile view in order to use the data we’ve passed down to it:
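A sketch of the rewritten Profile view as a constructor function (assuming a Hobbies constructor that takes the hobby array, as described above):

```javascript
// Profile stores its model and creates a child Hobbies view, passing down
// only the hobbyList array.
function Profile(model) {
  this.model = model;
  this.hobbies = new Hobbies(model.hobbyList);
}

Profile.prototype.render = function (container) {
  var heading = document.createElement('h1');
  heading.textContent = this.model.name;
  container.appendChild(heading);

  var image = document.createElement('img');
  image.src = this.model.profileImage;
  container.appendChild(image);

  // Delegate the hobby list to the child view.
  this.hobbies.render(container);
};
```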
I removed the IIFE pattern and added the Hobbies view to the Profile’s members. We then access the members of the passed-in model in order to present the resource’s current state to the user.
In the Hobbies component we’ll need to use a technique for looping through the list of hobbies.
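A sketch of the Hobbies view, where the model is just the hobby array passed down from Profile:

```javascript
// Hobbies stores the hobby array as its model and maps each entry to a
// list item.
function Hobbies(model) {
  this.model = model;
}

Hobbies.prototype.render = function (container) {
  var list = document.createElement('ul');
  // map returns a new array of list items, one per hobby.
  this.model.map(function (hobby) {
    var item = document.createElement('li');
    item.textContent = hobby;
    return item;
  }).forEach(function (item) {
    list.appendChild(item);
  });
  container.appendChild(list);
};
```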
As you can see, we’re looping through the hobbies array stored in model. We’re using the array prototype method map, which creates a new array based on whatever we return within the callback function.
Notice that I didn’t create a key attribute on the list items. There’s no reason to at this point in the application’s development, as nothing outside of the Hobbies view is concerned with this list of data as it’s presented on the screen.
The full code for the finished project is in the finished_project.html file in the repo mentioned in Step 2.