Introducing Facebook, Messenger, and Instagram Windows Apps

By Davis Fields, Product Manager, Facebook

There are many people using Facebook, Messenger and Instagram on Windows, so today we’re excited to roll out Windows 10 apps for Facebook and Messenger on desktop, and Instagram on mobile. These new apps will load quickly and easily within Windows and have the most up-to-date features.

Facebook for Windows 10
We built the new Windows 10 Facebook app so it’s fast and easy to access your favorite features. Facebook is one click away from the Start Menu, and the app starts and loads your News Feed much faster than previous Facebook desktop applications. You can stay up-to-date with Facebook through desktop notifications, and you can pin a new Facebook Live Tile which shows you the latest updates from your friends, family and Pages you follow. It’s also easy to share photos to Facebook straight from your favorite apps or File Explorer.

We included the latest Facebook features in the new app, including Reactions, stickers in comments, and a right-hand column that shows birthday & event reminders, trending topics and more. We built an in-app browser to make it easier for people to read and share multiple articles from their News Feed.


Messenger for Windows 10
To keep your conversations going wherever you are, we’re also rolling out a Messenger app. Along with many of your favorite Messenger features – like stickers, group conversations and GIFs – Messenger for Windows has native desktop notifications that make your experience richer and more complete. You can also see when you have messages waiting for you with a Live Tile.


Instagram for Windows 10
When we first built Instagram for Windows, we were focused on bringing the app’s core features to the Windows Phone community as quickly as possible. Today, we’re rolling out Instagram for Windows 10 Mobile with all of the community’s favorite features — including Instagram Direct, Explore and video.

You’ll also see that Instagram for Windows 10 Mobile supports Live Tiles, showing you updates right on your home screen.


Facebook and Messenger for Windows 10 will both be available later today in the Windows Desktop App Store, and Instagram for Windows 10 Mobile will be available later today in the Windows Phone Store. We’ll be replacing the older Facebook Windows 8 listing in the Windows Store with the new Facebook Windows 10 app, using the logo below. People who have the Windows 8 app can choose to continue using it, use Facebook in the browser, or complete a free upgrade to Windows 10. Later this year we’re excited to roll out the Windows 10 phone versions of the Facebook and Messenger apps.

We hope you enjoy!

[Image: the new Facebook app logo]


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/uBVuVnJW-so/

Original article

You probably don’t need a JavaScript framework

Article note: A number of good points made, many of which apply to frameworks for other languages too.

I’m not going to post yet another rant about “Why the JavaScript community is so bad” or anything like that, because I don’t feel that way. I’d much rather show you that it’s actually pretty simple and surprisingly fun to do things from the ground up by yourself, and introduce you to how simple and powerful the Web API and native DOM really are.

For the sake of simplicity, let’s assume your everyday framework is React but if you use something else, replace React with X framework because this will still apply to you too.

React, Virtual DOM, Webpack, TypeScript, JSX…

HOLD ON! Let’s pause for a second here and think.

Let’s start from the beginning and ask ourselves why these things exist in the first place.

Virtual DOM is efficient and performs well, but native DOM interaction is insanely fast and your users will not notice any difference. This is where you have to choose which sacrifice to make: do you add 45KB of extra size to your app to save a couple of milliseconds of DOM-manipulation time that is completely invisible to the user?

Stress-testing tools like DBMON are good to have for scientific purposes, but all too often I’ve seen these tests used in arguments between developers over which framework is best. The reality is that it is a stress test; nobody does that many DOM manipulations in a real application, and if you do, you should seriously consider using WebGL instead.

With optimized vanilla JavaScript I get 46 repaints/sec while optimized React gives 36 repaints/sec, so even in that benchmark the framework isn’t winning.

React was created to solve the issues Facebook was facing with its massive amounts of data, activity and scale. Facebook then open-sourced it and released it to the public, where many developers seem to have come to the conclusion that “Facebook’s solution to their problem is the solution to my problem”.

This – in the majority of cases – is not true.

Don’t underestimate the browser

The browser is a very powerful platform, considering most implementations have cross-platform support with a lot of additional functionality.

The native Web API and DOM API are more than enough to power your application and give you really good performance, while keeping loading times low across all platforms.

The underlying JavaScript engines like V8, SpiderMonkey and Chakra are almost scarily fast considering JavaScript is a runtime-interpreted, dynamically typed language. I’ve seen V8 rival a few compiled, statically typed languages in terms of performance, and that is not something you should take lightly. I am very thankful for the hard work these engine teams have put into their respective JavaScript engines. The web would not be as capable as it is today without them.

Solution

What you need is vanilla JavaScript and the Web API. I want you to take a look at the Web API (detailed). Skim through it for a couple of minutes and come back.

Done? Awesome. I was pretty impressed when I first looked at it in detail. A lot has changed and it’s much simpler to work with. Take MutationObserver for example: it allows you to watch parts of the DOM for mutations. This feature alone would allow you to set up simple data binding and react to changes.

Here’s an example of observing a contenteditable h1 tag:

let elementToWatch = document.querySelector('h1')
elementToWatch.contentEditable = true

let observer = new MutationObserver(mutations => {
  mutations.forEach(mutation => {
    console.log(mutation.target.textContent)
  })
})

observer.observe(elementToWatch, {
  subtree: true,
  characterData: true
})

This is a fairly contrived example (you would more likely want to observe DOM structure mutations with it), but you get the idea.

Another example is the Fetch API. It’s like XHR, but much more straightforward and powerful.
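
For example, fetching a JSON resource takes only a few lines. A minimal sketch (the /api/posts endpoint is made up):

fetch('/api/posts')
  .then(response => {
    if (!response.ok) throw new Error('HTTP ' + response.status)
    return response.json()
  })
  .then(posts => console.log(posts))
  .catch(error => console.error('Request failed:', error))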

Write functions and classes

Things that you often use in your application should of course be placed in their own functions or classes, depending on whether you need instances of something or not.

This could for example be setting up observers, walking the DOM or talking with a server-side resource.

A great example of using classes is if you want a lightweight implementation of components. An instance of a specific class can represent a component and be its view-model.
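
As a sketch of what that could look like (the Component and Counter classes and the #counter element are illustrative, not from any library):

class Component {
  constructor (element) {
    this.element = element
    this.state = {}
  }

  setState (newState) {
    Object.assign(this.state, newState)
    this.render()   // re-render whenever the state changes
  }

  render () {}      // overridden by each concrete component
}

class Counter extends Component {
  render () {
    this.element.textContent = 'Count: ' + (this.state.count || 0)
  }
}

let counter = new Counter(document.querySelector('#counter'))
counter.setState({ count: 1 })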

You could also build a tiny router and utilize the HTML5 History API.

Some may argue that this is like reinventing the wheel, but it really isn’t, because you are the one in control. You are the author of your code, and you get to work with code you feel comfortable with. You can build it however you want and support whatever browser versions you want.

Take a look at router.js, this is what their README says:

router.js is a lightweight JavaScript library that builds on route-recognizer and rsvp to provide an API for handling routes.

The library is 24.4KB minified, and 2,175 lines of code after transpilation.

In how many lines of code do YOU think YOU could write a router for your application? 100? 200? 1000? How about 20?
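
As a rough illustration of the 20-line figure, here is one way such a History API router could look (the route handlers are just placeholders):

let routes = {}

function route (path, handler) {
  routes[path] = handler
}

function render (path) {
  let handler = routes[path]
  if (handler) handler()
}

function navigate (path) {
  // call navigate('/about') from your click handlers
  history.pushState(null, '', path)
  render(path)
}

window.addEventListener('popstate', () => render(location.pathname))

route('/', () => console.log('home'))
route('/about', () => console.log('about'))
render(location.pathname)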

I’m not saying that you shouldn’t use libraries, because you should. But I want you to be critical when choosing what libraries to use. Check through the source, maybe submit a pull request and try to help reduce the size a bit if you find something that’s unnecessary.

Know when to use JS and when to use CSS

CSS is powerful too. It can handle almost every animation and transition you can imagine without any JavaScript involved.

Only animate the transform and opacity properties, never things like height or width. A bit more about this here.

Use :hover, :active and :focus as simple triggers for things like a dropdown menu. You can also style child elements through these pseudo-classes.

What about browser support?

Browser support is usually something you have to decide on a per-project basis, but if there’s a feature that isn’t available in a browser you want to support, just polyfill it. Granted, there are some features like Service Worker that can’t be polyfilled, but if a feature can’t be polyfilled, a framework can’t support it either.

An argument that often comes up when talking about polyfilling is that it increases the size of your application. That is true; however, you can dynamically polyfill based on what the client browser already supports. The follow-up argument is that dynamic polyfilling adds additional round-trips to the server, which is NOT true on the modern web, as explained below.
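
A minimal sketch of dynamic polyfilling based on feature detection (the polyfill URLs and the startApp function are made up):

function loadScript (src) {
  return new Promise((resolve, reject) => {
    let script = document.createElement('script')
    script.src = src
    script.onload = resolve
    script.onerror = reject
    document.head.appendChild(script)
  })
}

function startApp () {
  // your application entry point (placeholder)
}

let needed = []

if (!window.fetch) needed.push(loadScript('/polyfills/fetch.js'))
if (typeof Object.assign !== 'function') needed.push(loadScript('/polyfills/object-assign.js'))

Promise.all(needed).then(startApp)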

HTTP/2

The HTTP protocol has been completely rewritten. The protocol is no longer textual, but binary. Binary protocols are much more efficient and less error-prone. HTTP/2 connections are also multiplexed which – to put it simply – allows you to transfer multiple responses simultaneously within a single connection.

Additionally, HTTP/2 implements server-push. And this is the feature I was getting at earlier when I was talking about polyfilling.

Server-push allows your server to send additional data that the client may not have requested. For instance, a client requests index.html and your server responds with index.html, style.css, script.js, polyfill-1.js and polyfill-2.js.

This doesn’t solve the application-size problem, but it reduces the number of round-trips to the server. However, you can reduce the size by adding server-side configuration that matches the requesting browser version and dynamically decides which polyfills to push on a per-client basis.
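
As an illustration, here is a minimal server-push sketch using Node’s built-in http2 module; the certificate paths and pushed files are placeholders:

const http2 = require('http2')
const fs = require('fs')

const server = http2.createSecureServer({
  key: fs.readFileSync('server.key'),
  cert: fs.readFileSync('server.crt')
})

server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/') {
    // push style.css before the client even asks for it
    stream.pushStream({ ':path': '/style.css' }, (err, pushStream) => {
      if (err) return
      pushStream.respondWithFile('dist/style.css', { 'content-type': 'text/css' })
    })
    stream.respondWithFile('dist/index.html', { 'content-type': 'text/html' })
  }
})

server.listen(443)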

This is where Webpack and other module loaders/bundlers are falling behind, because bundling is bad practice with HTTP/2. The only reason bundling exists is to reduce the number of requests made to the server, but since HTTP/2 is multiplexed, implements server-push and can cache each asset independently, that reason no longer applies. Bundling on top of HTTP/2 also throws away the caching capabilities HTTP/2 offers: if you update a single line in one JS file, the user has to download and cache the entire bundle again, including all the code that hasn’t changed. Without bundling, only the changed file has to be re-downloaded, which drastically reduces bandwidth for websites with large amounts of traffic. It also makes your site much more mobile-friendly, since bandwidth and parsing cost are a huge deal on mobile.

Additionally, you could serve a bundled application for HTTP/1.x visitors and a non-bundled application for SPDY & HTTP/2 visitors.

HTTP/2 is already widely supported by web browsers. At the time of writing, 70.15% of visitors have support for the updated protocol. The problem is that hosting providers and website owners haven’t enabled it in their web server configurations yet.

Application structure

When you start a project, it’s always a good idea to plan ahead and estimate roughly what you need. If it’s a truly single page site (1 page), you definitely don’t need a framework. A single index.html, styling and an optional script is enough. If you want to use Stylus and ES2015, for example, you can npm init, install Babel and Stylus and use their command-line versions in an npm run script. You could also add watch for a snappier development environment.

package.json:

{
  ...
  "scripts": {
    "build": "babel src -d dist & stylus src/styles -o dist/styles & cp src/index.html dist/index.html",
    "dev": "watch 'npm run build' ./src"
  }
  ...
}

npm run build and you have a production-ready build.

There’s a small, but great Gist that you can look at for more snippets.

Package manager & Module loader

When your application grows large, it could be an option to use a package manager. I recommend JSPM as it is based on SystemJS which in turn is based on the standard ES6 module loader spec.

Know though that this introduces the same problem as Webpack with HTTP/2 support, so you might as well just use NPM and build a simple script that copies your dependencies during build.
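
Such a copy script can be tiny. A sketch, with a made-up dependency path:

// copy-deps.js: run it from an npm script as part of the build
const fs = require('fs')
const path = require('path')

const deps = [
  'node_modules/some-lib/dist/some-lib.min.js'
]

fs.mkdirSync('dist/vendor', { recursive: true })

deps.forEach(file => {
  fs.copyFileSync(file, path.join('dist/vendor', path.basename(file)))
})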

Using copy-paste of libraries is not a bad way of handling your dependencies. That’s how it was done before package managers for the browser existed, and the current package manager implementations (except for Bower) inject some kind of loader script into your production build because they support module loading. Module loading introduces many problems due to the sheer number of different module formats, and it does not currently play very well with HTTP/2. Until that’s fixed, you may as well stay with NPM as I mentioned earlier, or copy-paste what you need. This is of course also a question of application size and how many dependencies you have.

Conclusion

I really hope that I’ve inspired you to try out native web development at least one more time. If you for some reason don’t feel comfortable with the native Web APIs or DOM APIs and want to stick to your framework, I don’t blame you. Do what you feel comfortable with and try to make the best out of the situation!

Lastly, I want to apologize that this had to be a Slack post.

Cheers, Tom.

UPDATE:

I’ve read some comments and feedback about mentioning React. I apologize that I didn’t go more in-depth into it, but I’ll try to cover as much as possible now.

Note though, that my message for you was not that your framework is bad – it was for you to ask yourself if you really need it. Play a bit with the thought, open your mind and try to picture yourself re-building your previous project using vanilla JavaScript, native Web API and DOM API.

How much more time do you think it would take?

How much bandwidth would you save and would it be much more complicated?

I understand that React is only the view-layer, but in reality many of you are using things like Redux with it – along with other plugins. The result of that is a pretty heavy application.

React is not about performance, it’s about handling UI state, and that is hard.

Yes, but that’s not my message for you. UI state isn’t hard at the size most of your applications are. A plain JavaScript object holding the top-level state, combined with Object.observe, should be more than enough to implement automatic UI state synchronization across components. It doesn’t have to be more complex than that. I can definitely understand why large companies like Facebook and YouTube need such functionality, but it doesn’t make sense for your blog editor or e-store with 7 categories.
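
Object.observe was later withdrawn from the spec, but a Proxy gives roughly the same effect. A minimal sketch, where renderAll is a hypothetical function standing in for whatever re-renders your components:

function renderAll () {
  // hypothetical: re-render the components that read from the state object
}

let state = new Proxy({ user: null, cart: [] }, {
  set (target, key, value) {
    target[key] = value
    renderAll()
    return true
  }
})

state.user = { name: 'Ada' }   // the assignment triggers renderAll()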

Building your own implementations makes it hard for the next person that comes in to do maintenance of your work.

That is indeed a problem, but with proper commenting and documentation it shouldn’t be. It’s not like you’re writing a new framework; you’re writing wrappers for functionality you often re-use. In the end it’s plain JavaScript and Web APIs. Just because you don’t use a framework anymore doesn’t mean that you have to build something overly complex. And in the case of your web app becoming part of a Fortune 500 company, you’d probably write your own framework either way, as Facebook’s solutions would probably not solve your problems.

JavaScript size doesn’t matter that much when there are images > 500KB.

Unfortunately it does. The JavaScript engine has to parse, interpret and execute the code. Most frameworks run heavy tasks during the startup phase of the document, adding even more time before first paint. This is really bad on mobile, even with a good connection.

It will turn out bad, spaghetti code – impossible to continue development of.

For starters, you’ve got to learn how to write clean, reusable code. As mentioned earlier, document your code!

You haven’t built any complex applications.

I’ve built a few complex applications with offline-support, notifications and data that is synchronized in real-time. Don’t really know what more to answer, feels like a completely irrelevant question.

I’ve worked with Angular, React, Aurelia, Angular 2.0, jQuery and Polymer. None of them are very pleasing to work with; Aurelia is probably my favorite for its unobtrusiveness.

My message for you

My point was not that your framework sucks or anything like that; I wanted to inspire you to try out the native DOM and Web API again. There are a lot of new features and capabilities that you probably haven’t seen yet.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/PCItSm2QsBo/T03JT4FC2-F151AAF7A-13fe6f98da

Original article

WebExtensions in Firefox 48

We last updated you on our progress with WebExtensions when Firefox 47 landed in Developer Edition (Aurora), and today we have an update for Firefox 48, which landed in Developer Edition this week.

With the release of Firefox 48, we feel WebExtensions are in a stable state. We recommend developers start to use the WebExtensions API for their add-on development. Over the last release more than 82 bugs were closed on WebExtensions alone.

If you have authored an add-on in the past and are curious how it’s affected by the upcoming changes, please use the lookup tool. There is also a wiki page filled with resources to support you through the changes.

APIs Implemented

Many APIs gained improved support in this release, including: alarms, bookmarks, downloads, notifications, webNavigation, webRequest, windows and tabs.

The options v2 API is now supported so that developers can implement an options UI for their users. We do not plan to support the options v1 API, which is deprecated in Chrome. You can see an example of how to use this API in the WebExtensions examples on Github.
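
For reference, the corresponding manifest entry is small. A minimal sketch, assuming the add-on ships an options.html page (browser_style opts into Firefox’s default styling):

"options_ui": {
  "page": "options.html",
  "browser_style": true
}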


In Firefox 48 we pushed hard to make the WebRequest API a solid foundation for privacy and security add-ons such as Ghostery, RequestPolicy and NoScript. With the current implementation of the onErrorOccurred function, it is now possible for Ghostery to be written as a WebExtension.

The addition of reliable origin information was a major requirement for existing Firefox security add-ons performing cross-origin checks such as NoScript or uBlock Origin. This feature is unique to Firefox, and is one of our first expansions beyond parity with the Chrome APIs for WebExtensions.
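
As a rough sketch of what that enables, a listener can compare the Firefox-specific originUrl field against the request URL to spot cross-origin requests; the logging here is purely illustrative:

browser.webRequest.onBeforeRequest.addListener(details => {
  // originUrl tells you which page or script caused the request
  let origin = details.originUrl ? new URL(details.originUrl).hostname : null
  let target = new URL(details.url).hostname
  if (origin && origin !== target) {
    console.log('cross-origin request from ' + origin + ' to ' + target)
  }
}, { urls: ['<all_urls>'] })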

Although requestBody support is not in Firefox 48 at the time of publication, we hope it will be uplifted. This change to Gecko is quite significant because it will allow NoScript’s XSS filter to perform much better as a WebExtension, with huge speed gains (20 times or more) in some cases over the existing XUL and XPCOM extension for many operations (e.g. form submissions that include file uploads).

We’ve also had the chance to dramatically increase our unit test coverage again across the WebExtensions API, and now our modules have over 92% test coverage.

Content Security Policy Support

By default WebExtensions now use a Content Security Policy, limiting the location of resources that can be loaded. The default policy for Firefox is the same as Chrome’s:

"script-src 'self'; object-src 'self';"

This has many implications, such as the following: eval will no longer work, inline JavaScript will not be executed and only local scripts and resources are loaded. To relax that and define your own, you’ll need to define a new CSP using the content_security_policy entry in the WebExtension’s manifest.

For example, to load scripts from example.com, the manifest would include a policy configuration that would look like this:

"content_security_policy": "script-src 'self' https://example.com; object-src 'self'"

Please note: this will be a backwards incompatible change for any Firefox WebExtensions that did not adhere to this CSP. Existing WebExtensions that do not adhere to the CSP will need to be updated.

Chrome compatibility

To improve the compatibility with Chrome, a change has landed in Firefox that allows an add-on to be run in Firefox without the add-on id specified. That means that Chrome add-ons can now be run in Firefox with no manifest changes using about:debugging and loading it as a temporary add-on.

Support for WebExtensions with no add-on id specified in the manifest is being added to addons.mozilla.org (AMO) and our other tools, and should be in place on AMO for when Firefox 48 lands in release.

Android Support

With the release of Firefox 48 we are announcing Android support for WebExtensions. WebExtensions add-ons can now be installed and run on Android, just like any other add-on. However, because Firefox for Android makes use of a native user interface, anything that involves user interface interaction is currently unsupported (similar to existing extensions on Android).

You can see the full list of APIs supported on Android in the WebExtensions documentation on MDN; these include alarms, cookies, i18n and runtime.

Developer Support

In Firefox 45 the ability to load add-ons temporarily was added to about:debugging. In Firefox 48, several exciting enhancements have been added to about:debugging.

If your add-on fails to load for some reason in about:debugging (most commonly due to JSON syntax errors), then you’ll get a helpful message appearing at the top of about:debugging. In the past, the error would be hidden away in the browser console.


The error still remains in the browser console, but it is now also visible on the same page where loading was triggered.


Debugging

You can now debug background scripts and content scripts in the debugging tools. In this example, to debug background scripts I loaded the bookmark-it add-on from the MDN examples. Next, click “Enable add-on debugging”, then click “debug”.


You will need to accept the incoming remote debugger session request. Then you’ll have a Web Console for the background page. This allows you to interact with the background page. In this case I’m calling the toggleBookmark API.


This will call the toggleBookmark function and bookmark the page (note that the bookmark icon is now blue). If you want to debug the toggleBookmark function, just add the debugger statement at the appropriate line. When you trigger toggleBookmark, you’ll be dropped into the debugger.

You can now debug content scripts. In this example I’ve loaded the beastify add-on from the MDN examples using about:debugging. This add-on runs a content script to alter the current page by adding a red border.

All you have to do to debug it is insert the debugger statement into your content script, open up the Developer Tools debugger and trigger it.
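
Such a content script can be as small as the sketch below, modeled on the beastify example (the file would need to be listed under content_scripts in the manifest):

// content-script.js
document.body.style.border = '5px solid red'

debugger   // execution pauses here once the Developer Tools debugger is open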


You are then dropped into the debugger ready to start debugging the content script.

Reloading

As you may know, restarting Firefox and adding in a new add-on can be slow, so about:debugging now allows you to reload an add-on. This removes the add-on and then re-enables it, so that you don’t have to keep restarting Firefox. This is especially useful for changes to the manifest, which are not automatically refreshed. It also resets UI buttons.

In the following example the add-on just calls setBadgeText to add “Test” onto the browser action button (in the top right) when you press the button added by the add-on.
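
For reference, the background script behind that behaviour could be as simple as this sketch:

// background.js
chrome.browserAction.onClicked.addListener(() => {
  chrome.browserAction.setBadgeText({ text: 'Test' })
})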


Hitting reload for that add-on clears the state for that button and reloads the add-on from the manifest, meaning that after a reload, the “Test” text has been removed.


This makes developing and debugging WebExtensions really easy. Coming soon, web-ext, the command line tool for developing add-ons, will gain the ability to trigger this each time a file in the add-on changes.

There are also lots of other ways to get involved with WebExtensions, so please check them out!

Update: clarified that “no add-on id” refers to the manifest of a WebExtension.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/r2Fucnam3YU/

Original article

Intel’s Changing Future: Smartphone SoCs Broxton and SoFIA Officially Cancelled

The past two weeks have been a busy, if not tumultuous, period for Intel. Driven by continued challenges in various semiconductor markets, culminating in weaker-than-desired earnings in the most recent quarter, Intel has set out to change direction and refocus the company towards what they see as more lucrative, higher-growth markets such as the data center/server business and cellular (5G) connectivity. To get there, the company is making changes to both their product lines and their head count, with the goal, in the case of the latter, of cutting 11% of their workforce by the middle of next year.

Today’s big news out of Intel is along these lines, and with strategy and workforce news behind them, we have our first announcements on product changes that will come from Intel’s new strategy. In a report on Intel’s new strategy published by analyst Patrick Moorhead, Moorhead revealed that Intel would be radically changing their smartphone SoC plans, canceling their forthcoming Broxton and SoFIA products and in practice leaving the smartphone market for at least the time being.

Given the significance of this news we immediately reached out to Intel to get direct confirmation of the cancelation, and we can now confirm that Intel is indeed canceling both Broxton and SoFIA as part of their new strategy. This is arguably the biggest change in Intel’s mobile strategy since they first formed it last decade, representing a significant scaling back of their mobile SoC efforts. Intel’s struggles here are well documented, so this isn’t entirely surprising, but at the same time it comes relatively shortly before Broxton was set to launch. Otherwise, as it relates to Atom itself, Intel’s efforts with smaller die size and lower power cores have not ended, but there’s clearly going to be a need to reevaluate where Atom fits into Intel’s plans in the long run if it’s not going to be in phones.

For the moment Intel’s announcement leaves some ambiguity in their larger mobile plans – what will happen to SoCs for non-professional tablets, for example? – but for now we have a very clear picture of the smartphone SoC market, and how Intel will no longer be a part of it.

Intel’s full statement:

Intel is accelerating its transformation from a PC company to one that powers the cloud and billions of smart, connected computing devices. We will intensify our investments to fuel the virtuous cycle of growth in the data center, IoT, memory and FPGA businesses, and to drive more profitable mobile and PC businesses. Intel delivers a broad range of computing and connectivity technologies that are foundational to this strategy and that position us well to lead the end-to-end transition to 5G. Our connectivity strategy includes increased investment in wired and wireless communications technology for connecting all things, devices and people to the cloud, and to power the communications infrastructure behind it. We re-evaluated projects to better align to this strategy.

I can confirm that the changes included canceling the Broxton platform as well as SoFIA 3GX, SoFIA LTE and SoFIA LTE2 commercial platforms to enable us to move resources to products that deliver higher returns and advance our strategy. These changes are effective immediately.

Smartphone SoCs: The Path so Far

Anyone following Intel’s exploits in the smartphone space over the last few years has been watching them with interest on product, timeliness and execution.

We’ve interviewed and appeared on video speaking with Aicha Evans, Intel’s current corporate Vice President of the Communication and Devices Group, whose enthusiasm, energy and mantra of time to market have steered Intel into the mobile scene over the past few years, after bashfully missing an early entry. In that time, Intel has invested many billions of dollars in both SoC and modem development to claw a market from the slew of ARM-based solutions in the wild. Aside from having a process node advantage during that time, Intel has had to redevelop its microarchitecture products and radio business into something that could be efficient, performant and price competitive, all the while maintaining the high margins Intel’s overall business requires. Particularly in the radio business, the bread and butter of the CVP, Intel acquired and merged several companies to expand its radio portfolio, including the CDMA assets of VIA Telecom announced as recently as Q4 2015, as well as Infineon Wireless (modem/RF) and Silicon Hive (ISP).

As admitted by Intel, the first few generations were rough, either resting on their laurels or not having a complete solution. Earlier this decade Intel used a ‘contra-revenue’ strategy, investing into OEMs that would buy their chips, causing operating losses for the mobile division of $3.1 billion in 2013 and $4.2 billion in 2014 with a much lower revenue stream. Intel subsequently combined the financial reports of their mobile and consumer PC businesses into a new Client Computing Division, bringing all CPU/SoC development under a single roof but also obfuscating the investments and losses behind a high performing, high margin part of the company.


(Image Courtesy Tweakers.net)

Thus Intel’s big wins in the smartphone space have been rather limited: they haven’t had a win in any particularly premium devices, and long term partners have been deploying mid-range platforms in geo-focused regions. Perhaps the biggest recipient has been ASUS, with the ever popular ZenFone 2 creating headlines when it was announced at $200 with a quad-core Intel Atom, LTE, 4GB of DRAM and a 5.5-inch 1080p display. Though not quite a premium product, the ZenFone 2 was very aggressively priced and earned a lot of attention for both ASUS and Intel over just how many higher-end features were packed into a relatively cheap phone.

Meanwhile, just under two years ago, in order to address the lower-end of the market and to more directly compete with aggressive and low-margin ARM SoC vendors, Intel announced the SoFIA program. SoFIA would see Intel partner with the Chinese SoC vendors Rockchip and Spreadtrum, working with them to design cost-competitive SoCs using Atom CPU cores and Intel modems, and then fab those SoCs at third party fabs. SoFIA was a very aggressive and unusual move for Intel that acknowledged that the company could not compete in the low-end SoC space in a traditional, high-margin Intel manner, and that as a result the company needed to try something different. The first phones based on the resulting Atom x3 SoCs launched earlier this year, so while SoFIA has made it to the market it looks like that presence will be short-lived.

Overall, Intel’s strategy of ‘Time To Market’ in order to generate revenue in a fast-paced market makes sense – if you are late, then you are behind on performance and efficiency, and no one will buy the chips. However, TTM has drawbacks if the chip comes without the features it needs, and the end result has seen Intel always playing catch-up in one form or another, hoping that their strategy would win over customers. Intel got serious about mobile, but it would appear it hasn’t been enough.

Intel’s Leaving the Trail: Broxton & SoFIA Cancelled

Intel’s cancelation of their entire suite of smartphone SoCs has a significant impact on the company’s overall strategy. The next generation of Intel’s in-house mobile SoCs, Broxton, was lined up to use Intel’s newest-generation 14nm Atom core, Goldmont. Goldmont has already been announced at IDF Shenzhen this year as part of the Apollo Lake netbook/low-cost PC platform, but we had been expecting it to arrive in a few handsets this year as well. Despite the fact that we assume Broxton should be in the final stages of silicon development and less than a few months out, the official word from Intel today is that the Broxton commercial platform has been cancelled, effective immediately. The resources working on the Broxton platform are being moved to areas within the company that offer better returns on investment and are more aligned with Intel’s connectivity (read: 5G) strategy.

Comparison of Intel’s Atom SoC Platforms

Core         Node    Release Year   Smartphone                Tablet                         Netbook / Notebook
Saltwell     32 nm   2011           Medfield, Clover Trail+   Clover Trail                   Cedar Trail
Silvermont   22 nm   2013           Merrifield, Moorefield    Bay Trail-T                    Bay Trail-M/D
Airmont      14 nm   2014           ‘Riverton’                Cherry Trail-T                 Braswell
Goldmont     14 nm   2016           Broxton (cancelled)       Willow Trail? / Apollo Lake?   Apollo Lake

The other side of this news is the cancellation of the SoFIA 3GX, LTE and LTE2 commercial platforms as well. SoFIA as a platform had missed its original targets and was delayed (some analysts suggest by up to a year), and in the end it was developed through agreements with Rockchip and Spreadtrum to manufacture some of the SoFIA SoCs on a less expensive process node while drawing on the expertise of these two bulk SoC vendors. We were expecting SoFIA with Intel’s 2nd-generation LTE, as well as the next microarchitecture in SoFIA, to be announced this year. As of today’s email exchange with Intel, these programs are now cancelled, again effective immediately. At this point, details on what happens to the arrangements with Rockchip and Spreadtrum are unclear (Intel declined to comment).


One of Intel and Rockchip’s current SoFIA SoCs

The Road Ahead for Intel

Intel’s announcements over the past week have included layoffs of 12,000 staff, but also a clarification of Intel’s future strategy. The five focal points are the Cloud, the Client business, Memory and FPGAs, R&D through Moore’s Law, and 5G Connectivity. These five areas are all high-margin, high-grossing and high-volume market segments. Sometimes an introspective look and an internal refocus on core strengths is a good thing, depending on how your competitors are doing, but it means shedding parts of the business that don’t meet those expectations.

For the moment at least, Intel is out of the SoC side of the smartphone market, which will allow ARM architecture based SoCs to absorb the remaining market share they didn’t have already. What’s less clear at the moment is whether this will also impact the low-cost/non-premium tablet market – as embodied by products such as the Surface 3 – as Intel is not discussing the status of the Willow Trail platform at this time. Previous comments made to Patrick Moorhead indicate that tablets are impacted, but for the moment Intel is staying mum on anything that isn’t smartphones.

Also not discussed in greater detail is Intel’s future plans for their overall Atom lineup. With Apollo Lake announced just earlier this month, it’s clear that Intel’s Atom efforts have not been cancelled entirely. We will still see the new 14nm Goldmont cores appear in low-cost PCs under Apollo Lake, most likely in several 11-to-13 inch high volume devices. However for the moment there is not an Atom core on Intel’s roadmap beyond Goldmont.

Finally, despite all of this, one key target for Intel will be the rest of the discrete modem market, which is currently Qualcomm’s domain, and the late 2015 acquisition of VIA Telecom’s CDMA assets will help. To put some perspective on this, two things: Intel recently hired Dr. Renduchintala, former Qualcomm VP of Mobile, to head up the client business, as well as Amir Faintuch, also formerly of Qualcomm, to co-manage Intel’s Platform Engineering Group. Secondly, at Mobile World Congress 2016 in February, Aicha Evans said that she wanted a big contract in 2016, otherwise we might not see her in 2017.

Source: Intel, tip-off from Patrick Moorhead via Forbes


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/uVzOFa6WIyM/intel-broxton-sofia-smartphone-socs-cancelled

Original article

Phishing apps posing as popular payment services infiltrate Google Play

Google’s efforts to police the Android app store — Google Play — are far from perfect, with malicious apps routinely slipping through its review process. Such was the case for multiple phishing applications this year that posed as client apps for popular online payment services.

Researchers from security firm PhishLabs claim that they’ve found 11 such applications since the beginning of 2016 hosted on Google Play, most of them created by the same group of attackers.

The apps are simple, yet effective. They load Web pages containing log-in forms that look like the target companies’ websites. These pages are loaded from domain names registered by the attackers, but because they are loaded inside the apps, users don’t see their actual location.



Original URL: http://www.computerworld.com/article/3063573/security/phishing-apps-posing-as-popular-payment-services-infiltrate-google-play.html#tk.rss_all

Original article

Google AI Has Access To 1.6M People’s NHS Records

Hal Hodson, reporting for New Scientist: It’s no secret that Google has broad ambitions in healthcare. But a document obtained by New Scientist reveals that the tech giant’s collaboration with the UK’s National Health Service goes far beyond what has been publicly announced. The document — a data-sharing agreement between Google-owned artificial intelligence company DeepMind and the Royal Free NHS Trust — gives the clearest picture yet of what the company is doing and what sensitive data it now has access to. The agreement gives DeepMind access to a wide range of healthcare data on the 1.6 million patients who pass through three London hospitals run by the Royal Free NHS Trust — Barnet, Chase Farm and the Royal Free — each year. This will include information about people who are HIV-positive, for instance, as well as details of drug overdoses and abortions. The agreement also includes access to patient data from the last five years. According to their original agreement, Google cannot use the data in any other part of its business.




Original URL: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/QjsCiaGlGoE/google-ai-has-access-to-16m-peoples-nhs-records

Original article

IBM offers advice on how to secure blockchain in the cloud

Cloud providers hosting blockchain secure transactions technology should take additional steps to protect their records, IBM says.

IBM’s new framework for securely operating blockchain networks, released Friday, recommends that network operators make it easy to audit their operating environments and use optimized accelerators for hashing — the generation of numbers from strings of text — and the creation of digital signatures to pump up CPU performance. 

Along with the security guidelines, IBM announced new cloud-based blockchain services designed to meet existing regulatory and security requirements. The company has worked with security experts to create cloud services for “tamper-resistant” blockchain networks, it said.



Original URL: http://www.computerworld.com/article/3062921/security/ibm-offers-advice-on-how-to-secure-blockchain-in-the-cloud.html#tk.rss_all

Original article
