Turning Drupal outside-in

Republished from buytaert.net

There has been a lot of discussion around the future of the Drupal front end both on Drupal.org (#2645250, #2645666, #2651660, #2655556) and on my blog posts about the future of decoupled Drupal, why a standard framework in core is a good idea, and the process of evaluating frameworks. These all relate to my concept of “progressive decoupling”, in which some portions of the page are handed over to client-side logic after Drupal renders the initial page (not to be confused with “full decoupling”).

My blog posts have drawn a variety of reactions. Members of the Drupal community, including Lewis Nyman, Théodore Biadala and Campbell Vertesi, have written blog posts with their opinions, as well as Ed Faulkner of the Ember community. Last but not least, in response to my last blog post, Google changed Angular 2’s license from Apache to MIT for better compatibility with Drupal. I read all the posts and comments with great interest and wanted to thank everyone for all the feedback; the open discussion around this is nothing short of amazing. This is exactly what I hoped for: community members from around the world brainstorming about the proposal based on their experience, because only with the combined constructive criticism will we arrive at the best solution possible.

Based on the discussion, rather than selecting a client-side JavaScript framework for progressive decoupling right now, I believe the overarching question the community wants to answer first is: How do we keep Drupal relevant and widen Drupal’s adoption by improving the user experience (UX)?

Improving Drupal’s user experience is a topic near and dear to my heart. Drupal’s user experience challenges led to my invitation to Mark Boulton to redesign Drupal 7, the creation of the Spark initiative to improve the authoring experience for Drupal 8, and continued support for usability-related initiatives. In fact, the impetus behind progressive decoupling and adopting a client-side framework is the need to improve Drupal’s user experience.

It took me a bit longer than planned, but I wanted to take the time to address some of the concerns and share more of my thoughts about improving Drupal’s UX (and JavaScript frameworks).

To iterate or to disrupt?

In his post, Lewis writes that the issues facing Drupal’s UX “go far deeper than code” and that many of the biggest problems found during the Drupal 8 usability study last year are not resolved with a JavaScript framework. This is true; the results of the Drupal 8 usability study show that Drupal can confuse users with its complex mental models and terminology, but it also shows how modern features like real-time previews and in-page block insertion are increasingly assumed to be available.

To date, most of our UX improvements have been based on an iterative process, one that converges on a more refined end state by removing problems in the current state. For true innovation to happen, however, we also need disruptive thinking, which is about introducing entirely new ideas: essentially removing all constraints and imagining what an ideal result would look like.

I think we need to recognize that while some of the documented usability problems coming out of the Drupal 8 usability study can be addressed by making incremental changes to Drupal’s user experience (e.g. our terminology), other well-known usability problems most likely require a more disruptive approach (e.g. our complex mental model). I also believe that we must acknowledge that disruptive improvements are possibly more impactful in keeping Drupal relevant and widening Drupal’s adoption.

At this point, to get ahead and lead, I believe we have to do both. We have to iterate and disrupt.

From inside-out to outside-in

Let’s forget about Drupal for a second and observe the world around us. Think of all the web applications you use on a regular basis, and consider the interaction patterns you find in them. In popular applications like Slack, the user can perform any number of operations to edit preferences (such as color scheme) and modify content (such as in-place editing) without incurring a single full page refresh. Many elements of the page can be changed without the user’s flow being interrupted. Another example is Trello, in which users can create new lists on the fly and then add cards to them without ever having to wait for a server response.

Contrast this with Drupal’s approach, where any complex operation requires the user to have detailed prior knowledge about the system. In our current mental model, everything begins in the administration layer at the most granular level and requires an unmapped process of bottom-up assembly. A user has to make a content type, add fields, create some content, configure a view mode, build a view, and possibly make the view the front page. If each individual step is already this involved, consider how much more difficult it becomes to traverse them in the right order to finally see an end result. While very powerful, the problem is that Drupal’s current model is “inside-out”. This is why it would be disruptive to move Drupal towards an “outside-in” mental model. In this model, I should be able to start entering content, click anything on the page, seamlessly edit any aspect of its configuration in-place, and see the change take effect immediately.

Drupal 8’s in-place editing feature is actually a good start at this; it enables the user to edit what they see without an interrupted workflow, with faster previews and without needing to find what thing it is before they can start editing.

Making it real with content modeling

Back in 2007, I wrote about a database product called DabbleDB and shared my belief that it was important to move CCK and Views into Drupal's core and learn from DabbleDB's integrated approach. DabbleDB was acquired by Twitter in 2010, but you can still find an eight-year-old demo video on YouTube. While DabbleDB's focus was different and its UX is now dated, there is still a lot we can learn from it today: (1) it shows a more integrated experience between content creation, content modeling, and creating views of content, (2) it takes more of an outside-in approach, (3) it uses far less intimidating terminology while offering very powerful capabilities, and (4) it uses a lot of in-place editing. At a minimum, DabbleDB could give us some inspiration for what a better, integrated content modeling experience could look like, with the caveat that the UX should be as effortless as possible to match modern standards.

Other new data modeling approaches with compelling user experiences have recently entered the landscape. These include back end-as-a-service (BEaaS) solutions such as Backand, which provides a visually clean drag-and-drop interface for data modeling and helpful features for JavaScript application developers. Our use cases are not quite the same, but Drupal would benefit immensely from a holistic experience for content modeling and content views that incorporates both the rich feature set of DabbleDB and the intuitive UX of Backand.

This sort of vision was not possible in 2007 when CCK was a contributed module for Drupal 6. It still wasn’t possible in Drupal 7 when Views existed as a separate contributed module. But now that both CCK and Views are in Drupal 8 core, we can finally start to think about how we can more deeply integrate the two. This kind of integration would be nontrivial but could dramatically simplify Drupal’s UX. This should be really exciting because so many people are attracted to Drupal exactly because of features like CCK and Views. Taking an integrated approach like DabbleDB, paired with a seamless and easy-to-use experience like Slack, Trello and Backand, is exactly the kind of disruptive thinking we should do.

What most of the examples above have in common are in-place editing, immediate previews, no page refreshes, and non-blocking workflows. The implications for our form and render systems of allowing configuration changes directly on the rendered page are significant. Achieving this requires robust state management and rendering on the client side as well as the server side. In my vision, Twig will provide structure for the overall page and the non-interactive portions, but more JavaScript will most likely be necessary for certain parts of the page in order to achieve the UX that all users of the web have come to expect.
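
To make that concrete, here is a rough sketch of the read-and-write round trip a decoupled (or progressively decoupled) component performs against Drupal 8's server side. It is written in Python purely for brevity rather than in the client-side JavaScript this vision actually calls for, and it assumes a hypothetical site at example.com with the core RESTful Web Services and HTTP Basic Authentication modules enabled and the node resource configured for the json format:

    import requests

    BASE = "https://example.com"   # hypothetical site
    AUTH = ("editor", "password")  # hypothetical basic-auth credentials

    # Read node 1 as JSON: the "outside" view of the content.
    node = requests.get(BASE + "/node/1?_format=json", auth=AUTH).json()
    print(node["title"][0]["value"])

    # Drupal expects a CSRF token for write operations.
    token = requests.get(BASE + "/session/token").text

    # Update the title in place; no full page rebuild is required on the client.
    requests.patch(
        BASE + "/node/1?_format=json",
        auth=AUTH,
        headers={"Content-Type": "application/json", "X-CSRF-Token": token},
        json={
            "type": [{"target_id": "article"}],  # the bundle must accompany a PATCH
            "title": [{"value": "Updated from the outside in"}],
        },
    )

In an outside-in UI, that same cycle would be triggered by clicking an element on the rendered page and editing it in place, with client-side state management keeping the page in sync instead of a full server-side re-render.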

We shouldn’t limit ourselves to this one example, as there are a multitude of Drupal interfaces that could all benefit from both big and small changes. We all want to improve Drupal’s user experience — and we have to. To do so, we have to constantly iterate and disrupt. I hope we can all collaborate on figuring out what that looks like.

Special thanks to Preston So and Kevin O’Leary for contributions to this blog post and to Wim Leers for feedback.

Continue the conversation on buytaert.net


Original URL: https://www.drupal.org/news/turning-drupal-outside-in

Original article

Canonical releases Snappy Ubuntu Core Linux image for x86-based Intel NUC DE3815TY


The Raspberry Pi is a game-changing computer. While it was primarily designed as a low-cost base on which students could learn to code, it has proven to be much more. Some consumers buy it for HTPC purposes, but more importantly, developers embrace the little computer for other projects, such as IoT.

Unfortunately for some developers, the ARM architecture and rather anemic performance make the Raspberry Pi a poor choice. While some consider ARM to be the future, I’m not so sure; x86 has been surprisingly adaptable. Today, Canonical releases an Ubuntu Core image for the x86-based Intel NUC DE3815TY. Priced around $150, this NUC is more expensive than the Pi, but it is much more powerful too, making it a better choice for developers needing an x86 platform.

“Over the last few months Canonical and Intel have been working together to create a standard platform for developers to test and create x86-based IoT solutions using snappy Ubuntu Core. The results are here today and we’re pleased to announce the availability of the Ubuntu Core images for the Intel NUC DE3815TY on our developer site”, says Thibaut Rouffineau, IoT and Ubuntu Core Evangelist at Canonical.

Rouffineau further explains, “the Intel NUC DE3815TY is an ideal IOT development platform! It’s got enough computing power to prototype for all embedded use cases with an Intel Atom Processor. It also offers a lot of IOs and configuration options: USB ports, I2C ports, 4GB eMMC and the possibility to add a wireless card, up to 8G of RAM and a 2.5 inch HDD or SSD. Now, with the availability of snappy Ubuntu Core, developers have the possibility to simply bring the rich ecosystem of Ubuntu apps onto the Intel NUC and into the embedded space. Don’t like embedded because cross-compilation is a bit painful? Development for the Intel NUC requires none of that, what will run on the developer’s machine will run on the embedded device”.

These are exciting times for Ubuntu; Snappy Core is not only the future of mobile, embedded, and the cloud, but the Ubuntu desktop too. Developers would be smart to buy the NUC DE3815TY now to gain experience with this container-focused version of the operating system, including creating Snappy apps.

If you are a developer that wants to try this Ubuntu Core 15.04 image, you can download it here. You can buy the Intel NUC — if you don’t already own one — here.


Original URL: http://feeds.betanews.com/~r/bn/~3/oDcj3bjbfoc/

Original article

A Review of the Amazon Books Store

Posted on Feb 10th, 2016 in Reviews

This week I’m in unusually sunny Seattle as a mentor at Datapalooza, a data science conference organized by my employer. While here, I thought I’d pay a visit to the first – and currently only – physical Amazon store.

Amazon Books

Amazon Books is a retail outlet located in University Village, an upscale mall in Seattle, Washington.

The type of shops in the area.

As soon as you enter the store you’re greeted by a sign that answers the question most of us would have: yes, store prices are the same as on Amazon.com. Buying in person is therefore handy for those who live in the area, assuming one can find the book they’re looking for. In this regard, Amazon Books is at a strong disadvantage compared to Amazon’s own site (naturally), but also to other physical bookstores such as Barnes & Noble.

Same prices as on Amazon.com

Book selection is in fact severely limited, even considering the modest dimensions of the retail area. Bookshelves are mostly reserved for best sellers in highly popular categories. Technical books are all but excluded from the physical store, as a clerk admitted when I asked whether, by any chance, my own book was available on site (in my defense, I prefaced the conversation with “I know it’s a long shot”).

The available books are best sellers, new releases, and books with at least four stars on Amazon’s website. Specifically, there are bookshelf areas dedicated to books rated 4.8 stars or above, books rated four stars or above, books of the month, staff picks, and highly reviewed books about current topics (e.g., with it being February, Black History Month).

4.8 stars or above

Black History Month

I found it to be a nice touch that their online “customers who bought this…” narrative has been brought down from the cloud to the physical world. Some shelves were in fact dedicated to books similar to those of a given author (e.g., John Grisham) or to a particular bestseller (e.g., Zero to One by Peter Thiel).

If you like...

Books don’t have prices on them or on the shelves. Instead the public is supposed to download the Amazon app and use it to scan the barcode available on the shelves next to each book (or on the back of the book, naturally).

What’s the price?

For those with a dead battery or who are not inclined to take their smartphone out of their pocket, there are a few scanners around the bookstore, which you can use to scan books and discover their prices.

Scanner

I asked how to check whether a particular book was available in the store, and was told that the only way to do so is to ask a member of staff. This strikes me as a rather low-tech solution from a company like Amazon. I suspect this might change, including listing current in-store inventory through their app, once more stores are added and Amazon Books becomes a chain.

On the shelf labels for each book there is often a quote from a review on Amazon.com, or some statistical tidbit (e.g., 91% of reviewers give this book five stars).

Shelf label

A recommended book

As you’d expect from the name, the shop is mainly a bookstore. The central section however is dedicated to various gadgets, mostly created by Amazon itself, such as Echo, Fire TV, Kindle Fire, etc. In this the shop reminded me a little of the Apple Store, which clearly inspired the layout of this section. Tucked away there was even a tiny section dedicated to Amazon Basics, for those who’d like to buy an HDMI cable or similar accessories, without splashing out much.

There is a decent selection of mainstream magazines, but nothing too impressive (I’d say inferior to that of most large bookstores in America or Canada). And towards the end of the store there is a fairly large section for children’s books, complete with small tables and chairs so that kids can browse/read on site.

Magazines

Adults can sit as well; their padded bench area is located by the largest window, near the magazines. A couple of secured-in-place Kindle Fires are also available for those who’d like to play with them.

The tablets by the bench area

Employees were friendly and ready to greet you at the entrance, and the general environment was fairly inviting. Overall it wasn’t what I expected, but it was a positive experience and they even managed to sell me a book (Elon Musk’s biography). With my purchase they included a complimentary orange bookmark, branded with their logo.

The checkout area

At the checkout you can enter your Amazon.com email address, and your order will automatically be added to your account; you’ll also be emailed a receipt for your purchase.

Receipt by email

In conclusion, I see a couple of strategic advantages in Amazon Books. Such stores offer Amazon.com what Apple Stores provide to Apple: namely, a showroom where customers can try products such as the Amazon Echo, Kindle Fire, Fire TV, and whatever new products the company might produce in the future.

The second advantage is actually a side effect of their limited selection approach. They are essentially offering a pre-selection service to the public. The average user could enter the store, grab a random book from the bookshelf of their interest, and they’d be almost guaranteed to go home with a book they’d enjoy. (Yes, there are fake reviews on Amazon, and bad books with high ratings, but manual filtering is almost certainly at play here.)

What the bookstore is not, at least in its current incarnation, is a monster that would justify the accusation of having first destroyed retail bookstores through their online site, only to turn it around and eliminate the leftover competition with physical stores of their own. The Amazon Books store I visited is a bit Apple Store, a bit Starbucks in terms of atmosphere. In some ways it tries to lift the Amazon brand from utilitarian to almost a luxury brand. It has a boutique-like charm, rather than the utility of a warehouse. In this choice, it leaves ample space to offline competition (a good thing, of course).

Amazon Books

An offline Amazon that would have caused serious trouble not only to bookstores but also to other types of shops would be one modeled after a Costco warehouse with the addition of high-tech automation. Amazon Books is the exact opposite. Nevertheless it is an interesting bet on Bezos’ part, and one I suspect will ultimately bear fruit (particularly if they evolve the concept further based on their data analysis of what people purchase in-store).

Unlike some pundits who mocked Amazon for going offline, I personally admire Amazon for trying new things (even if it can be argued that we’ve come full circle over the course of the last 20 years).

They are not afraid to experiment and fail fast, and I can’t help but suspect that, ultimately, that is their proverbial secret sauce.





Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/dhgj8A6jcV0/

Original article

Developer Preview: RethinkDB Now Available for Windows

We’re pleased to announce today that RethinkDB is now available for Windows. You can download a Developer Preview of our Windows build, which runs natively on Microsoft’s operating system.

Support for Windows is one of the features most frequently requested by RethinkDB users. We launched an ambitious engineering project to port RethinkDB to Windows, an undertaking that required a year of intensive development, touching nearly every part of the database.

To try today’s Developer Preview, simply download RethinkDB as an executable and run it on a Windows machine. We’re making the preview available today so that our users can start exploring RethinkDB on Windows and help us test it in the real world. You shouldn’t trust it with your data or use it in production environments yet. It’s also not fully optimized, so you might not get the same performance that you would from a stable release.
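
If you want to drive the preview from code rather than the web UI, the usual client drivers work against a Windows server just as they do elsewhere. Here is a minimal sketch using the official Python driver, assuming RethinkDB is running locally on the default client port (28015) and using the default test database:

    import rethinkdb as r

    # Connect to the locally running RethinkDB instance.
    conn = r.connect("localhost", 28015)

    # Create a table in the default "test" database and insert a document.
    r.db("test").table_create("tv_shows").run(conn)
    r.table("tv_shows").insert({"name": "Star Trek TNG", "episodes": 178}).run(conn)

    # Read everything back.
    for doc in r.table("tv_shows").run(conn):
        print(doc)

    conn.close()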



Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/wwecIZTzLeE/

Original article

OpenShot 2.0.6 (Public Beta) Released

Greetings Everyone! I am proud to release the third beta of OpenShot 2.0 (full details below). This marks the 3rd full release of OpenShot 2.0 in the past 30 days. I am working closely with testers and users to address the most critical issues as they are identified.


Installers and Downloads

If you are interested in trying out OpenShot 2.0, you are in luck! For the first time ever, we are releasing the beta installers to everyone, so feel free to grab a copy and check it out!

Smoother Animation

Animations are now silky smooth because of improved anti-aliasing support in the libopenshot compositing engine. Zooming, panning, and rotation all benefit from this change.

Audio Quality Improvements

Audio support in this new version is vastly superior to previous versions. Popping, crackling, and other related audio issues have been fixed.

Autosave

A new autosave engine has been built for OpenShot 2.0, and it’s fast, simple to configure, and will automatically save your project at a specific interval (if it needs saving). Check the Preferences to be sure it’s enabled (it will default to enabled for new users).

Automatic Backup and Recovery

Along with our new autosave engine, a new automatic backup and recovery feature has also been integrated into the autosave flow. If your project is not yet saved… have no fear, the autosave engine will make a backup of your unsaved project (as often as autosave is configured for), and if OpenShot crashes, it will recover your most recent backup on launch.


Project File Improvements

Many improvements have been made to project file handling, including relative paths for built-in transitions and improvements to temp files being copied to project folders (i.e. animated titles). Projects should be completely portable now, between different versions of OpenShot and on different Operating Systems. This was a key design goal of OpenShot 2.0, and it works really well now.

Improved Exception Handling

Integration between libopenshot (our video editing library) and openshot-qt (our PyQt5 user interface) has been improved. Exceptions generated by libopenshot are now passed to the user interface and no longer crash the application. Users are presented with a friendly error message containing some details of what happened. Of course, there is still the occasional “hard crash” which kills everything, but many, many crashes will now be avoided, and users will be better informed about what has happened.

Preferences Improvements

There are more preferences available now (audio preview settings – sample rate, channel layout, debug mode, etc…), including a new feature to prompt users when the application will “require a restart” for an option to take effect.


Improved Stability on Windows

A couple of pretty nasty bugs were fixed for Windows, although in theory they should have crashed on other platforms as well. But for whatever reason, certain types of crashes relating to threading only seem to happen on Windows, and many of those are now fixed.

New Version Detection

OpenShot will now check for the most recent released version on launch (from the openshot.org website) and discreetly prompt the user by showing an icon in the top right of the main window. This has been a requested feature for a really long time, and it’s finally here. It will also quietly give up if no Internet connection is available, and it runs in a separate thread, so it doesn’t slow anything down.

Metrics and Anonymous Error Reporting

A new anonymous metric and error reporting module has been added to OpenShot. It can be enabled / disabled in the Preferences, and it will occasionally send out anonymous metrics and error reports, which will help me identify where crashes are happening. It’s very basic data, such as “WEBM encoding error – Windows 8, version 2.0.6, libopenshot-version: 0.1.0”, and all IP addresses are anonymized, but it will be critical to help improve OpenShot over time.

Improved Precision when Dragging

Dragging multiple clips around the timeline has been improved. There were many small issues that would sometimes occur, such as extra spacing being added between clips, or transitions being slightly out of place. These issues have been fixed, and moving multiple clips now works very well.

Debug Mode

In the preferences, one of the new options is “Debug Mode”, which outputs a ton of extra info into the logs. This might only work on Linux at the moment, because it requires the capturing of standard output, which is blocked in the Windows and Mac versions (due to cx_Freeze). I hope to enable this feature for all OSes soon, or at least to provide a “Debug” version for Windows and Mac, that would also pop open a terminal/command prompt with the standard output visible.

Updated Translations

Updates to 78 supported languages have been made. A huge thanks to the translators who have been hard at work helping with OpenShot translations. There are over 1000 phrases which require translation, and seeing OpenShot run so seamlessly in different languages is just awesome! I love it!

Lots of Bug fixes

In addition to all the above improvements and fixes, many other smaller bugs and issues have been addressed in this version:

  • Prompt before overwriting a video on export
  • Fixed regression while previewing videos (causing playhead to hop around)
  • Default export format set to MP4 (regardless of language)
  • Fixed regression with Cutting / Split video dialog
  • Fixed Undo / Redo bug with new project
  • Backspace key now deletes clips (useful with certain keyboards and laptop keyboards)
  • Fixed bug on Animated Title dialog not updating progress while rendering
  • Added multi-line and unicode support to Animated Titles
  • Improved launcher to use distutils entry_points
  • Renaming launcher to openshot-qt
  • Improved Mac build scripts (version # parsing)
  • Fixed many issues with keyboard shortcuts

Known Issues

  • WebM export crash on Windows
  • DVD export crash on some versions of Linux
  • Some translation issues with certain languages. Please review your language translations here.
  • Some users have reported issues launching OpenShot on Mac
  • Some stability issues with Windows – still haven’t nailed down the cause… but it’s probably related to threading and a couple more race conditions that only seem to happen on Windows.


Get Involved

Please report bugs and suggestions here: https://github.com/OpenShot/openshot-qt/issues. Please contribute language translations here (if you are a non-English speaking user): https://translations.launchpad.net/openshot/2.0/+translations.

Stay tuned…


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/HqDd70UlR7s/openshot-206-beta-3-released.html

Original article

Intel to shut down renegade Skylake overclocking with microcode update

Intel Skylake die shot. (Image credit: Intel)

Skylake processors that were discovered to be readily overclockable are having their speeds locked back down, with Intel shipping a new microcode update for the chips that closes a loophole introduced in Intel’s latest generation of processors, according to PC World.

Intel has a funny relationship with overclocking. On the one hand, the company doesn’t like it. Historically, there have been support issues—unscrupulous companies selling systems with slower processors that are overclocked, risking premature failure, overheating, and just plain overcharging—and more fundamentally, if you want a faster processor, Intel would prefer that you spend more money to get it. On the other hand, the company knows that overclocking appeals greatly to a certain kind of enthusiast, one that will show some amount of brand loyalty and generally advocate for Intel’s products. Among this crowd there’s also a certain amount of street cred that comes from having the fastest chip around.

To address this duality, Intel does a couple of things. Most of its processors have a fixed maximum clock multiplier, capping the top speed that they’ll operate at. But for a small price premium, certain processors have “K” versions that remove this cap, allowing greater flexibility for PC owners to run their chips at above the rated maximum speed. This way, most processors can’t be readily overclocked, but for those enthusiasts who really want to, an official option exists (although even with these chips, Intel recommends that people do not overclock).

All was well and good until Skylake came along. Late last year, Asrock shipped a firmware update that enabled overclocking of even non-K Skylake processors.

What made Skylake special is a change in how it generates the various clock frequencies that it uses for the processor’s different components. A processor’s clock speed is governed by two things: a base clock speed and a multiplier. A 3.5GHz processor, for example, might have a base clock of 100MHz and a 35× multiplier. In the heyday of overclocking, both the base clock and multiplier were often adjusted (though sometimes the processor had to be tricked into offering the full range of multiplier options). Base clock overclocking, however, was always a little more susceptible to problems, because the processor’s speed isn’t the only thing that’s generated by that base clock. Other system clocks, such as the one used by the PCI bus, also tend to be driven by the base clock. As such, boosting the base clock meant that everything—not just the processor, but also RAM, the interface to your video card, the disk controller—had to operate out of spec.

Adjusting the multiplier was, therefore, the safer, better option for all but the most extreme overclocking. And that’s what people did, until Intel introduced the K processors with its Sandy Bridge line. With Sandy Bridge, multiplier overclocking was off-limits unless you bought a K processor. The base clock was the only option, with the problematic side-effect of making everything run too fast.

Skylake, however, changes how the base clock is used. In Skylake processors, the clock signals used for things like the integrated PCIe bus and memory controller aren’t derived from the base clock. They’re separate, meaning that the base clock can be freely altered without pushing any other part of the system out of specification. This is what Asrock’s firmware update took advantage of. Non-K Skylake processors have locked multipliers, but with the base clock now freely adjustable, a wide range of overclocking options became available. The only downside to this was that it meant disabling the integrated GPU, which presumably retains some base clock dependence.
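
To make the clock arithmetic concrete, here is a small illustrative sketch; the 100MHz base clock and 35x multiplier come from the example above, while the 120MHz figure is simply a hypothetical overclocked value:

    # A core's frequency is the base clock (BCLK) multiplied by the multiplier.
    def core_frequency_ghz(base_clock_mhz, multiplier):
        return base_clock_mhz * multiplier / 1000.0

    # Stock settings from the example above: 100 MHz x 35 = 3.5 GHz.
    print(core_frequency_ghz(100, 35))  # 3.5

    # A non-K Skylake chip keeps its multiplier locked at 35, but if the
    # firmware allows the base clock to be raised (as Asrock's did), the core
    # speed rises with it -- e.g. a hypothetical 120 MHz BCLK:
    print(core_frequency_ghz(120, 35))  # 4.2

    # Pre-Skylake, raising BCLK also pushed PCIe, memory and other clocks out
    # of spec because they were derived from the same base clock; Skylake
    # decouples them, which is what made this loophole so attractive.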

Intel’s update changes the processor’s microcode—programmable code embedded in the processor that gets updated both by system firmware and operating systems—to, in some unspecified way, prevent altering the base clock in ways Intel does not want. While it will take some time for motherboard vendors to update their firmwares and actually propagate the new microcode to end-user systems, it means that the end is nigh for simple overclocking of Skylake processors. Some holdouts may stick with old, overclocking-capable firmware, but it will become increasingly hard to buy new motherboards that support the old capability.

None of this should have much impact on the K processors, which remain unlocked—and which will continue to attract around a 10-percent price premium for the privilege.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/7TMF72y_aKM/

Original article

NetworkMiner 2.0 adds keyword filtering


NETRESEC has shipped NetworkMiner 2.0, the latest edition of its powerful network forensic analysis tool.

The update does a better job of interpreting your network traffic, with new parsers for SMB2 and Modbus/TCP, file extraction from SMB writes, and improved parsing for SMTP, FTP and DNS traffic.

A keyword filter in the Files, Parameters and DNS tabs allows you to quickly zoom in on important network data.

The program now extracts website favicon images and displays them in the HOSTS tab.

This release also sees the project move from SourceForge to NETRESEC’s site. (Older editions are still available there, but no longer supported.)

While this all sounds technical, it’s still extremely easy for absolutely anyone to use. If you want to understand how your network is being used, it could be as easy as downloading and unzipping the program, running NetworkMiner.exe as an administrator, choosing a network adapter and clicking Start.

Open a browser window, visit a site or two, and NetworkMiner sniffs the traffic and analyses it in various ways: DNS queries, hosts accessed, files downloaded; even images are extracted from the traffic and displayed in thumbnail form.

Live network sniffing isn’t always reliable, but if this doesn’t work then the program can also analyze PCAP files with the same level of detail.
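
NetworkMiner itself is a GUI tool, but to give a feel for the kind of information it extracts from a capture, here is a rough sketch (not NetworkMiner’s own API) that uses the Python scapy library to list the DNS query names in a PCAP file; capture.pcap is just a placeholder filename:

    from scapy.all import rdpcap
    from scapy.layers.dns import DNSQR

    # Load a previously captured trace (placeholder filename).
    packets = rdpcap("capture.pcap")

    # Print every DNS query name seen in the capture -- roughly what
    # NetworkMiner presents in its DNS tab.
    for pkt in packets:
        if pkt.haslayer(DNSQR):
            print(pkt[DNSQR].qname)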

The free build delivers all this with minimal restrictions, while a $900 Professional edition adds even more (GeoIP localisation, browser tracing, host coloring, export to CSV/Excel/XML, and more).

NetworkMiner 2.0 is available for Windows XP and later.


Original URL: http://feeds.betanews.com/~r/bn/~3/Y6tySScJaDo/

Original article

Identity thieves obtain 100,000 electronic filing PINs from IRS system

The Internal Revenue Service was the target of an attack that used stolen Social Security numbers and other taxpayer data to obtain PINs that can be used to file tax returns electronically.

The attack occurred in January and targeted an IRS Web application that taxpayers use to obtain their so-called Electronic Filing (E-file) PINs. The app requires taxpayer information such as name, Social Security number, date of birth and full address.

Attackers attempted to obtain E-file PINs corresponding to 464,000 unique SSNs using an automated bot, and did so successfully for 101,000 SSNs before the IRS blocked it.

The personal taxpayer data used during the attack was not obtained from the IRS, but was stolen elsewhere, the agency said in a statement. The IRS is notifying affected taxpayers via mail and will monitor their accounts to protect them from tax-related identity theft.



Original URL: http://www.computerworld.com/article/3031846/security/identity-thieves-obtain-100000-electronic-filing-pins-from-irs-system.html#tk.rss_all

Original article
