Cheaper Vizio 4K TVs With Built-in Google Cast Are Here

An anonymous reader cites a Mashable report: Cutting-edge technology always comes at a premium for early adopters, but it never stays premium for long. After launching its new P-Series 4K TVs with built-in Google Cast last month, Vizio is bringing the feature to its lower-priced TVs. The 2016 M-Series 4K TVs start at $849.99 for a 50-inch and rocket up to $3,999.99 for an 80-inch. They support high dynamic range (HDR) with Dolby Vision. The E-Series 4K TVs are much cheaper. They start at $469.99 for a 43-inch and go up to $1,699.99 for a 70-inch. Vizio’s also selling non-4K full HD E-series TVs with SmartCast starting at $229.99 for a 32-inch and going up to $369.99 for a 43-inch.

Read more of this story at Slashdot.

Original URL:

Original article

Digital textbooks prove controversial in Huntsville, Alabama school district

Right as New York City is on the cusp of cutting a $30 million deal with Amazon to provide e-books for its schools, some parents in Huntsville, Alabama are finding that replacing print with digital textbooks for its schools has been more expensive than advertised.

Alabama Today and blogger Russell Winn have looked into costs for the four-year-old digital textbook program. Winn quotes letters from school board representative Elisa Ferrell explaining that a lack of textbook funding had resulted in many of the print textbooks in use degenerating into “a state of advanced age and disrepair,” necessitating a complete update of textbooks at that point anyway. There wasn’t enough funding to do a complete upgrade of printed books, but going all-digital at that point would save a little money and help “catapult our students into the digital age that they will be living in as adults.”

Ferrell explained that sets of print textbooks had been provided to classrooms at the beginning of the program for use by students with vision issues and other problems working with digital. However, over the four years of the program so far, students had become much more comfortable with digital, and as a result they would be moving the physical textbooks out of the classroom and into the schools’ libraries so that students who still needed them could check them out and take them home.

Contrary to the social media and blog traffic, we are not holding some sort of dystopian book burning party somewhere. We are retaining the textbooks, making them accessible, and making it possible for a student to check them out and use them at home. Of course, any student with an IEP or a 504 who has vision challenges, has accommodations in place for those challenges.

Winn is skeptical of some of Ferrell’s claims, noting that between the state of Alabama and the city of Huntsville, the school district has sufficient funding to cover the complete cost of printed textbooks if it wanted, and it could have staggered the upgrades to spread out the cost. He also suggests that some of her claims (such as when she said one school had previously been using 40-year-old biology textbooks) were pure hyperbole.

When Winn ran the numbers, he found that although the digital textbooks themselves do cost slightly less than buying physical textbooks ($3.13 million as opposed to $3.35 million per year), the cost of buying new HP and Lenovo computers and iPads on which to use those digital textbooks adds almost $5 million in costs, bringing the grand total to a hair under $8 million per year. Rather than saving a little money, it’s actually costing over twice as much.
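Winn’s arithmetic is easy to check. A quick sketch (figures in millions of USD, taken from the post; the ~$4.8M hardware figure is an assumption chosen to match the “almost $5 million” and “a hair under $8 million” totals above):

```python
# Rough per-year cost comparison from Winn's numbers (millions of USD).
digital_textbooks = 3.13   # annual cost of digital textbooks (from the post)
print_textbooks = 3.35     # annual cost of print textbooks (from the post)
devices = 4.8              # HP/Lenovo computers and iPads (assumed figure)

digital_total = digital_textbooks + devices
print(f"All-digital total: ${digital_total:.2f}M per year")
print(f"Ratio vs. print-only: {digital_total / print_textbooks:.2f}x")
```

The ratio comes out to roughly 2.4x, consistent with the “over twice as much” claim.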

He also notes there seems to be some disagreement between school board representatives as to whether those print textbooks are going to be kept after all. It might be as few as 10% of them, which means fewer than 10% of students would be able to check them out and take them home.

It’s worth noting that nowhere in Winn’s blog post or the Alabama Today article is there any discussion of the fact that the computers and iPads would be useful for a lot more than just reading digital textbooks. They could also be used for educational software, writing, and research, so tallying their expense only against textbook costs is a little disingenuous.

Furthermore, I think Ferrell has a point when she talks about how much better prepared for college the students who’ve gone through the new digital curriculum have been. Surely helping graduate a class of better-prepared students is worth a little extra outlay, right?

That said, some of the comments show that students are still having problems using the digital textbooks. One parent notes that their middle-school son does enjoy having a school computer and most of the learning that takes place with it; however, he’s disappointed the paper textbooks are going away, because they’re easier to use. The parent writes:

The digital books are not that user friendly. Social studies and science, in particular are difficult to use to find specific information without turning each page one at a time (can’t search for a question or term easily). He doesn’t enjoy the digital version of these at all. They are frustrating to use for normal “answer the questions at the end of the chapter” assignments. He would much rather have a book.

This harkens back to an article I covered last month about some of the usability problems with digital textbooks in a college environment. It’s not surprising that grade-school students would have similar issues. Will more than 10% of the students need to use those paper textbooks? If so, it’s possible another solution might be needed.

Overall, this controversy feels like it has more to do with local school board politics than with e-books in and of themselves. As they say about academic politics, it’s all the more cutthroat because so little is actually at stake. If it weren’t digital textbooks, there would be some other controversy over something else.

Nonetheless, this is instructive in showing other districts the kinds of things they’ll have to take into account when they come to replacing paper textbooks with digital versions themselves. Will they keep some paper copies around for convenience? If so, how many, and how will they be made available?

Sooner or later, every school district in the nation is going to have to face this question. The ones who haven’t yet should probably start paying attention and learning from the ones who have.

The post Digital textbooks prove controversial in Huntsville, Alabama school district appeared first on TeleRead News: E-books, publishing, tech and beyond.

Mesosphere open-sources data center management software

Cloud computing startup Mesosphere has decided to open source its platform for managing data center resources, with the backing of over 60 tech companies, including Microsoft, Hewlett Packard Enterprise and Cisco Systems.

Derived from its Datacenter Operating System, which Mesosphere set out to build as an operating system that treats all servers in a data center as a single pool of resources, the open-source DC/OS offers capabilities for container operations at scale and single-click, app-store-like installation of over 20 complex distributed systems, including HDFS, Apache Spark, Apache Kafka, and Apache Cassandra, the company said in a statement Tuesday.

AWS Device Farm Update – Remote Access to Devices for Interactive Testing

Last year I wrote about AWS Device Farm and told you how you can use it to Test Mobile Apps on Real Devices. As I described at the time, AWS Device Farm allows you to create a project, identify an application, configure a test, and then run the test against a variety of iOS and Android devices.

Remote Access to Devices
Today we are launching a new feature that provides you with remote access to devices (phones and tablets) for interactive testing. You simply open a new session on the desired device, wait (generally a minute or two) until the device is available, and then interact with the device via the AWS Management Console.

You can gesture, swipe, and interact with devices in real time directly through your web browser as if the device was on your desk or in your hand. This includes installing and running applications!

Here’s a quick demo. I click on Start a new session to begin:

Then I search for a device of the desired type, including the desired OS version, select it, and name my session. I click on Confirm and start session to proceed:

Then I wait for the device to become available (about 30 seconds in this case):


Once the device is available I can see the screen and access it through the Console:

I can interact with the Kindle Fire using my mouse. Perhaps my app is not behaving as expected when the language is set to Latin American Spanish. I can change the Kindle Fire’s settings with a couple of clicks:

I can install my app on the Kindle Fire by clicking on Upload and choosing my APK.

My session can run for up to 60 minutes. After that time it will stop automatically.
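Console clicks aside, remote access sessions can also be driven programmatically through Device Farm’s API. A minimal sketch using boto3 (the ARNs below are placeholders; note that Device Farm’s API endpoint lives in us-west-2):

```python
def remote_session_params(project_arn: str, device_arn: str, name: str) -> dict:
    # Request parameters for devicefarm.create_remote_access_session.
    return {"projectArn": project_arn, "deviceArn": device_arn, "name": name}

def start_remote_session(project_arn: str, device_arn: str, name: str) -> dict:
    # Requires boto3 and AWS credentials; shown as a sketch, not executed here.
    import boto3
    df = boto3.client("devicefarm", region_name="us-west-2")
    resp = df.create_remote_access_session(
        **remote_session_params(project_arn, device_arn, name)
    )
    # Poll get_remote_access_session until its status reaches RUNNING.
    return resp["remoteAccessSession"]

params = remote_session_params(
    "arn:aws:devicefarm:us-west-2:123456789012:project:EXAMPLE",  # placeholder
    "arn:aws:devicefarm:us-west-2::device:EXAMPLE",               # placeholder
    "latin-american-spanish-check",
)
print(params["name"])
```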

Available Now
This new feature is available in beta form now, with a wide selection of Android phones and tablets. We will be adding iOS devices later this year, along with additional control over the device configuration and (virtual) location.

AWS Device Farm comes with a one-time free trial of 250 device minutes. After that you are charged $0.17 per device minute. Or you can pay $250 per slot per month for unmetered access (slots are the units of concurrent execution).
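A quick back-of-the-envelope calculation from those prices shows where an unmetered slot starts to pay off (sketch; rates as quoted above):

```python
RATE_PER_MINUTE = 0.17   # USD per device minute, after the 250-minute trial
SLOT_PER_MONTH = 250.00  # USD per slot per month, unmetered

break_even = SLOT_PER_MONTH / RATE_PER_MINUTE
print(f"Break-even: ~{break_even:.0f} device minutes per month "
      f"(~{break_even / 60:.1f} hours)")
```

In other words, an unmetered slot is the cheaper option once you run more than about a day’s worth of device time per month on it.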



Amazon S3 Transfer Acceleration

Several AWS teams are focused on simplifying and accelerating the process of moving on-premises data to the cloud. We started out with the very basic PUT operation and multipart upload in the early days. Along the way we gave you the ability to send us a disk, and made that process even easier by launching the AWS Import/Export Snowball at last year’s AWS re:Invent (read AWS Import/Export Snowball – Transfer 1 Petabyte Per Week Using Amazon-Owned Storage Appliances to learn more).

Today we are launching some important improvements to Amazon S3 and to Snowball, both designed to further simplify and accelerate your data migration process. Here’s what’s new:

Amazon S3 Transfer Acceleration – This new feature accelerates Amazon S3 data transfers by making use of optimized network protocols and the AWS edge infrastructure. Improvements are typically in the range of 50% to 500% for cross-country transfer of larger objects, but can go even higher under certain conditions.

Larger Snowballs in More Regions – A new, larger-capacity (80 terabyte) Snowball appliance is now available. In addition, the appliances can now be used with two additional US Regions and two additional international Regions.

Amazon S3 Transfer Acceleration
The AWS edge network has points of presence in more than 50 locations. Today, it is used to distribute content via Amazon CloudFront and to provide rapid responses to DNS queries made to Amazon Route 53. With today’s announcement, the edge network also helps to accelerate data transfers in to and out of Amazon S3. It will be of particular benefit to you if you are transferring data across or between continents, have a fast Internet connection, use large objects, or have a lot of content to upload.

You can think of the edge network as a bridge between your upload point (your desktop or your on-premises data center) and the target bucket. After you enable this feature for a bucket (by checking a checkbox in the AWS Management Console), you simply switch to the bucket’s accelerated endpoint; no other configuration changes are necessary! After you do this, your TCP connections will be routed to the best AWS edge location based on latency. Transfer Acceleration will then send your uploads back to S3 over the AWS-managed backbone network using optimized network protocols, persistent connections from edge to origin, fully-open send and receive windows, and so forth.
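As a sketch of what this looks like in code (assuming boto3; the bucket name is illustrative, and the endpoint form shown is the one S3’s documentation gives for accelerated transfers):

```python
def accelerated_endpoint(bucket: str) -> str:
    # Documented accelerated endpoint form: <bucket>.s3-accelerate.amazonaws.com
    return f"https://{bucket}.s3-accelerate.amazonaws.com"

def enable_acceleration(bucket: str) -> None:
    # Requires boto3 and AWS credentials; equivalent to the console checkbox.
    import boto3
    s3 = boto3.client("s3")
    s3.put_bucket_accelerate_configuration(
        Bucket=bucket,
        AccelerateConfiguration={"Status": "Enabled"},
    )

print(accelerated_endpoint("jbarr-public"))
```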

Here’s how I enable Transfer Acceleration for my bucket (jbarr-public):

I simply point my upload tool and/or code at the bucket’s accelerated endpoint, then initiate the upload and take advantage of S3 Transfer Acceleration with no further effort. I can use the same rule to construct a URL that can be used for accelerated downloads.

Because network configurations and conditions vary from time to time and from location to location, you pay only for transfers where Transfer Acceleration can potentially improve upload performance. Pricing for accelerated transfers begins at $0.04 per gigabyte uploaded. As is always the case with S3, there’s no up-front fee or long-term commitment.

You can use our new Amazon S3 Transfer Acceleration Speed Comparison to get a better understanding of the value of Transfer Acceleration in your environment. I ran it from my Amazon WorkSpace in the US West (Oregon) Region and this is what I saw:

As you can see, the opportunity for improvement grew in rough proportion to the distance between my home Region and the target.

To learn more about this feature, read Getting Started with Amazon S3 Transfer Acceleration in the Amazon S3 Developer Guide. It is available today in all Regions except Beijing (China) and AWS GovCloud (US).

Larger Snowballs in More Regions
Many AWS customers are now using AWS Import/Export Snowball to move large amounts of data in and out of the AWS Cloud. For example, here’s a Snowball on-site at GE Oil & Gas:

Ben Wilson (CTO of GE Oil & Gas) posted the picture to LinkedIn with the following caption:

“PIGs and Snowballs – a match made in heaven!! AWS Snowball 25 TB of Pipeline PIG Data to be managed at AWS. That is our GE PIG we pulled some of the data from. It’s always fun to try new AWS features and try to break them!!”

Today we are making Snowball available in four new Regions: AWS GovCloud (US), US West (Northern California), Europe (Ireland), and Asia Pacific (Sydney). We expect to make Snowball available in the remaining AWS Regions in the coming year.

The original Snowball appliances had a capacity of 50 terabytes. Today we are launching a newer appliance with 80 terabytes of capacity. If you are transferring data in or out of the US East (Northern Virginia), US West (Oregon), US West (Northern California), or AWS GovCloud (US) Regions using Snowball you can choose the desired capacity. If you are transferring data in or out of the Europe (Ireland) or Asia Pacific (Sydney) Regions, you will use the 80 terabyte appliance. Here’s how you choose the desired capacity when you are creating your import job:

To learn more about Snowball, read the Snowball FAQ and the Snowball Developer Guide.


Capital One open sources Cloud Custodian AWS resource management tool

Capital One is a huge organization with lots of compliance issues related to being a financial services company. It also happens to be an Amazon Web Services customer, and it needed a tool to set rules and policies around AWS usage in an efficient way. Last July it started developing the tool that would become Cloud Custodian, and today it announced at an AWS event in Chicago that it was…

GitLab Partners with DigitalOcean to make CI more affordable

Apr 19, 2016

Today, we are excited to announce our partnership with DigitalOcean, the world’s simplest cloud infrastructure provider. Together, GitLab and DigitalOcean want to help developers eliminate the scaling challenges that come with Continuous Integration (CI) around speed, security, and cost. To help alleviate these challenges, GitLab partnered with DigitalOcean to provide free Runners to all hosted projects, as well as discount codes for GitLab Community Edition and Enterprise Edition users.

GitLab + DigitalOcean

Eliminating Scaling Challenges with DigitalOcean

At GitLab, we have a new release every month on the 22nd, so we respect the importance of agile development and timely testing. That is why we built Continuous Integration directly into our platform. Our continuous integration allows you to run a number of tests as you prepare to deploy your software. Naturally, we are heavy users of our own software. We run about 16 tests in parallel. While the benefits of testing are undeniable, we realized that running several parallel tests requires a lot of CPU. The need to scale servers up to meet testing demands often forces developers to sacrifice speed, security, and/or money.
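Splitting a suite into jobs within the same stage is how this kind of parallelism is expressed in .gitlab-ci.yml; a minimal sketch (job names and scripts are illustrative):

```yaml
stages:
  - test

# Jobs in the same stage run in parallel, one per available Runner.
spec:models:
  stage: test
  script: bundle exec rspec spec/models

spec:controllers:
  stage: test
  script: bundle exec rspec spec/controllers
```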

We want to help solve the challenges arising from agile development processes and growing code bases. “Together with DigitalOcean, we’ve taken the challenges of expensive and slow build processes head on—changing the way developers approach the build process,” said Sid Sijbrandij, our CEO and co-founder. “Complementing our collaborative platform, DigitalOcean is uniquely suited to help us solve these problems as it can spin up new, provisioned servers in under a minute, an industry record. Developers can have the needed resources simply and immediately for testing and launching their code.”

To further support the needs of developers, in late March we introduced a new autoscaling feature to our existing GitLab Runner. GitLab Runner is a hosted application that processes builds. This new feature, called GitLab Runner Autoscale, enables you to automatically spin up new instances (and wind them down) as needed. This dynamic availability makes it faster, safer and more affordable for you to run your builds in parallel. While instances can be hosted at all the major cloud providers, DigitalOcean is uniquely suited to support this autoscaling feature. With the fastest start in the industry, DigitalOcean can make new instances available in under a minute versus up to eight minutes on a leading cloud platform.
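Autoscaling is configured in the Runner’s config.toml using the docker+machine executor with the DigitalOcean machine driver; a minimal sketch (the token, image, and region values are placeholders):

```toml
concurrent = 16

[[runners]]
  name = "autoscale-runner"
  url = "https://gitlab.com/ci"
  token = "RUNNER_TOKEN"        # placeholder
  executor = "docker+machine"
  [runners.docker]
    image = "ruby:2.2"          # illustrative build image
  [runners.machine]
    IdleCount = 2               # machines kept warm for incoming builds
    IdleTime = 1800             # seconds before an idle machine is removed
    MachineDriver = "digitalocean"
    MachineName = "runner-%s"
    MachineOptions = [
      "digitalocean-image=coreos-stable",
      "digitalocean-ssh-user=core",
      "digitalocean-access-token=DO_API_TOKEN",  # placeholder
      "digitalocean-region=nyc3",
    ]
```

IdleCount and IdleTime control the trade-off between cost (paying for idle Droplets) and build start latency.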

Benefits to Developers

DigitalOcean has made tremendous strides in supporting the development community with a simple and scalable cloud computing solution. DigitalOcean’s dedication to simplicity and scale perfectly aligns with GitLab’s focus on delivering a code collaboration tool that makes it easier for developers to code, test, and deploy together. Our goal in partnering with DigitalOcean was to make continuous integration fast, secure, and cost-effective. We hope that this partnership will offer the following benefits:

  • Speed: You no longer have to wait to test your code. Running tests can take multiple hours, especially if it’s the end of the sprint and your tests are the last one in the queue. Now, you can scale your Runners up to test in parallel.
  • Security: Test your code in a controlled and safe environment. After the machine runs the test, it’s discarded to ensure security.
  • Affordability: Save money by only paying for servers when you use them.

Ben Uretsky, CEO and co-founder of DigitalOcean, is equally excited about the benefits this partnership brings to developers. “We want to make it easier for teams building and scaling distributed applications in the cloud,” he said. “This partnership with GitLab enhances the open-source, collaborative approach to development.”

Start using GitLab + DigitalOcean Today

If you’re not yet a customer, simply create an account to get free Runners for your public and private repositories.

For existing users, there’s great news: your Runners are powered by DigitalOcean and are completely free.

For GitLab Community Edition users, use the promotional code GitLab10 to receive a $10 credit* when creating a new DigitalOcean account.

For GitLab Enterprise Edition users, you’ll receive an email with a unique promo for a $250 credit* to use to host your own Runners on DigitalOcean.

*Note: Promotion code available for new DigitalOcean customers only.

Need help setting up your Runners?

For help setting up your GitLab Runners, read the tutorial documentation, How to setup GitLab Runner on DigitalOcean.
