What’s new on Drupal.org? – February 2016

Read our Roadmap to understand how this work falls into priorities set by the Drupal Association with direction and collaboration from the Board and community.

Drupal.org updates

Drupal 6 is now end of life

[Image: Druplicon]

As of February 24, 2016, Drupal 6 is at end of life (EOL). To support the end of life process for this version of Drupal, Association staff are ensuring that users are prompted to update to the final version of Drupal 6, and that site owners are made aware of the implications of EOL. Because the community at large no longer supports Drupal 6, site owners are encouraged to move to Drupal 7 or 8, or to work with one of the Drupal 6 Long Term Support vendors.

Our board’s director at-large election

In February, self-nominations opened for a single director at-large position on the Association board. This is one of two such seats on the board that are decided by community election.

Now that nominations have closed, you can review candidate profiles and watch the Meet the Candidate webinars. Voting will run from March 7-18, and will be promoted to all eligible voters with a banner on Drupal.org.

Composer support for Drupal

[Logo credit: WizardCat.com]

In February, we continued the community initiative to support Composer on Drupal.org. Over the last several months, we’ve been working closely with members of the community, as well as with the maintainer of Composer and Packagist.org.

Drupal.org will provide two Composer endpoints: one for Drupal 7 projects, and one for Drupal 8. These separate endpoints will allow Drupal.org to translate Drupal-style contrib version numbers into the true semantic versioning that Composer expects. This will also help support a transparent move toward semantic versioning for contrib projects on Drupal.org.
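For illustration only (this is not the actual Drupal.org mapping code, and the version patterns are assumptions on my part), here is a minimal sketch of the kind of translation such an endpoint might perform on Drupal-style contrib version strings:

```python
import re

def drupal_to_semver(version: str) -> str:
    """Illustrative mapping of a Drupal-style contrib version (e.g. '8.x-2.3',
    '7.x-1.0-beta2') to a semver-like string that Composer can sort.
    A sketch only, not the real Drupal.org translation logic."""
    match = re.match(r'^(?P<core>\d+)\.x-(?P<major>\d+)\.(?P<minor>\d+|x)(?:-(?P<extra>.+))?$', version)
    if not match:
        raise ValueError(f"Unrecognised Drupal version string: {version}")
    minor = '0' if match.group('minor') == 'x' else match.group('minor')
    semver = f"{match.group('major')}.{minor}.0"
    if match.group('extra'):
        semver += f"-{match.group('extra')}"   # carry dev/alpha/beta/rc suffixes through
    return semver

# Example: '8.x-2.3' -> '2.3.0', '7.x-1.0-beta2' -> '1.0.0-beta2'
print(drupal_to_semver('8.x-2.3'), drupal_to_semver('7.x-1.0-beta2'))
```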

We hope to provide a beta of this Drupal.org Composer support in March.

Manage your Drupal.org notifications

In January, we updated Drupal.org to allow users to follow many more content types, including Forum Topics, Posts, Case Studies, and documentation Book Pages. However, now that users are able to receive email notification for activity on a wide variety of content on Drupal.org, we also needed to provide some better tools for managing those notifications.

[Screenshot: Follow notifications UI]

Every user’s profile now has a new tab called “Notifications,” which allows the user to configure, per content type, whether they want to receive email notifications when following that content, or simply add it to their tracker (the “Your Posts” part of the Dashboard).

More insight into organization contributions

For some time now, the Drupal.org Marketplace has displayed recent issue credits attributed to the organizations that provide Drupal services to our ecosystem. However, there’s been no way to see the contributions attributed to non-service providers (that is, organizations that don’t sell Drupal services).

Until now. The drupal.org/organizations view now shows all organizations ranked by attributed issue credits, whether they’re a Drupal service provider, a customer, or even a community organization like a DrupalCamp. To promote this greater visibility, we’ve also highlighted the top 10 contributing customers of Drupal. We hope to continue to improve the many ways we track and display user and organization contributions, and would love your feedback.

Content restructure: Documentation

In 2015, we did a tremendous amount of work developing a comprehensive content strategy for Drupal.org. In 2016, we’re making great strides in implementing that strategy through a content restructure. The main idea behind the content restructure is the reorganization of our content into sections that can each have their own maintainership, governance, and related content.

Perhaps the most critical new section is Documentation. The new Documentation section will bring easier navigation, maintainership of documentation guides, better related content, and more relevant metadata to documentation on Drupal.org. We’re doing usability testing of our prototype of this new section now.

Sustaining support and maintenance

DrupalCon Asia: A landmark moment

[Image: DrupalCon Asia logo]

February was also the time for DrupalCon Asia—at last! The event, held at IIT Bombay (IITB) in Mumbai, hosted over 1,000 attendees, 82% of whom were first-time DrupalCon attendees. With Association staff both in Mumbai and providing remote support, it was an incredibly challenging, colorful, rewarding, and enlightening event.

We’re proud to have brought the Asian community the DrupalCon they deserved. The region is second only to the United States in Drupal.org users, and we expect tremendous things from this vibrant community.

Jobs.drupal.org—tweeting the best opportunities in Drupal

Fostering and promoting the Drupal ecosystem is an important part of the Association’s mission. Drupal powers the best of the web, from single-installation to large-scale enterprise sites. Drupal Jobs provides companies a way to find the best Drupal talent, and provides Drupal developers a way to find open source-friendly careers. We recently updated jobs.drupal.org so that new positions are tweeted from the @jobsindrupal handle.

DrupalCI: troubleshooting a PHP 5.6 garbage collection bug

[Image: DrupalCI logo]

Our infrastructure team investigated a random failure in Core branch tests that happened when testing against PHP 5.6. Whenever a random failure like this appears on our testing infrastructure, it’s important for us to track down the cause. Is it a problem in testbot configuration, an unusual kind of regression introduced in code or in the test itself, or a bug in an underlying part of the stack, like PHP itself? For now, we believe the issue is related to PHP garbage collection, and we’re trying to reproduce it so that we can open a bug for PHP.

Upgrading Drupal.org servers to PHP 5.4

PHP 5.3 limitations may have caused some recent instability (like the outages on the weekend of February 14). Because of this instability, we upgraded our production and pre-production servers to PHP 5.4 (https://www.drupal.org/node/2670036). We’d previously held off on this upgrade due to two sub-sites: qa.drupal.org (QA) and groups.drupal.org (GDO). However, we used this opportunity to statically archive QA (now that it is superseded by DrupalCI), and to upgrade outdated parts of GDO to work with PHP 5.4.

———

As always, we’d like to say thanks to all the volunteers who work with us, and to the Drupal Association Supporters, who made it possible for us to work on these projects.

Follow us on Twitter for regular updates: @drupal_org, @drupal_infra


Original URL: https://www.drupal.org/drupalorg/blog/whats-new-on-drupalorg-february-2016

Original article

Avoiding a pointless blockchain project

How to determine if you’ve found a real blockchain use case

Blockchains are overhyped. There, I said it. From Sibos to Money20/20 to cover stories of The Economist and Euromoney, everyone seems to be climbing aboard the blockchain wagon. And no doubt like others in the space, we’re seeing a rapidly increasing number of companies building proofs of concept on our platform and/or asking for our help.

As a young startup, you’d think we’d be over the moon. Surely now is the time to raise a ton of money and build that high performance next generation blockchain platform we’ve already designed. What on earth are we waiting for?

I’ll tell you what. We’re waiting to gain a clearer understanding of where blockchains genuinely add value in enterprise IT. You see, a large proportion of these incoming projects have nothing to do with blockchains at all. Here’s how it plays out. Big company hears that blockchains are the next big thing. Big company finds some people internally who are interested in the subject. Big company gives them a budget and tells them to go do something blockchainy. Soon enough they come knocking on our door, waving dollar bills, asking us to help them think up a use case. Say what now?

As for those who do have a project in mind, what’s the problem? In many cases, the project can be implemented perfectly well using a regular relational database. You know, big iron behemoths like Oracle and SQL Server, or for the more open-minded, MySQL and Postgres. So let me start by setting things straight:

If your requirements are fulfilled by today’s relational databases, you’d be insane to use a blockchain.

Why? Because products like Oracle and MySQL have decades of development behind them. They’ve been deployed on millions of servers running trillions of queries. They contain some of the most thoroughly tested, debugged and optimized code on the planet, processing thousands of transactions per second without breaking a sweat.

And what about blockchains? Well, our product was one of the first to market, and has been available for exactly 5 months, with a few thousand downloads. Actually it’s extremely stable, because we built it off Bitcoin Core, the software which powers bitcoin. But even so, this entire product category is still in its diapers.

So am I saying that blockchains are useless? Absolutely not. But before you embark on that shiny blockchain project, you need to have a very clear idea of why you are using a blockchain. There are a bunch of conditions that need to be fulfilled. And if they’re not, you should go back to the drawing board. Maybe you can define the project better. Or maybe you can save everyone a load of time and money, because you don’t need a blockchain at all.

1. The database

Here’s the first rule. Blockchains are a technology for shared databases. So you need to start by knowing why you are using a database, by which I mean a structured repository of information. This can be a traditional relational database, which contains one or more spreadsheet-like tables. Or it can be the trendier NoSQL variety, which works more like a file system or dictionary. (On a theoretical level, NoSQL databases are just a subset of relational databases anyway.)

A ledger for financial assets can be naturally expressed as a database table in which each row represents one asset type owned by one particular entity. Each row has three columns containing: (a) the owner’s identifier such as an account number, (b) an identifier for the asset type such as “USD” or “AAPL”, and (c) the quantity of that asset held by that owner.

Databases are modified via “transactions” which represent a set of changes to the database which must be accepted or rejected as a whole. For example, in the case of an asset ledger, a payment from one user to another is represented by a transaction that deducts the appropriate quantity from one row, and adds it to another.
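To make that concrete, here is a toy, in-memory sketch of such an asset ledger and an all-or-nothing payment; the names and structure are invented for illustration, not taken from any particular product:

```python
# Toy in-memory asset ledger: one row per (owner, asset) pair holding a quantity.
# Illustrative only; a real ledger would live in a database with ACID transactions.
ledger = {("alice", "USD"): 100, ("bob", "USD"): 25}

def transfer(owner_from: str, owner_to: str, asset: str, qty: int) -> None:
    """Apply a payment as one atomic change: deduct from one row, add to another.
    Either both changes happen or the whole transaction is rejected."""
    if qty <= 0:
        raise ValueError("quantity must be positive")
    if ledger.get((owner_from, asset), 0) < qty:
        raise ValueError("insufficient balance")   # reject the transaction as a whole
    ledger[(owner_from, asset)] -= qty
    ledger[(owner_to, asset)] = ledger.get((owner_to, asset), 0) + qty

transfer("alice", "bob", "USD", 40)
print(ledger)   # {('alice', 'USD'): 60, ('bob', 'USD'): 65}
```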

2. Multiple writers

This one’s easy. Blockchains are a technology for databases with multiple writers. In other words, there needs to be more than one entity which is generating the transactions that modify the database. Do you know who these writers are?

In most cases the writers will also run “nodes” which hold a copy of the database and relay transactions to other nodes in a peer-to-peer fashion. However transactions might also be created by users who are not running a node themselves. Consider for example a payments system which is collectively maintained by a small group of banks but has millions of end users on mobile devices, communicating only with their own bank’s systems.

3. Absence of trust

And now for the third rule. If multiple entities are writing to the database, there also needs to be some degree of mistrust between those entities. In other words, blockchains are a technology for databases with multiple non-trusting writers.

You might think that mistrust only arises between separate organizations, such as the banks trading in a marketplace or the companies involved in a supply chain. But it can also exist within a single large organization, for example between departments or the operations in different countries.

What do I specifically mean by mistrust? I mean that one user is not willing to let another modify database entries which it “owns”. Similarly, when it comes to reading the database’s contents, one user will not accept as gospel the “truth” as reported by another user, because each has different economic or political incentives.

4. Disintermediation

So the problem, as defined so far, is enabling a database with multiple non-trusting writers. And there’s already a well-known solution to this problem: the trusted intermediary. That is, someone who all the writers trust, even if they don’t fully trust each other. Indeed, the world is filled with databases of this nature, such as the ledger of accounts in a bank. Your bank controls the database and ensures that every transaction is valid and authorized by the customer whose funds it moves. No matter how politely you ask, your bank will never let you modify their database directly.

Blockchains remove the need for trusted intermediaries by enabling databases with multiple non-trusting writers to be modified directly. No central gatekeeper is required to verify transactions and authenticate their source. Instead, the definition of a transaction is extended to include a proof of authorization and a proof of validity. Transactions can therefore be independently verified and processed by every node which maintains a copy of the database.
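As a rough sketch of what a proof of authorization can look like in practice, the snippet below signs an invented transaction payload with an Ed25519 key (using the Python cryptography package) and lets any holder of the public key verify it independently; the payload format is made up purely for illustration:

```python
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# A toy "transaction" whose authorization is a digital signature by the payer.
# Any node holding the payer's public key can verify it independently;
# no central gatekeeper is needed to vouch for its origin.
payer_key = ed25519.Ed25519PrivateKey.generate()
payload = b"alice pays bob 40 USD"           # invented wire format, illustration only
signature = payer_key.sign(payload)          # proof of authorization travels with the tx

def verify(public_key, payload: bytes, signature: bytes) -> bool:
    """Return True if the signature proves the payload was authorized by the key holder."""
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

print(verify(payer_key.public_key(), payload, signature))                      # True
print(verify(payer_key.public_key(), b"alice pays bob 4000 USD", signature))   # False
```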

But the question you need to ask is: Do you want or need this disintermediation? Given your use case, is there anything wrong with having a central party who maintains an authoritative database and acts as the transaction gatekeeper? Good reasons to prefer a blockchain-based database over a trusted intermediary might include lower costs, faster transactions, automatic reconciliation, new regulation or a simple inability to find a suitable intermediary.

5. Transaction interaction

So blockchains make sense for databases that are shared by multiple writers who don’t entirely trust each other, and who modify that database directly. But that’s still not enough. Blockchains truly shine where there is some interaction between the transactions created by these writers.

What do I mean by interaction? In the fullest sense, this means that transactions created by different writers often depend on one another. For example, let’s say Alice sends some funds to Bob and then Bob sends some on to Charlie. In this case, Bob’s transaction is dependent on Alice’s, and there’s no way to verify Bob’s transaction without checking Alice’s first. Because of this dependency, the transactions naturally belong together in a single shared database.
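A tiny sketch of that kind of dependency, with identifiers and structure invented for illustration: verifying Bob’s and Charlie’s payments requires walking back through the transactions they spend from.

```python
# Toy dependent transactions: each one spends the output of a named earlier transaction,
# so Bob's payment cannot be verified without first verifying Alice's. Invented format.
transactions = {
    "tx1": {"spends": None,  "from": "mint",  "to": "alice",   "amount": 50},
    "tx2": {"spends": "tx1", "from": "alice", "to": "bob",     "amount": 30},
    "tx3": {"spends": "tx2", "from": "bob",   "to": "charlie", "amount": 10},
}

def verify(tx_id: str) -> bool:
    """Verify a transaction by first verifying the chain of transactions it depends on."""
    tx = transactions[tx_id]
    if tx["spends"] is not None:
        parent = transactions.get(tx["spends"])
        if parent is None or not verify(tx["spends"]):
            return False                        # missing or invalid ancestor
        if tx["amount"] > parent["amount"]:
            return False                        # cannot spend more than was received
        if tx["from"] != parent["to"]:
            return False                        # only the recipient may spend it on
    return True

print(verify("tx3"))   # True, but only because tx2 and tx1 check out first
```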

Taking this further, one nice feature of blockchains is that transactions can be created collaboratively by multiple writers, without either party exposing themselves to risk. This is what allows delivery versus payment settlement to be performed safely over a blockchain, without requiring a trusted intermediary.

A weaker case can also be made for situations where transactions from different writers are cross-correlated with each other, even if they remain independent. One example might be a shared identity database in which multiple entities validate different aspects of consumers’ identities. Although each such certification stands alone, the blockchain provides a useful way to bring everything together in a unified way.

6. Set the rules

This isn’t really a condition, but rather an inevitable consequence of the previous points. If we have a database modified directly by multiple writers, and those writers don’t fully trust each other, then the database must contain embedded rules restricting the transactions performed.

These rules are fundamentally different from the constraints that appear in traditional databases, because they relate to the legitimacy of transformations rather than the state of the database at a particular point in time. Every transaction is checked against these rules by every node in the network, and those that fail are rejected and not relayed on.

Asset ledgers contain a simple example of this type of rule, to prevent transactions creating assets out of thin air. The rule states that the total quantity of each asset in the ledger must be the same before and after every transaction.
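A minimal sketch of that conservation rule (my own formulation, using the same toy ledger shape as above):

```python
from collections import Counter

def conserves_assets(before: dict, after: dict) -> bool:
    """Embedded rule: a transaction is legitimate only if the per-asset totals
    are identical before and after it (nothing created or destroyed)."""
    totals_before, totals_after = Counter(), Counter()
    for (_owner, asset), qty in before.items():
        totals_before[asset] += qty
    for (_owner, asset), qty in after.items():
        totals_after[asset] += qty
    return totals_before == totals_after

before    = {("alice", "USD"): 100, ("bob", "USD"): 25}
ok_after  = {("alice", "USD"): 60,  ("bob", "USD"): 65}   # a transfer of 40
bad_after = {("alice", "USD"): 100, ("bob", "USD"): 65}   # 40 USD out of thin air
print(conserves_assets(before, ok_after), conserves_assets(before, bad_after))  # True False
```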

7. Pick your validators

So far we’ve described a distributed database in which transactions can originate in many places, propagate between nodes in a peer-to-peer fashion, and are verified by every node independently. So where does a “blockchain” come in? Well, a blockchain’s job is to be the authoritative final transaction log, on whose contents all nodes provably agree.

Why do we need this log? First, it enables newly added nodes to calculate the database’s contents from scratch, without needing to trust another node. Second, it addresses the possibility that some nodes might miss some transactions, due to system downtime or a communications glitch. Without a transaction log, this would cause one node’s database to diverge from that of the others, undermining the goal of a shared database.

Third, it’s possible for two transactions to be in conflict, so that only one can be accepted. A classic example is a double spend in which the same asset is sent to two different recipients. In a peer-to-peer database with no central authority, nodes might have different opinions regarding which transaction to accept, because there is no objective right answer. By requiring transactions to be “confirmed” in a blockchain, we ensure that all nodes converge on the same decision.

Finally, in Ethereum-style blockchains, the precise ordering of transactions plays a crucial role, because every transaction can affect what happens in every subsequent one. In this case the blockchain acts to define the authoritative chronology, without which transactions cannot be processed at all.

A blockchain is literally a chain of blocks, in which each block contains a set of transactions that are confirmed as a group. But who is responsible for choosing the transactions that go into each block? In the kind of “private blockchain” which is suitable for enterprise applications, the answer is a closed group of validators (“miners”) who digitally sign the blocks they create. This whitelisting is combined with some form of distributed consensus scheme to prevent a minority of validators from seizing control of the chain. For example, MultiChain uses a scheme called mining diversity, in which the permitted miners work in a round-robin fashion, with some degree of leniency to allow for non-functioning nodes.
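To make the chain structure and the round-robin idea concrete, here is a deliberately simplified sketch; it is not MultiChain’s actual algorithm, it omits signatures and the leniency for offline validators, and all names are invented:

```python
import hashlib, json

VALIDATORS = ["bank_a", "bank_b", "bank_c"]   # invented whitelist, illustration only

def make_block(height: int, prev_hash: str, transactions: list, validator: str) -> dict:
    """Bundle confirmed transactions into a block linked to its predecessor by hash."""
    body = {"height": height, "prev_hash": prev_hash,
            "transactions": transactions, "validator": validator}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def expected_validator(height: int) -> str:
    """Strict round-robin rotation through the permitted validators (the 'mining
    diversity' idea, greatly simplified)."""
    return VALIDATORS[height % len(VALIDATORS)]

chain = [make_block(0, "0" * 64, [], expected_validator(0))]
chain.append(make_block(1, chain[-1]["hash"], ["alice->bob 40 USD"], expected_validator(1)))

# A node accepts a block only if it extends the current tip and was produced by the
# validator whose turn it is; here the "signature" is just the validator name for brevity.
new_block = chain[-1]
assert new_block["prev_hash"] == chain[-2]["hash"]
assert new_block["validator"] == expected_validator(new_block["height"])
print("chain tip:", new_block["hash"][:16], "created by", new_block["validator"])
```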

No matter which consensus scheme is used, the validating nodes have far less power than the owner of a traditional centralized database. Validators cannot fake transactions or modify the database in violation of its rules. In an asset ledger, that means they cannot spend other people’s money, nor change the total quantity of assets represented. Nonetheless there are still two ways in which validators can unduly influence a database’s contents:

  • Transaction censorship. If enough of the validators collude maliciously, they can prevent a particular transaction from being confirmed in the blockchain, leaving it permanently in limbo.
  • Biased conflict resolution. If two transactions conflict, the validator who creates the next block decides which transaction is confirmed on the blockchain, causing the other to be rejected. The fair choice would be the transaction that was seen first, but validators can choose based on other factors without revealing this.

Because of these problems, when deploying a blockchain-based database, you need to have a clear idea of who your validators are and why you trust them, collectively if not alone. Depending on the use case, the validators might be chosen as: (a) one or more nodes controlled by a single organization, (b) a core group of organizations that maintain the chain, or (c) every node on the network.

8. Back your assets

If you’ve got this far, you may have noticed that I tend to refer to blockchains as shared databases, rather than the more common “shared ledgers”. Why? Because as a technology, blockchains can be applied to problems far beyond the tracking of asset ownership. Any database which has multiple non-trusting writers can be implemented over a blockchain, without requiring a central intermediary. Examples include shared calendars, wiki-style collaboration and discussion forums.

Having said that, for now it seems that blockchains are mainly of interest to those who track the movement and exchange of financial assets. I can think of two reasons for this: (a) the finance sector is responding to the (in retrospect, minuscule) threat of cryptocurrencies like bitcoin, and (b) an asset ledger is the simplest and most natural example of a shared database with interdependent transactions created by multiple non-trusting entities.

If you do want to use a blockchain as an asset ledger, you need to answer one additional crucial question: What is the nature of the assets being moved around? By this I don’t just mean cash or bonds or bills of lading, though of course that’s important as well. The question is rather: Who stands behind the assets represented on the blockchain? If the database says that I own 10 units of something, who will allow me to claim those 10 units in the real world? Who do I sue if I can’t convert what’s written in the blockchain into traditional physical assets? (See this asset agreement for an example.)

The answer, of course, will vary by the use case. For monetary assets, one can imagine custodial banks accepting cash in traditional form, and then crediting the accounts of depositors in a blockchain-powered distributed ledger. In trade finance, letters of credit and bills of lading would be backed by the importer’s bank and the shipping company respectively. And further in the future, we can imagine a time when the primary issuance of corporate bonds takes place directly on a blockchain by the company seeking to raise funds.

Conclusion

As I mentioned in the introduction, if your project does not fulfill every single one of these conditions, you should not be using a blockchain. In the absence of any of the first five, you should consider one of: (a) regular file storage, (b) a centralized database, (c) master–slave database replication, or (d) multiple databases to which users can subscribe.

And if you do fulfill the first five, there’s still work to do. You need to be able to express the rules of your application in terms of the transactions which a database allows. You need to be confident about who you can trust as validators and how you’ll define distributed consensus. And finally, if you’re looking at creating a shared ledger, you need to know who will be backing the assets which that ledger represents.

Got all the answers? Congratulations, you have a real blockchain use case. And we’d love to hear from you.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/y3CXTzJBLl4/

Original article

Announcing SQL Server on Linux

It’s been an incredible year for the data business at Microsoft and an incredible year for data across the industry. This Thursday at our Data Driven event in New York, we will kick off a wave of launch activities for SQL Server 2016 with general availability later this year. This is the most significant release of SQL Server that we have ever done, and brings with it some fantastic new capabilities. SQL Server 2016 delivers:

  • Groundbreaking security encryption capabilities that enable data to always be encrypted at rest, in motion and in-memory to deliver maximum security protection
  • In-memory database support for every workload with performance increases up to 30-100x
  • Incredible Data Warehousing performance with the #1, #2 and #3 TPC-H 10 Terabyte benchmarks for non-clustered performance, and the #1 SAP SD Two-Tier performance benchmark on Windows
  • Business Intelligence for every employee on every device – including new mobile BI support for iOS, Android and Windows Phone devices
  • Advanced analytics using our new R support that enables customers to do real-time predictive analytics on both operational and analytic data
  • Unique cloud capabilities that enable customers to deploy hybrid architectures that partition data workloads across on-premises and cloud based systems to save costs and increase agility

These improvements, and many more, are all built into SQL Server and bring you not just a new database but a complete platform for data management, business analytics and intelligent apps – one that can be used in a consistent way across both on-premises and the cloud. In fact, over the last year we’ve been using the SQL Server 2016 code-base to run in production more than 1.4 million SQL Databases in the cloud using our Azure SQL Database as a Service offering, and this real-world experience has made SQL Server 2016 an incredibly robust and battle-hardened data platform.

Gartner recently named Microsoft as leading the industry in their Magic Quadrant for Operational Database Management Systems in both execution and vision. We’re also a leader in Gartner’s Magic Quadrant for Data Warehouse and Data Management Solutions for Analytics, and Magic Quadrant for Business Intelligence and Analytics Platforms, as well as leading in vision in the Magic Quadrant for Advanced Analytics Platforms.

[Graphic: Gartner Magic Quadrants]

Extending SQL Server to Also Now Run on Linux

Today I’m excited to announce our plans to bring SQL Server to Linux as well. This will enable SQL Server to deliver a consistent data platform across Windows Server and Linux, as well as on-premises and cloud. We are bringing the core relational database capabilities to preview today, and are targeting availability in mid-2017.

SQL Server on Linux will provide customers with even more flexibility in their data solution. One with mission-critical performance, industry-leading TCO, best-in-class security, and hybrid cloud innovations – like Stretch Database which lets customers access their data on-premises and in the cloud whenever they want at low cost – all built in.
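If the familiar client stack carries over, application code should look the same against SQL Server on Linux as it does on Windows Server. The sketch below is hypothetical: it assumes the pyodbc package and a Microsoft ODBC driver, and the server name, database and credentials are placeholders, since connection details for the Linux preview haven’t been published:

```python
# Hypothetical sketch: querying a SQL Server instance from Python the same way on
# Linux as on Windows Server. Driver name, host and credentials are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 13 for SQL Server};"
    "SERVER=sqlserver-on-linux.example.com;"
    "DATABASE=Sales;UID=app_user;PWD=example-password"
)
cursor = conn.cursor()
cursor.execute("SELECT TOP 5 name, create_date FROM sys.databases ORDER BY create_date DESC")
for name, created in cursor.fetchall():
    print(name, created)
conn.close()
```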

“This is an enormously important decision for Microsoft, allowing it to offer its well-known and trusted database to an expanded set of customers”, said Al Gillen, group vice president, enterprise infrastructure, at IDC. “By taking this key product to Linux Microsoft is proving its commitment to being a cross platform solution provider. This gives customers choice and reduces the concerns for lock-in. We would expect this will also accelerate the overall adoption of SQL Server.”

“SQL Server’s proven enterprise experience and capabilities offer a valuable asset to enterprise Linux customers around the world,” said Paul Cormier, President, Products and Technologies, Red Hat. “We believe our customers will welcome this news and are happy to see Microsoft further increasing its investment in Linux. As we build upon our deep hybrid cloud partnership, spanning not only Linux, but also middleware, and PaaS, we’re excited to now extend that collaboration to SQL Server on Red Hat Enterprise Linux, bringing enterprise customers increased database choice.”

“We are delighted to be working with Microsoft as it brings SQL Server to Linux,” said Mark Shuttleworth, founder of Canonical. “Customers are already taking advantage of Azure Data Lake services on Ubuntu, and now developers will be able to build modern applications that utilize SQL Server’s enterprise capabilities.”

Bringing SQL Server to Linux is another way we are making our products and new innovations more accessible to a broader set of users and meeting them where they are. Just last week, we announced our agreement to acquire Xamarin. Recently, we also announced Microsoft R Server, our technology based on our acquisition of Revolution Analytics, with support for Hadoop and Teradata.

The private preview of SQL Server on Linux is available starting today and we look forward to working with the community, our customers and our partners to bring it to market.

Please join me, Satya Nadella, Joseph Sirosh and Judson Althoff at our Data Driven event on Thursday to hear more about this news and how Microsoft is helping customers transform their business using data.

Thanks,
Scott

To find out more about SQL Server on Linux, you can sign up to get regular updates and provide input to the team.

[1] Gartner “Magic Quadrant for Operational Database Management Systems,” by Donald Feinberg, Merv Adrian, Nick Heudecker, October 2015
[2] Gartner “Magic Quadrant for Data Warehouse and Data Management Solutions for Analytics,” by Roxane Edjlali and Mark Beyer, February 2016
[3] Gartner “Magic Quadrant for Business Intelligence and Analytics Platforms,” by Josh Parenteau, Rita L. Sallam, Cindi Howson, Joao Tapadinhas, Kurt Schlegel, Thomas W. Oestreich, February 2016
[4] Gartner “Magic Quadrant for Advanced Analytics Platforms,” by Lisa Kart, Gareth Herschel, Alexander Linden, Jim Hare, February 2016

The above graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Microsoft. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/OniEiax0dto/

Original article

Microsoft brings SQL Server to Linux

The new Microsoft has placed increased importance on the cloud, and with other companies following suit, reliance on server solutions has grown. Today the company announces that it is bringing SQL Server to Linux.

Both cloud and on-premises versions will be available, and the news has been welcomed by the likes of Red Hat and Canonical. Although the Linux port of SQL Server is not due to make an appearance until the middle of next year, a private preview version is being made available to testers from today.

Microsoft’s increasing embrace of Linux sees the company expanding to a wider audience than ever. Al Gillen, group vice president, enterprise infrastructure, at IDC says that it shows Microsoft’s “commitment to being a cross platform solution provider”.

Writing on the Official Microsoft blog, Executive Vice President of Cloud and Enterprise Group at Microsoft, Scott Guthrie says:

Today I’m excited to announce our plans to bring SQL Server to Linux as well. This will enable SQL Server to deliver a consistent data platform across Windows Server and Linux, as well as on-premises and cloud. We are bringing the core relational database capabilities to preview today, and are targeting availability in mid-2017.

SQL Server on Linux will provide customers with even more flexibility in their data solution. One with mission-critical performance, industry-leading TCO, best-in-class security, and hybrid cloud innovations — like Stretch Database which lets customers access their data on-premises and in the cloud whenever they want at low cost — all built in.

Microsoft has not yet made clear exactly what other features of SQL Server 2016 will make their way to SQL Server for Linux, but more news is expected over the coming weeks and months.

Paul Cormier, President, Products and Technologies, Red Hat said, “SQL Server’s proven enterprise experience and capabilities offer a valuable asset to enterprise Linux customers around the world.” He continued:

We believe our customers will welcome this news and are happy to see Microsoft further increasing its investment in Linux. As we build upon our deep hybrid cloud partnership, spanning not only Linux, but also middleware, and PaaS, we’re excited to now extend that collaboration to SQL Server on Red Hat Enterprise Linux, bringing enterprise customers increased database choice.

While the full launch of SQL Server for Linux is not due until the middle of 2017, SQL Server 2016 is expected to launch later this year.


Original URL: http://feeds.betanews.com/~r/bn/~3/W8816pQgQ5o/

Original article

Supreme Court will not hear Apple antitrust appeal; lower court decision stands

Well, that’s that.

All we’ll hear about it from the Supreme Court is a single line under the listing of “Certiorari Denied” on the Supreme Court’s current list of summary dispositions (PDF)—but suffice it to say that it’s finally over. The Supreme Court won’t be hearing Apple’s final appeal of the antitrust decision against it, the appeals court decision affirming Judge Cote’s ruling can stand, and Apple should get ready to start paying out $400 million in consumer refunds (despite the failed attempts of a professional objector to stick it with more). Wonder how much I’ll get back?

Andrew Albanese has his usual excellent coverage of the history of the case and the details of the settlement at Publishers Weekly; Bloomberg has it, too. Thankfully, it’s finally over, and we shouldn’t have to hear too much more about it once the payments are dispensed. Expect a flurry of commentary and reactions from across the blogosphere today, though.

On the whole, it’s not terribly surprising that the court ruled the way it did. As a number of participants in the blog symposium I covered pointed out, there was nothing really special about it, except in the minds of Apple’s defenders—and every antitrust case is special to the defendant. It’s unclear whether Scalia’s death affected the decision, though it couldn’t have helped Apple’s case any. It’s also uncertain he’d have favored Apple if he was still around.

In the end, the suit didn’t really make a whole lot of difference over the longer term where e-book prices are concerned. It delayed the onset of agency pricing, but it still came back and looks like it’s here to stay in the end—possibly even by Amazon’s own request. But it did smack the publishers down, force them to stop conspiring with each other as a matter of course even on things like book release dates, and kick back some of the publishers’ and Apple’s ill-gotten gains to the people who had to overspend on agency-priced titles.

And it also means I can get a smug sense of satisfaction by saying, as a statement of fact, Apple broke the law by conspiring with publishers to implement agency pricing. No more arguments that “it isn’t settled yet,” or denials, or weasel-words like “allegedly”: the courts officially found that Apple broke the law and its appeal was denied.

Apple. Broke. The. Law.

(And so did the publishers, though they settled to avoid an official judgment—but if Apple’s conduct was illegal, then so was theirs.)

I’m going to enjoy saying that for quite some time to come.

The post Supreme Court will not hear Apple antitrust appeal; lower court decision stands appeared first on TeleRead News: E-books, publishing, tech and beyond.


Original URL: http://www.teleread.com/supreme-court-will-not-hear-apple-antitrust-appeal-lower-court-decision-stands/

Original article

IDG Contributor Network: GoDaddy goes cloud, and throws in applications for good measure

GoDaddy is a funny beast. On the one hand, the hosting business is huge, boasting 14 million customers worldwide and managing over 60 million domain names. It offers primarily small-business owners a one-stop shop: a place to buy their domain name, build their online presence, gain customers and manage their business.

On the other hand, GoDaddy is a bit of an anachronism. In a world where high-flying cloud infrastructure providers like Amazon Web Services, Microsoft Azure and DigitalOcean are ramping up massive growth, GoDaddy seems a little… old school.


Original URL: http://www.computerworld.com/article/3038758/enterprise-applications/godaddy-goes-cloud-and-throws-in-applications-for-good-measure.html#tk.rss_all

Original article

Logitech announces Intel NUC-powered ‘ConferenceCam Kit’ video conferencing bundle

[Image: Logitech ConferenceCam Kit with Intel NUC]

In 2016, you would think video conferencing would be very prevalent in business offices. Unfortunately, many solutions are expensive and confusing — audio-based conference calls are still quite popular. For video conferencing to truly take off, it must be easy to both set up and use.

Today, Logitech announces the ConferenceCam Kit — a video conferencing bundle powered by the powerful yet diminutive Intel NUC. Will it prove popular with businesses?

“In order to solve these customer pain points, the Logitech VC group created a conference room bundle dubbed ConferenceCam Kit. It includes a pre-specified PC from Intel tailored for video conferencing, a Logitech keyboard (K400+) for entering in meeting passcodes, a pre-configured Windows 10 Professional, Intel Unite for wireless data sharing to the local TV monitor, Intel vPro for IT to remotely manage the PC, and a Logitech ConferenceCam — either our new GROUP or Connect”, says Logitech.

The company further explains, “but most importantly, it contains a new element — a software shell called QuickLaunch SE that runs on top of Windows and turns the PC into a video conferencing kiosk. This provides the best of both worlds for IT: a locked down computer but fully configurable by IT to add select Windows applications (think Office, Google Drive, or custom designed company programs) at their discretion. No Angry Birds allowed. This software has some neat tricks like asking at the end of your meeting if you want to email that spreadsheet on the PC with changes, wipeout any confidential data so it’s not left behind, and reset the passwords to protect security”.

The Intel NUC (NUC5i5MYHE) includes the following specs:

  • 8GB DDR3L RAM
  • Intel Dual Band Wireless-AC 7265
  • Intel SSD Pro 2500 Series (180GB)
  • Microsoft Windows 10 Pro

The mini computer comes bundled with the following:

  • Wireless Touch Keyboard K400 Plus
  • Logitech GROUP or ConferenceCam Connect
  • iluminari QuickLaunch SE
  • Mini display port to HDMI adapter
  • Intel Unite software

This bundle is quite beautiful and comprehensive, but it is not inexpensive. It will launch next month, starting at $1,599. The big benefit here, besides high-quality components, is knowing that everything is compatible. Since Logitech has tested everything together, you can be confident that you will not hit any roadblocks.

You also are not tied to any individual proprietary conferencing solution — a huge plus. Since this runs Windows 10, you can choose almost anything you’d like, such as Skype, WebEx, or Google Hangouts.

If you are interested in purchasing this bundle, Logitech shares its sales number, 1-800-308-8666.


Original URL: http://feeds.betanews.com/~r/bn/~3/CuzCecZQ5Uc/

Original article
