GoDaddy CTO and Cloud VP Heads to Google

GoDaddy’s chief technology officer is departing at a time when the company is expanding its cloud-computing operations, according to a regulatory filing.

Elissa Murphy will be leaving the company later in May. Her departure comes as GoDaddy has begun building out cloud infrastructure, helping it evolve from a simple hosting service into something more robust. These kinds of tools help convince small businesses to stick with GoDaddy’s services, rather than merely registering and hosting a domain.

To be sure, executive departures happen — especially as companies grow and go public. But it’s still an interesting time for her to leave given the company’s expansion into cloud services.

According to a statement provided to Fortune, Murphy is joining Google. Chief information and infrastructure officer Arne Josefsberg will take over, Fortune reports, giving him a critical role at the recently public company, which has to find ways to expand its core business. There are no hints yet as to what Murphy will be doing at Google, but Google, too, has been winning big clients for its cloud service, most recently Spotify.

GoDaddy has had something of a rocky year, share-wise, but shares are still up around 20% so far this year, suggesting that whatever the company is doing is working. Revenue was up 14% year-over-year in the fourth quarter, and last quarter’s results beat analyst expectations.

Still, if the company is going to continue growing, it has to find new lines of business — which means expanding into new areas that give small businesses tools that make them want to stick with GoDaddy, rather than moving to other services.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/C-gvTP8QEU8/godadddy-cto-and-cloud-vp-heads-to-google

Original article

Out-of-Date Apps Put 3 Million Servers At Risk of Crypto Ransomware Infections

An anonymous reader cites an article on Ars Technica: More than 3 million Internet-accessible servers are at risk of being infected with crypto ransomware because they’re running vulnerable software, including out-of-date versions of Red Hat’s JBoss enterprise application, researchers from Cisco Systems said Friday. About 2,100 of those servers have already been compromised by webshells that give attackers persistent control over the machines, making it possible for them to be infected at any time, the Cisco researchers reported in a blog post. The compromised servers are connected to about 1,600 different IP addresses belonging to schools, governments, aviation companies, and other types of organizations. Some of the compromised servers belonged to school districts that were running the Destiny management system that many school libraries use to keep track of books and other assets. Cisco representatives notified officials at Destiny developer Follett Learning of the compromise, and the Follett officials said they fixed a security vulnerability in the program. Follett also told Cisco the updated Destiny software also scans computers for signs of infection and removes any identified backdoors.




Original URL: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/4Waz6KNBQo0/out-of-date-apps-put-3-million-servers-at-risk-of-crypto-ransomware-infections

Original article

White House Source Code Policy a Big Win for Open Government

The U.S. White House Office of Management and Budget (OMB) is considering a new policy for sharing source code for software created by or for government projects. There’s a lot to love about the proposed policy: it would make it easier for people to find and reuse government software, and explicitly encourages government agencies to prioritize free and open source software options in their procurement decisions.

EFF submitted a comment on the policy through the White House’s GitHub repository (you can also download our comment as a PDF). The OMB is encouraging people to send comments through GitHub, reply to and +1 each other’s comments, and even offer direct edits to the policy via pull requests.

Public Domain with Asterisks

But wait, why is a source code policy necessary at all? Isn’t everything the federal government creates already in the public domain?

Sort of. U.S. copyright law has an exception for works created by the federal government, but that exception has always left some doubt over the copyright status of U.S.-government-created works in other countries.

To our knowledge, the government has never enforced its copyright abroad for works that would be considered public domain in the U.S.; however, in some contexts, it has actively asserted that its works are copyrighted internationally. Doubts over copyright can be a stumbling block for researchers, developers, free software communities, and other people who might want to use or study government source code.

That’s why we recommend that the White House expressly dedicate code covered under the policy to the public domain internationally. As a less preferable alternative, the OMB could license the code under a permissive software license. That way, users in other countries could assuage any concerns over copyright by adhering to the terms of the license. As Creative Commons pointed out in its comment on the policy, some government agencies have assuaged this uncertainty by using CC0, a tool copyright owners can use to effectively dedicate their works to the public domain.

That all assumes you can find the source code in the first place. Right now, there’s no convenient way to procure source code for many government-owned projects. It’s technically in the public domain (with the caveat above), but that doesn’t do you much good if it’s not shared publicly in any reliable way. That’s why the proposed policy also includes launching a public repository for government source code.

What About Third-Party Code?

There’s another big problem with the public domain status of government-owned works: it doesn’t cover content created by third parties. If I take a photograph as an employee of the federal government, that photograph gets no copyright protection in the United States (putting aside the question of international use). But if I take it as a work made for hire for a government agency, the copyright exists and is transferred to the government. That discrepancy is particularly relevant when discussing software, because most government software is built by contractors.

The proposed policy would require agencies to publicly share source code for some third-party software. Agencies could develop their own policies for which projects to share, so long as at least 20% of the total code is made public and shared under a license approved by the Open Source Initiative. We think that the 20% rule would be a missed opportunity, and many of our fellow commenters agree. An open-by-default policy would make the government repository a much more valuable resource to the public. We also hope that the OMB considers dedicating third-party code that is assigned to the federal government to the public domain as well, possibly alongside an appropriate free and open source software license.

Narrower Exceptions Will Protect the Policy from Abuse

The policy lays out a process for requesting permission to keep source code private, and gives specific reasons agencies can use to request an exception:

Applicable exceptions are as follows:

  • The release of the item is restricted by another statute or regulation, such as the Export Administration Regulations, the International Traffic in Arms Regulation, or the laws and regulations governing classified information;
  • The release of the item would compromise national security, confidentiality, or individual privacy;
  • The release of the item would create an identifiable risk to the stability, security, or integrity of the agency’s systems or personnel;
  • The release of the item would compromise agency mission, programs, or operations; or
  • The CIO believes it is in the national interest to exempt publicly releasing the work.

Both the term “confidentiality” and that last item on the list strike us as very broad. They could effectively be used to keep any code private—that’s why we suggest omitting them.

The OMB says that it expects exceptions to be very rare. The way to keep them rare even as administrations change is to define them as narrowly and specifically as possible.

Once again, we applaud the OMB for this proposed policy and we’re eager to see it enacted. The OMB is still accepting comments and pull requests through Monday, April 18.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/EwPGhSovLTw/white-house-source-code-policy-big-win-open-government

Original article


BugVM is free and an open source alternative to closed commercial RoboVM


BugVM is free and an open source alternative to closed commercial RoboVM.

It all started on October 22, 2015, when RoboVM officially announced that it had been acquired by another company and that it intended to close its source code. A fork of RoboVM was created to preserve the project’s free and open source status.

This fork is BugVM.

Source Code : https://github.com/bugvm

Update:
RoboVM posted its Winding Down announcement on April 15, 2016. Well… RoboVM just killed itself.

The good news is that BugVM is alive; there are already apps on the App Store created with BugVM.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/kYnnVEPPJKc/

Original article

Challenges of micro-service deployments

I recently took part in a panel discussion on continuous delivery of micro-services, hosted by ElectricCloud, with Daniel Rolnick, Darko Fabijan, and Anders Wallgren. The discussion is worth a listen if you have a spare hour; here, however, I would like to go over some of the interesting points that came up regarding the challenges inherent in micro-service deployments.

Team Overhead

A micro-services architecture has a lot of benefits for the development process. It allows large teams to move rapidly, semi-independently of each other, and it enables rapid prototyping and iteration on small feature sets. However, micro-services put a significant operational and tooling overhead on your development teams. Each of your services requires a deployment pipeline, a monitoring system, automated alerts, on-call rotations, and so forth. For large teams all of this overhead is justified, since the payoff in added feature-work productivity is worth the effort of creating these systems. In a small team, though, if the same few people are responsible for all the services anyway, replicating the pipelines for multiple projects is wasted overhead. As Anders highlighted, you should write version 1.0 of your system as a monolith and then spin off micro-services from the monolith as and when they make sense. This also allows the design of how the system breaks down into services to emerge naturally.

Operations Overhead

There is also an operational overhead to running so many services. In the monolith world, if you push a bad version you roll the system back; if you are getting resource-constrained, you scale out horizontally. In the micro-services world the steps are the same, but you need a lot more monitoring and automation to answer questions like: Which of the tens of services needs to be rolled back? What is the impact of the rollback on other dependent services? If capacity needs to be added, which services should it be added for? Will that just push the problem downstream to other services? If you have automated alerting (which you really should), you also need to ascribe alerts to service owners and maintain on-call schedules for multiple services. In a small organization there will be a lot of overlap in the sets of people responsible for each service, so you will have to coordinate the schedules to make sure the same person is not on the hook for too many services and that people get some respite from on-call rotations. For these reasons, as well as those mentioned in the previous section, it’s better to have a monolith and get all your operational ducks in a row before you start adding the overhead of multiple micro-services into the mix.

Distributed Debugging

With micro-services, when things go wrong you can’t just log into a server and eyeball the logs. The logs for a single user session (in fact, even for a small process within the session) will be spread over many different services. This was already true for monolithic scalable stateless servers, but in the micro-services world, not having centralized logging is a show-stopper. Furthermore, at large scale, having separate monitoring systems (such as Datadog or Graphite) and separate log-aggregation systems (such as ELK, Loggly, or Splunk) is not feasible. At this scale, visualizing metrics and log data is a big-data problem that you are better off solving in one place.
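A common way to make cross-service logs reassemblable in a central store is to stamp every log line a request produces with a single correlation ID. The sketch below is illustrative, not a reference to any particular logging product; all names (header and field names included) are hypothetical.

```python
import json
import logging
import uuid

# Sketch: one correlation ID is minted at the edge and forwarded with the
# request, so a centralized log system can pull the whole trace in one query.

def new_correlation_id() -> str:
    """Generate an ID once at the edge; downstream services reuse it."""
    return uuid.uuid4().hex

def log_event(service: str, correlation_id: str, message: str) -> str:
    """Emit a structured (JSON) log line; return it for illustration."""
    record = {
        "service": service,
        "correlation_id": correlation_id,
        "message": message,
    }
    line = json.dumps(record, sort_keys=True)
    logging.getLogger(service).info(line)
    return line

# A request that crosses two services shares one ID:
cid = new_correlation_id()
frontend_line = log_event("frontend", cid, "received checkout request")
billing_line = log_event("billing", cid, "charged card")
```

Grepping the central store for that one ID then yields the full distributed trace, regardless of which service emitted each line.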

Deployment Coordination and Version Management

Lastly, one of the big differences between monoliths and micro-services is that you go from a dependency tree of services to a graph. A typical service stack in the monolithic model might consist of a web array that calls a cache layer, a database layer, and maybe a few stand-alone services such as authentication. In the micro-services model, you will have an interconnected graph or network of services, each of which depends on several others.

It is very important to ensure that this graph remains a Directed Acyclic Graph (DAG) otherwise you will be in dependency hell and potentially have distributed stack overflow errors.

Dependency Example 1

As shown in the example above, service A calls service B, which calls service C, which calls service A. If the first call to service A is the same as the second, you will be in an infinite loop. If the first call is different from the second, you may still be able to make progress, but you can get into dependency cycles. For example, an update to the API of service A may require a change to service C; but before service C can be updated, service A needs to be updated for the new API. Which do you do first? What happens to the traffic when one of the services is updated and the other is not?

Dependency Example 2

A similar issue arises when you have two services depending on a third service, i.e. service X and service Y both call service Z. What if service X depends on a different version of Z than service Y does? For these (and other) reasons, we recommend that you always maintain backwards compatibility of all APIs, or have very good mechanisms for detecting and responding to the issues highlighted above.

Guidelines

No one on the panel was comfortable enough with their micro-service system to propose anything like a definitive guide for building such a system. However, we did come up with some rules of thumb and general guidelines, including the following.

Build/Use a Platform

We hinted at this earlier: with micro-services you will need to set up a lot of infrastructure, and if you do this separately for each of your services, the overhead will be prohibitive. It is only possible to run micro-service deployments if you have automated all your infrastructure creation and management tasks. In other words, you must build or use a micro-services platform before you start writing micro-services. Kubernetes, Swarm, Mesos, and their ilk will get you a lot of the way there, but you will still need to unify your monitoring, debugging, continuous-delivery pipelines, and service-discovery mechanisms.

Everything must be code-defined

Following on from the previous point, you cannot afford to have any part of your system defined by human processes. Everything must be defined in code, testable, and repeatable. For example, your server/VM setup should be orchestrated using docker-machine, Puppet, Ansible, etc.; your continuous pipelines should be created using something like the Jenkins Job DSL plugin; and your deployment should be defined in something like Docker Compose. With this setup you can easily replicate your infrastructure for each new service, and also push infrastructure updates and fixes to your entire set of services quickly.

Centralize Monitoring, Logging and Alerting

Before you write your first micro-service, you need a central system to ingest, index, and present your system metrics and logging events. Beyond that, you need some form of anomaly detection and monitoring that can analyze events from each new service as it is added, without manual intervention. A monolithic service is like a beloved pet: you know all of its quirks and habits. Micro-services are like cattle: you need them to be more or less identical, managed as a generic herd rather than as individuals.

Enforce backwards and forwards compatibility

You must use a design paradigm and tool set that ensures the API is always backwards and forwards compatible between services. At Kik we use gRPC, which allows us to easily define services and their dependencies using Protocol Buffers. Using only optional fields, and coding defensively for missing fields, helps us ensure our services are resilient to version mismatches. Daniel mentioned that Yodle uses Pact JVM to help with testing compatibility at this layer. There are a host of testing and service-definition frameworks to choose from; just make sure your tools and dev process catch API-breaking changes.
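Protobuf specifics aside, the compatibility rule itself can be sketched in plain Python: a reader supplies defaults for expected-but-missing fields (old sender, new reader) and tolerates fields it doesn't recognize (new sender, old reader). This is a simplification for illustration (real protobuf additionally preserves unknown fields on re-serialization), and all field names here are hypothetical.

```python
# Sketch of the tolerant-reader rule that keeps services resilient to
# version mismatch: defaults for missing fields, unknown fields ignored.

FIELD_DEFAULTS = {
    "user_id": None,      # present since v1; caller must check for None
    "locale": "en_US",    # added in v2; old senders omit it
    "retries": 0,         # added in v3; old senders omit it
}

def parse_message(raw: dict) -> dict:
    """Decode a message tolerantly: missing fields get defaults,
    unrecognized fields are dropped rather than treated as errors."""
    return {name: raw.get(name, default) for name, default in FIELD_DEFAULTS.items()}

# Old v1 sender omits locale/retries; the v3 reader still copes.
old_msg = parse_message({"user_id": 42})
print(old_msg)  # {'user_id': 42, 'locale': 'en_US', 'retries': 0}

# A future v4 sender includes a field this reader has never heard of.
new_msg = parse_message({"user_id": 7, "shard": "eu-1"})
print("shard" in new_msg)  # False: unknown field ignored, not an error
```

Either direction of version skew then degrades gracefully instead of failing the call.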

Micro-services as Networks

Lastly, we recommend you visualize a large micro-services deployment as a network. Monitoring and managing such a deployment is very similar to managing a network system. We need to make sure that requests (packets) do not loop infinitely through the services (routers); perhaps we can use the concept of TTLs to limit the number of hops. We need to detect and respond to failures at the edges: if a service deep in the call hierarchy is down, do we need to make all the calls required to reach it, or can we shed load by preempting the request early (very similar to BGP route availability)? We need to make sure that services are not overloaded by calls from other services; here we may be able to borrow ideas from congestion control in networks, and tools like Heka and Hystrix may be useful.
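The TTL idea borrowed from IP networking is simple to sketch: each service decrements a hop-count header before calling downstream, and a request that has traveled too far is rejected instead of circling forever. The header name and hop budget below are illustrative, not a standard.

```python
# Sketch: a hop-count "TTL" header, decremented at every service hop,
# breaks infinite request loops the way IP TTLs break routing loops.

MAX_HOPS = 8  # illustrative budget for legitimate call depth

class TooManyHops(Exception):
    pass

def forward(headers: dict) -> dict:
    """Prepare headers for the next downstream call, enforcing the TTL."""
    ttl = int(headers.get("x-request-ttl", MAX_HOPS))
    if ttl <= 0:
        raise TooManyHops("request exceeded its hop budget; likely a loop")
    out = dict(headers)
    out["x-request-ttl"] = str(ttl - 1)
    return out

# A -> B -> C is fine; a loop eventually trips the guard.
h = {"x-request-ttl": "2"}
h = forward(h)            # A -> B, ttl now 1
h = forward(h)            # B -> C, ttl now 0
try:
    forward(h)            # C -> A would loop; rejected instead
except TooManyHops:
    print("loop broken")
```

In practice this would live in shared RPC middleware so no individual service can forget to decrement.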

Summary

Micro-services are a huge step forward in defining scalable, manageable, and maintainable deployments, and they are a natural progression of service-oriented architecture. However, they are not a magic bullet for the fundamental problems of building and running distributed software at scale. A micro-services architecture does force you to be more conscientious about following best practices and automating workflows. The big takeaway from the discussion is that unless you are willing to divert a lot of time and resources from feature work into building and maintaining a service framework, it’s better to avoid taking the plunge into the micro-services world. If, however, you can invest the time to build a great service framework and workflow, you will come out of the transition a more agile and productive organization.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/NLYNkYn6FY8/microservice.html

Original article

Awesome-cpus: All CPU and MCU documentation in one place


All CPU and MCU documentation in one place

  • 8085: Intel 8085 data sheet. (Apr 12, 2016)
  • ARM: Uncompress files. (Apr 13, 2016)
  • Alpha: Uncompress files. (Apr 13, 2016)
  • CRIS: Upload my old stash. (Dec 31, 2015)
  • DSP56000: Upload my old stash. (Dec 31, 2015)
  • ESA390: Upload my old stash. (Dec 31, 2015)
  • F18A: GreenArrays F18A technology reference. (Apr 12, 2016)
  • H8: Upload my old stash. (Dec 31, 2015)
  • HD6301: Hitachi HD6301 data sheet. (Apr 12, 2016)
  • IA-64: Uncompress files. (Apr 13, 2016)
  • M68000: Uncompress files. (Apr 13, 2016)
  • MC6809: Motorola 6809 manual. (Apr 12, 2016)
  • MCS6500: 6510 data sheet. (Apr 12, 2016)
  • MIPS: Uncompress files. (Apr 13, 2016)
  • MSP430: Texas Instruments MSP430 Quick Reference. (Apr 12, 2016)
  • OpenRISC: OpenRISC architecture manual v1.1. (Apr 12, 2016)
  • PDP-1: DEC PDP-1 documents. (Apr 12, 2016)
  • PDP-10: Convert Postscript files to PDF. (Apr 15, 2016)
  • PDP-8: Harris HD6120 specifications. (Apr 12, 2016)
  • PIC: Microchip PIC family reference manual. (Apr 12, 2016)
  • PowerPC: Uncompress files. (Apr 13, 2016)
  • RISC-V: RISC-V user-level ISA manual. (Apr 12, 2016)
  • RTX2000: Harris RTX2010RH data sheet. (Apr 12, 2016)
  • SPARC: Convert Postscript files to PDF. (Apr 16, 2016)
  • SuperH: Upload my old stash. (Dec 31, 2015)
  • Xtensa: Tensilica Xtensa ISA reference manual. (Apr 12, 2016)
  • Z80: Zilog Z80 documents. (Apr 12, 2016)
  • x86-64: Upload my old stash. (Dec 31, 2015)
  • zArchitecture: Upload my old stash. (Dec 31, 2015)


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/urOTjbz2q2I/awesome-cpus

Original article

Tensorflow – Play with neural networks!


Deep playground is an interactive visualization of neural networks, written in TypeScript using d3.js. We use GitHub issues for tracking new requests and bugs. Your feedback is highly appreciated!

If you’d like to contribute, be sure to review the contribution guidelines.

To run the visualization locally, you just need a server to serve all the files from the dist directory. If you don’t have one handy, run npm install and then npm run serve. To see the visualization, visit http://localhost:8080/ in your browser.

When developing, use npm run serve-watch. This starts a static server along with watchers that automatically recompile the TypeScript, HTML, and CSS files whenever they change.

To produce a minified javascript file for production, run npm run build.

To push to production: git subtree push --prefix dist origin gh-pages.

This is not an official Google product.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/2Liij6ElcKE/playground

Original article

Google has started a video series on machine learning and I can understand it

Six lines of Python is all it takes to write your first machine learning program! In this episode, we’ll briefly introduce what machine learning is and why it’s important. Then, we’ll follow a recipe for supervised learning (a technique to create a classifier from examples) and code it up.
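The episode builds its classifier with scikit-learn; a sketch along the lines of the video's fruit example follows. The features are weight in grams and texture (1 = smooth, 0 = bumpy), labels are 0 = apple and 1 = orange; treat the exact values as illustrative rather than a transcript.

```python
from sklearn import tree

# Toy training data: [weight_grams, texture]; smooth light fruit are
# apples (0), bumpy heavy fruit are oranges (1).
features = [[140, 1], [130, 1], [150, 0], [170, 0]]
labels = [0, 0, 1, 1]

clf = tree.DecisionTreeClassifier()   # the "box of rules" learned from examples
clf = clf.fit(features, labels)       # supervised learning: fit rules to examples
print(clf.predict([[160, 0]]))        # a heavy, bumpy fruit -> predicted orange
```

That really is about six lines: the classifier infers the weight/texture rules from the four labeled examples instead of us hand-coding them.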

Subscribe to the Google Developers: http://goo.gl/mQyv5L


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/RWH27UK1Zw0/watch

Original article
