Growler: Asyncio Micro-Framework in Python


Growler is a web framework utilizing the new asynchronous library (asyncio) described in PEP 3156
and added to the standard library in Python 3.4.
It takes a cue from Node.js's Express
library, using a series of middleware to process HTTP requests.
The custom chain of middleware provides an easy way to implement complex applications.


When available, Growler will be installable via pip:

$ pip install growler


There are optionals to the install command that ensure additional functionality is installed.
For example, if you want to use the (quite pythonic) Jade HTML
template engine, you can install it with Growler by adding it to the list of optionals:

$ pip install growler[jade]

When multiple optionals are available, they will be listed here.


The core of the framework is the growler.App class, which acts as both server and handler.
The App object creates a request and a response object when a client connects and passes the
pair to a series of middleware specified when setting up the server.
Note: The middleware are processed in the same order they are specified.
Headers are parsed, then each middleware added to the app (via app.use()) is run in turn, then
routes are matched and their handler functions called.
The middleware manipulate the request and response objects and either respond to the client or
pass to the next middleware in the chain.
This stream/filter model makes it very easy to modularize and extend web applications with any
features, backed by the power of python.
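The chain model above can be sketched in a few lines of plain Python (invented names here, not Growler's actual API): each middleware sees the request/response pair in registration order, and the chain stops as soon as one of them responds.

```python
# Invented names for illustration -- not Growler's real API.
class MiniApp:
    def __init__(self):
        self.middleware = []

    def use(self, func):
        self.middleware.append(func)

    def handle(self, req, res):
        for mw in self.middleware:       # run in the order added via use()
            mw(req, res)
            if res.get('sent'):          # a middleware responded; stop here
                break

app = MiniApp()
# A logging middleware that passes through, then one that responds:
app.use(lambda req, res: res.setdefault('log', []).append(req['path']))
app.use(lambda req, res: res.update(sent=True, body='Hello World!!'))

res = {}
app.handle({'path': '/hello'}, res)
print(res['body'])   # -> Hello World!!
```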

Example Usage

import asyncio

from growler import App
from growler.middleware import (Logger, Static, Renderer)

loop = asyncio.get_event_loop()

# Construct our application with name GrowlerServer
app = App('GrowlerServer', loop=loop)

# Add some growler middleware to the application
app.use(Logger())
app.use(Renderer("views/", "jade"))

# Add some routes to the application
@app.get('/')
def index(req, res):
    res.render("home")

@app.get('/hello')
def hello_world(req, res):
    res.send_text("Hello World!!")

# Create the server - this automatically adds it to the asyncio event loop
Server = app.create_server(host='', port=8000)

# Tell the event loop to run forever - this will listen to the server's
# socket and wake up the growler application upon each connection
loop.run_forever()

This code creates an application identified by 'GrowlerServer' (this name does nothing
at this point) with some listening options, host and port.
Requests are passed to some middleware provided by the Growler package: Logger and Renderer.
Logger simply prints the IP address of the connecting client, and Renderer adds the render
function to the response object (used in index(req, res)).

Decorators are used to add endpoints to the application, so requests with a path matching '/'
will call index(req, res) and requests matching '/hello' will call hello_world(req, res).
Calling app.create_server(...) creates an asyncio server object with the event loop given
in the app’s constructor.
You can't do much with the server directly, so after creating it, the application will run for
as long as the event loop has control.
The easiest way to do this is to call loop.run_forever() after app.create_server(...).
Or do it in one line with app.create_server_and_run_forever(...).


Growler introduces the virtual namespace growler_ext to which other projects may add their
own growler-specific code.
Of course, these packages may be imported in the standard way, but Growler provides an
autoloading feature via the growler.ext module (note the ‘.’ in place of ‘_’) which will
automatically import any packages found in the growler_ext namespace.
This not only provides a standard interface for extensions, but allows for different
implementations of an interface to be chosen by the environment, rather than hard-coded in.
It can also reduce the number of import statements at the beginning of the file.
This specialized importer may be imported as a standalone module:

from growler import (App, ext)

app = App()

or a module to import ‘from’:

from growler import App
from growler.ext import MyGrowlerExtension

app = App()

This works by replacing the ‘real’ ext module with an object that will import submodules in the
growler_ext namespace automatically.
Perhaps unfortunately, because of this there is no way I know of to allow the
import growler.ext.my_extension syntax, as this skips the importer object and raises an
import error.
Users must use the from growler.ext import ... syntax instead.
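The module-replacement trick itself can be sketched in plain Python (assumed mechanics, not Growler's actual implementation), using a stdlib namespace in place of growler_ext:

```python
import importlib
import sys
import types

# Assumed mechanics, not Growler's actual code: an object placed in
# sys.modules lazily imports submodules of another namespace on
# attribute access.
class NamespaceProxy(types.ModuleType):
    def __init__(self, alias, namespace):
        super().__init__(alias)
        self._namespace = namespace

    def __getattr__(self, name):
        if name.startswith('__'):        # leave dunder lookups alone
            raise AttributeError(name)
        mod = importlib.import_module(self._namespace + '.' + name)
        setattr(self, name, mod)         # cache for later lookups
        return mod

# Demo with the stdlib 'xml' namespace standing in for growler_ext:
sys.modules['lazy_xml'] = NamespaceProxy('lazy_xml', 'xml')
from lazy_xml import dom                 # actually imports xml.dom
print(dom.__name__)   # -> xml.dom
```

Note that a plain `import lazy_xml.dom` would fail, matching the limitation described above: the statement bypasses the proxy object and asks the regular import machinery for a package that does not exist on disk.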

The best practice for developers to add their middleware to Growler is now to put their code in
the python module growler_ext/my_extension.
This will allow your code to be imported by others via from growler.ext import my_extension
or the combination of from growler import ext and ext.my_extension.

An example of an extension is the indexer
which hosts an automatically generated index of a filesystem directory.
It should implement the best practices of how to write extensions.


As it stands, Growler is single threaded, and not tested very well. Any submissions or comments
would be appreciated.

The name Growler comes from the beer bottle, keeping in line with the theme of giving
python micro-web-frameworks fluid-container names.


Growler is licensed under Apache 2.0.



Today we’re announcing the preview release of our new documentation service, showcasing content supporting our Enterprise Mobility products.


In short, content matters. We interviewed and surveyed hundreds of developers and IT Pros and sifted through your website feedback over the years on UserVoice. It was clear we needed to make a change and create a modern web experience for content. The first thing we did was evaluate our existing content infrastructure TechNet and MSDN. Both sites are built on a 10-15 year-old brittle codebase with an archaic publishing and deployment system that was never designed to run on the cloud.

Our focus was not only on the experience, but also on the content we create and how each of you consume it. For years customers have told us to go beyond walls of text with feature-level content and help them implement solutions to their business problems. We knew that the content we delivered and the platform we built must make it easy for customers to learn and deploy solutions.

We realized that to get the overall experience right we needed to start from scratch; from this effort comes docs.microsoft.com – a new hope for documentation at Microsoft.

Note: This preview release of the website includes content *only* for Enterprise Mobility Documentation (which consists of Advanced Threat Analytics, Azure Active Directory, Azure Remote App, Multi-factor Authentication, Azure Rights Management, Intune, and Microsoft Identity Manager). In the future, as our platform matures with the help of your feedback, we will migrate more of our documentation onto this experience.

Key Features

Let’s start with an example documentation page shown below and we’ll showcase some of the new features on the site.

Documentation Example


To improve content readability, we changed the site to have a set content width. Eye tracking studies have shown that a set content width improves comprehension and reading speed, as it’s difficult for the eye to follow long passages left-to-right. To show this in action, below is an example of an Intune article running on docs.microsoft.com, followed by the same article on TechNet. We’ve also increased the font size for the left navigation and the text itself, something customers have been asking for (UserVoice – Increase Font size).

Docs and TechNet comparison

Estimated Reading Time

Another simple enhancement we’ve made based on your input is to provide an estimated reading time for an article. We know many of you are learning and evaluating technology in the few minutes between meetings, and you’re more likely to read an article if you know how much of a time commitment is required. We also added date stamps to content to help customers understand how fresh the information is, based on UserVoice feedback.

Estimated reading time

Content and Site Navigation

One key area of investment based on customer interviews and UserVoice feedback was improvements in site navigation, information architecture and content organization based on the customer’s intent. We refactored our content into logical groupings around evaluating, getting started, planning, deploying, managing, or troubleshooting products or services. You can see this content broken down in both the left navigation and our product/service pages.

Below is a screenshot of the Intune documentation home page:

Intune documentation screenshot

This same categorization is on the left navigation of articles as well:

Left navigation

Shortened Article Length

Another common piece of feedback was that our content at times can be overwhelming because of its length and that long articles are more difficult to navigate and find what you’re looking for. To address this, we’ve broken down many longer articles into smaller logical steps and provided Previous and Next buttons at the bottom of articles to navigate between steps in a multi-part tutorial as shown below.
Back / Next buttons
While many customers like the ability to have multi-part tutorials, we also heard from customers who want the ability to combine multi-step tutorials into a single, offline printer-friendly PDF. We don’t have this yet, but it will be coming soon to the preview.

Responsive Design

To build a great experience on mobile devices, tablets, and PCs like you asked for on UserVoice, we switched to a responsive layout. Clicking the Options button will expand/collapse to show the same options on a desktop view.

Responsive Page Design

Community Contributions

All documentation on docs.microsoft.com is open sourced and designed to allow community contributions. This follows in the footsteps of other teams at Microsoft that have already open sourced all or parts of their documentation including ASP.NET, Azure, .NET Core, and Microsoft Graph.

Every article has an Edit button (shown below) that takes you to the source Markdown file in GitHub where you can easily submit a pull request to fix or improve content.

Feedback Mechanisms

Your questions, comments, and feedback are important to us. We’ve partnered with Livefyre to provide comments and Sidenotes on all of our articles. At the top of every article you’ll see a link for comments as shown below.

Comments link

Clicking comments will take you to the bottom of the page where you can log in (using Twitter, Facebook, Google, Yahoo, or Microsoft credentials) to add, follow, or like comments.

Comments at bottom

You can also add Sidenotes, or notes on each paragraph of content or on specifically highlighted text. To do that, move the mouse cursor to the comment symbol on the right and click it to add an inline comment.

Sidenote example

Social Sharing

The sharing button at the top of the page lets you easily share with Twitter and Facebook.

Sharing to Twitter and Facebook

You can also use your mouse cursor to select content on an article to add a comment or share on Twitter or Facebook directly from the context menu as shown below.

Select and comment or share content

Friendly URLs

We care about our web experience and one thing that regularly bugged us as users of TechNet and MSDN is that articles didn’t have friendly, readable URLs. Here’s an example of the same article with our new URLs.

Website theming

We also added a theme picker to articles so that you can change between a light and dark theme, something that some of you have asked for on UserVoice.

Light and Dark Theme Selector

The image below shows the difference between the light and the dark theme.

Light and Dark themes


Fundamentals like site performance are a key feature and something many customers have asked us to improve on UserVoice. Page load times on docs.microsoft.com are 50-300% faster than before, and we are better geo-distributed than ever. We’ve also built on an architecture that runs 100% on Azure.

We want your feedback!

We hope you enjoy the preview version of docs.microsoft.com, and please send us your feedback. In future posts we’ll discuss our plans to dramatically improve the experience for reference content and our plans for content localization.


The curious case of slow downloads

Some time ago we discovered that certain very slow downloads were getting abruptly terminated and began investigating whether that was a client (i.e. web browser) or server (i.e. us) problem.

Some users were unable to download a binary file a few megabytes in length. The story was simple—the download connection was abruptly terminated even though the file was in the process of being downloaded. After a brief investigation we confirmed the problem: somewhere in our stack there was a bug.

Describing the problem was simple, reproducing it was easy with a single curl command, but fixing it took a surprising amount of effort.

CC BY 2.0 image by jojo nicdao

In this article I’ll describe the symptoms we saw, how we reproduced it and how we fixed it. Hopefully, by sharing our experiences we will save others from the tedious debugging we went through.

Two things caught our attention in the bug report. First, only users on mobile phones were experiencing the problem. Second, the asset causing issues—a binary file—was pretty large, at around 30MB.

After a fruitful session with tcpdump one of our engineers was able to prepare a test case that reproduced the problem. As so often happens, once you know what you are looking for reproducing a problem is easy. In this case setting up a large file on a test domain and using the --limit-rate option to curl was enough:

$ curl -v --limit-rate 10k > /dev/null
* Closing connection #0
curl: (56) Recv failure: Connection reset by peer  

Poking with tcpdump showed there was an RST packet coming from our server exactly 60 seconds after the connection was established:

$ tcpdump -tttttni eth0 port 80
00:00:00 IP > Flags [S], seq 3193165162, win 43690, options [mss 65495,sackOK,TS val 143660119 ecr 0,nop,wscale 7], length 0  
00:01:00 IP > Flags [R.], seq 1579198, ack 88, win 342, options [nop,nop,TS val 143675137 ecr 143675135], length 0

Clearly our server was doing something wrong. The RST packet coming from the CloudFlare server is just bad. The client behaves, sends ACK packets politely, consumes the data at its own pace, and then we just abruptly cut the conversation.

We are heavy users of NGINX. In order to isolate the problem we set up a basic off-the-shelf NGINX server. The issue was easily reproducible locally:

$ curl --limit-rate 10k  localhost:8080/large.bin > /dev/null
* Closing connection #0
curl: (56) Recv failure: Connection reset by peer  

This proved the problem was not specific to our setup—it was a broader NGINX issue!

After some further poking we found two culprits. First, we were using the reset_timedout_connection setting. This causes NGINX to close connections abruptly. When NGINX wants to time out a connection it sets SO_LINGER without a timeout on a socket, followed by a close(). This triggers the RST packet, instead of a usual graceful TCP finalization. Here’s an strace log from NGINX:

04:20:22 setsockopt(5, SOL_SOCKET, SO_LINGER, {onoff=1, linger=0}, 8) = 0  
04:20:22 close(5) = 0  
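The abort-on-close behavior in that strace log can be reproduced in a few lines of Python (a standalone sketch, not NGINX code): with l_onoff=1 and l_linger=0, close() emits an RST, and the peer's next read fails with "connection reset by peer", exactly what curl reported.

```python
import socket
import struct
import time

# Standalone sketch (not NGINX code) of the strace log above: SO_LINGER
# with l_onoff=1, l_linger=0 makes close() abort the connection with an
# RST instead of the usual FIN handshake.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('127.0.0.1', 0))
srv.listen(1)

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
conn, _ = srv.accept()

# struct linger { int l_onoff; int l_linger; }  ->  on, zero timeout
conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0))
conn.close()                     # the kernel emits the RST here

time.sleep(0.2)                  # give the RST time to reach the peer
try:
    cli.recv(1)
    outcome = 'graceful close'
except ConnectionResetError:
    outcome = 'connection reset by peer'
print(outcome)   # -> connection reset by peer
```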

We could just have disabled the reset_timedout_connection setting, but that wouldn’t have solved the underlying problem. Why was NGINX closing the connection in the first place?

After further investigation we looked at the send_timeout configuration option. The default value is 60 seconds, exactly the timeout we were seeing.

http {
    send_timeout 60s;
}

The send_timeout option is used by NGINX to ensure that all connections will eventually drain. It controls the time allowed between successive send/sendfile calls on each connection. Generally speaking it’s not fine for a single connection to use precious server resources for too long. If the download is going on too long or is plain stuck, it’s okay for the HTTP server to be upset.

But there was more to it than that.

Armed with strace we investigated what NGINX actually did:

04:54:05 accept4(4, ...) = 5  
04:54:05 sendfile(5, 9, [0], 51773484) = 5325752  
04:55:05 close(5) = 0  

In the config we ordered NGINX to use sendfile to transmit the data. The call to sendfile succeeds and pushes 5MB of data to the send buffer. This value is interesting—it’s about the amount of space we have in our default write buffer setting:

$ sysctl net.ipv4.tcp_wmem
net.ipv4.tcp_wmem = 4096 5242880 33554432  

A minute after the first long sendfile the socket is closed. Let’s see what happens when we increase send_timeout to something big (like 600 seconds):

08:21:37 accept4(4, ...) = 5  
08:21:37 sendfile(5, 9, [0], 51773484) = 6024754  
08:24:21 sendfile(5, 9, [6024754], 45748730) = 1768041  
08:27:09 sendfile(5, 9, [7792795], 43980689) = 1768041  
08:30:07 sendfile(5, 9, [9560836], 42212648) = 1768041  

After the first large push of data, sendfile is called more times. Between each successive call it transfers about 1.7 MB. Between these syscalls, about every 180 seconds, the socket was constantly being drained by the slow curl, so why didn’t NGINX refill it constantly?

A motto of Unix design is “everything is a file”. I prefer to think about this as: “in Unix everything can be readable and writeable when given to poll“. But what exactly does “being readable” mean? Let’s discuss the behavior of network sockets on Linux.

The semantics of reading from a socket are simple:

  • Calling read() will return the data available on the socket, until it’s empty.
  • poll reports the socket as readable when any data is available on it.

One might think this is symmetrical and similar conditions hold for writing to a socket, like this:

  • Calling write() will copy data to the write buffer, up until “send buffer” memory is exhausted.
  • poll reports the socket is writeable if there is any space available in the send buffer.

Surprisingly, the last point is not true.

It’s very important to realize that in the Linux Kernel, there are two separate code paths: one for sending data and another one for checking if a socket is writeable.

In order for send() to succeed two conditions must be met:

  • There must be some space available in the socket’s send buffer.
  • The amount of enqueued, not-yet-sent data must be below the low-water mark.

On the other hand, the conditions for a socket to be reported as “writeable” by poll are slightly narrower:

  • There must be some space available in the socket’s send buffer.
  • The amount of enqueued, not-yet-sent data must be below the low-water mark.
  • The amount of free space in the send buffer must be at least half of the used send buffer size.

The last condition is critical. This means that after you fill the socket send buffer to 100%, the socket will become writeable again only when it’s drained below 66% of send buffer size.

Going back to our NGINX trace, the second sendfile we saw:

08:24:21 sendfile(5, 9, [6024754], 45748730) = 1768041  

The call succeeded in sending 1.7 MiB of data. This is close to 33% of 5 MiB, our default wmem send buffer size.

I presume this threshold was implemented in Linux in order to avoid refilling the buffers too often. It is undesirable to wake up the sending program after each byte of the data was acknowledged by the client.

With full understanding of the problem we can decisively say when it happens:

  1. The socket send buffer is filled to at least 66%.

  2. The customer’s download speed is poor and it fails to drain the buffer to below 66% within 60 seconds.

  3. When that happens, the send buffer is not refilled in time, it’s not reported as writeable, and the connection gets reset with a timeout.

There are a couple of ways to fix the problem.

One option is to increase the send_timeout to, say, 280 seconds. This would ensure that given the default send buffer size, consumers faster than 50Kbps will never time out.
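A quick back-of-the-envelope check of that number, using the buffer size and threshold from this article:

```python
# Sanity-checking the 280s / 50Kbps claim with this article's numbers.
buf = 5 * 1024 * 1024            # default send buffer (tcp_wmem), bytes
must_drain = buf / 3             # ~1.7 MiB: from 100% full to below 66%
send_timeout = 280               # proposed nginx send_timeout, seconds

min_rate_bps = must_drain / send_timeout        # bytes per second
print(round(min_rate_bps * 8 / 1000))           # -> 50 (kbit/s)
```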

Another choice is to reduce the tcp_wmem send buffers sizes.

The final option is to patch NGINX to react differently on timeout. Instead of closing the connection, we could inspect the amount of data remaining in the send buffer. We can do that with ioctl(TIOCOUTQ). With this information we know exactly how quickly the connection is being drained. If it’s above some configurable threshold, we could decide to grant the connection some more time.
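For illustration, here is how the same measurement looks from Python (a sketch only; such a patch would do this in C inside NGINX):

```python
import fcntl
import socket
import struct
import termios

# Sketch of the measurement such a patch could make (Linux-specific):
# TIOCOUTQ reports how many bytes are still queued, unsent or
# unacknowledged, in a socket's send buffer.
def bytes_unsent(sock):
    raw = fcntl.ioctl(sock.fileno(), termios.TIOCOUTQ, struct.pack('i', 0))
    return struct.unpack('i', raw)[0]

# Usage on a loopback connection:
srv = socket.socket()
srv.bind(('127.0.0.1', 0))
srv.listen(1)
cli = socket.socket()
cli.connect(srv.getsockname())
conn, _ = srv.accept()

cli.sendall(b'x' * 65536)
print(bytes_unsent(cli))   # bytes the peer has not yet acknowledged
```

Sampling this value at each timeout tick tells you whether the connection is draining, however slowly, or is truly stuck.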

My colleague Chris Branch prepared a Linux specific patch to NGINX. It implements a send_minimum_rate option, which is used to specify the minimum permitted client throughput.

The Linux networking stack is very complex. While it usually works really well, sometimes it gives us a surprise. Even very experienced programmers don’t fully understand all the corner cases. During debugging we learned that setting timeouts in the “write” path of the code requires special attention. You can’t just treat the “write” timeouts in the same way as “read” timeouts.

It was a surprise to me that the semantics of a socket being “writeable” are not symmetrical to the “readable” state.

In the past we found that raising receive buffers can have unexpected consequences. Now we know tweaking wmem values can affect something totally different—NGINX send timeouts.

Tuning a CDN to work well for all the users takes a lot of work. This write up is a result of hard work done by four engineers (special thanks to Chris Branch!). If this sounds interesting, consider applying!


Starting with 1999.io


If you’ve been wondering what the fuss is about, here’s your chance to find out.

  1. First, if you have a machine that can run Node.js apps, you can install the server on your own system and run it there. 
  2. Or if you want to quickly find out what it’s like, you can create a site on my server. The usual caveats apply. I can’t run these test sites forever, but I have no immediate plans to take them down. The safest bet if you plan to use 1999 to blog for real is to run your own server or to share one with someone else. Each server can host lots of sites (and each site can have multiple contributors).
  3. If you have questions, please post a note on the 1999-server or 1999-user list, depending on whether or not you’re running your own server.
  4. If you find a bug, please report it using the Issues tracker on the 1999-project site on GitHub. 

The development and bug-fixing process continues. Still more docs to write.

I’m interested in knowing what you think. It’s designed to be easy to get started with, and posting quick short notes is what it’s optimized for. So please use 1999 to talk about 1999.  

Now that the software is public, I’ll start posting notes about new features and fixes, and also point out the important features. You’ll get lots of chances to learn about it, if you’re a regular reader of this blog.

So here we go, pouring new power into the Open Web. Exciting times!

Hope you like the software! 


IBM Gives Everyone Access To Its Five-Qubit Quantum Computer

An anonymous reader writes: IBM said on Wednesday that it’s giving everyone access to one of its quantum computing processors, which can be used to crunch large amounts of data. Anyone can apply through IBM Research’s website to test the processor, however, IBM will determine how much access people will have to the processor depending on their technology background — specifically how knowledgeable they are about quantum technology. With the project being “broadly accessible,” IBM hopes more people will be interested in the technology, said Jerry Chow, manager of IBM’s experimental quantum computing group. Users can interact with the quantum processor through the Internet, even though the chip is stored at IBM’s research center in Yorktown Heights, New York, in a complex refrigeration system that keeps the chip cooled near absolute zero.


Read more of this story at Slashdot.


Installing Nginx with PHP 7 and MySQL 5.7 (LEMP) on Ubuntu 16.04 LTS

Nginx (pronounced “engine x”) is a free, open-source, high-performance HTTP server. Nginx is known for its stability, rich feature set, simple configuration, and low resource consumption. This tutorial shows how you can install Nginx on an Ubuntu 16.04 server with PHP 7 support (through PHP-FPM) and MySQL support (LEMP = Linux + Nginx + MySQL + PHP).


Simplevisor: Intel x64 Windows-specific hypervisor


Have you always been curious about how to build a hypervisor? Has Intel’s documentation (the many hundreds of pages) gotten you down? Have the samples you’ve found online just made things more confusing, or required weeks of reading through tens of thousands of lines of code? If so, SimpleVisor might be the project for you.

Not counting the exhaustive comments which explain every single line of code, and specific Windows-related or Intel-related idiosyncrasies, SimpleVisor clocks in at about 500 lines of C code, and 10 lines of x64 assembly code, all while containing the ability to run on every recent version of 64-bit Windows, and supporting dynamic load/unload at runtime.

SimpleVisor can be built with any recent copy of Visual Studio 2015, and while older compilers have not been tested and are not supported, it’s likely that they can build the project as well. It’s important, however, to keep the various compiler and linker settings as you see them.

SimpleVisor has currently been tested on the following platforms successfully:

  • Windows 8.1 on a Haswell Processor (Custom Desktop)
  • Windows 10 Redstone 1 on a Sandy Bridge Processor (Samsung 930 Laptop)
  • Windows 10 Threshold 2 on a Skylake Processor (Surface Pro 4 Tablet)
  • Windows 10 Threshold 2 on a Skylake Processor (Dell Inspiron 11-3153 w/ SGX)

At this time, it has not been tested on any Virtual Machine, but barring any bugs in the implementations of either Bochs or VMWare, there’s no reason why SimpleVisor could not run in those environments as well. However, if your machine is already running under a hypervisor such as Hyper-V or Xen, SimpleVisor will not load.

Keep in mind that x86 versions of Windows are expressly not supported, nor are processors earlier than the Nehalem microarchitecture.


Too many hypervisor projects out there are either extremely complicated (Xen, KVM, VirtualBox) and/or closed-source (VMware, Hyper-V), and heavily focused toward Linux-based development or systems. Additionally, most of them (other than Hyper-V) are expressly built for the purpose of enabling the execution of virtual machines, and not the virtualization of a live, running system, in order to perform introspection or other security-related tasks on it.

A few projects do stand out from the fold, however, such as the original Blue Pill from Joanna, or projects such as VirtDbg and HyperDbg. Unfortunately, most of these have become quite old by now, some only function on x86 processors, and they don’t support newer operating systems such as Windows 10.

The closest project that actually delivers a Windows-centric, modern, and supported hypervisor is HyperPlatform, and we strongly recommend its use as a starting place for more broadly usable research-type hypervisor development. However, in attempting to create a generic “platform” that is more broadly robust, HyperPlatform also suffers from a bit of bloat, making it harder to understand what truly are the basic needs of a hypervisor, and how to initialize one.

The express goal of this project, as stated above, was to minimize code in any way possible, without causing negative side-effects, and focusing on the ‘bare-metal’ needs. This includes:

  • Minimizing use of assembly code. If it weren’t for the lack of an __lgdt intrinsic, and a workaround for the behavior of a Windows API, only the first 4 instructions of the hypervisor’s entry point would require assembly. As it stands, the project has a total of 10 instructions, spread throughout 3 functions. This is a massive departure from other hypervisor projects, which often have multiple hundreds of lines of assembly code. A variety of Windows-specific and compiler-specific tricks are used to achieve this, which will be described in the source code.
  • Reducing checks for errors which are unlikely to happen. Given a properly configured, and trusted, set of input data, instructions such as vmx_vmwrite and vmx_vmread should never fail, for example.
  • Removing support for x86, which complicates matters and causes special handling around 64-bit fields.
  • Expressly reducing all possible VM-Exits to only the Intel architecturally defined minimum (CPUID, INVD, VMX Instructions, and XSETBV). This is purposefully done to keep the hypervisor as small as possible, as well as the initialization code.
  • No support for VMCALL. Many hypervisors use VMCALL as a way to exit the hypervisor, which requires assembly programming (there is no intrinsic) and additional exit handling. SimpleVisor uses a CPUID trap instead.
  • Relying on little-known Windows functions to simplify development of the hypervisor, such as Generic DPCs and hibernation contexts.

Another implied goal was to support the very latest in hardware features, as even Bochs doesn’t always have the very-latest Intel VMX instructions and/or definitions. These are often found in header files such as “vmcs.h” and “vmx.h” that various projects have at various levels of definition. For example, Xen master has some unreleased VM Exit reasons, but not certain released ones, which Bochs does have, though it doesn’t have the unreleased ones!

Finally, SimpleVisor is meant to be an educational tool — it has exhaustive comments explaining all logic behind each line of code, and specific Windows or Intel VMX tips and tricks that allow it to achieve its desired outcome. Various bugs or poorly documented behaviors are called out explicitly.


Because x64 Windows requires all drivers to be signed, you must test-sign the SimpleVisor binary. The Visual Studio project file can be set up to do so by using the “Driver Signing” options and enabling “Test Sign” with your own certificate. From the UI, you can also generate your own.

Secondly, you must enable Test Signing Mode on your machine. To do so, first boot into UEFI to turn off “Secure Boot”, otherwise Test Signing mode cannot be enabled. Alternatively, if you possess a valid KMCS certificate, you may “Production Sign” the driver to avoid this requirement.

To set up Test Signing Mode, you can use the following command:

bcdedit /set testsigning on

After a reboot, you can then setup the required Service Control Manager entries for SimpleVisor in the registry with the following command:

sc create simplevisor type= kernel binPath= ""

You can then launch SimpleVisor with

net start simplevisor

And stop it with

net stop simplevisor

You must have administrative rights for usage of any of these commands.



SimpleVisor is designed to minimize code size and complexity — this does come at a cost of robustness. For example, even though many VMX operations performed by SimpleVisor “should” never fail, there are always unknown reasons, such as memory corruption, CPU errata, invalid host OS state, and potential bugs, which can cause certain operations to fail. For truly robust, commercial-grade software, these possibilities must be taken into account, and error handling, exception handling, and checks must be added to support them. Additionally, the vast array of BIOSes out there, and different CPU and chipset iterations, can each have specific incompatibilities or workarounds that must be checked for. SimpleVisor does not do any such error checking, validation, and exception handling. It is not robust software designed for production use, but rather a reference code base.


Copyright 2016 Alex Ionescu. All rights reserved. 

Redistribution and use in source and binary forms, with or without modification, are permitted provided
that the following conditions are met: 
1. Redistributions of source code must retain the above copyright notice, this list of conditions and
   the following disclaimer. 
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions
   and the following disclaimer in the documentation and/or other materials provided with the distribution.


The views and conclusions contained in the software and documentation are those of the authors and
should not be interpreted as representing official policies, either expressed or implied, of Alex Ionescu.

