10 Years of Git: An Interview With Linus Torvalds

LibbyMC writes: Git will celebrate its 10th anniversary tomorrow. To celebrate this milestone, Linus shares the behind-the-scenes story of Git and tells us what he thinks of the project and its impact on software development. From the article: “Ten years ago this week, the Linux kernel community faced a daunting challenge: they could no longer use their revision control system, BitKeeper, and no other software configuration management (SCM) system met their needs for a distributed system. Linus Torvalds, the creator of Linux, took the challenge into his own hands and disappeared over the weekend to emerge the following week with Git. Today Git is used for thousands of projects and has ushered in a new level of social coding among programmers.”




Original URL: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/NRFoV3cTc9Y/10-years-of-git-an-interview-with-linus-torvalds


Gmail users hit by software glitch


The glitch hit people using Gmail and some of Google’s apps

Gmail users around the world saw errors and safety warnings over the weekend after Google forgot to update a key part of the messaging software.

Google said a “majority” of users were affected by the short-term software problem.

While people could still access and use Gmail, many saw “unexpected behaviour” because of the problem.

Many reported the errors via Twitter, seeking clarification from Google about what had gone wrong.

The error messages started appearing early on 4 April and hit people trying to send email messages from Gmail and some of the firm’s messaging apps.

The problems arose because Google had neglected to renew a security certificate for Gmail and its app services. The certificate helps the software establish a secure connection to a destination, so messages can be sent with little fear they will be spied upon.

Google’s own in-house certificate authority, known as the Google Internet Authority G2, administers the security certificates and other secure software systems for the search giant.

Information about the problem was posted to status pages Google maintains for its apps and email services.

In the status message, Google said the problem was “affecting a majority of users” who were seeing error messages. It added that the glitch could cause programs to act in “unexpected” ways.

The problem was resolved about two hours after it was first noticed.

The glitch comes soon after Google started refusing security certificates issued by the China Internet Network Information Center (CNNIC). Google said a security lapse by the CNNIC meant the certificates could no longer be trusted. CNNIC called the decision “unacceptable and unintelligible”.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/Eh38qEhUgQc/technology-32194202


Building a high performance SSD SAN - Part 1

Over the coming month I will be architecting, building and testing a modular, high performance SSD-only storage solution.

I’ll be documenting my progress / findings along the way and open sourcing all the information as a public guide.

With recent price drops and durability improvements in solid state storage, there has never been a better time to ditch those old magnets.

Modular server manufacturers such as SuperMicro have spent heavily on R&D, thanks to the ever growing requirements of the cloud vendors that utilise their hardware.

The State Of Enterprise Storage

Companies often settle for off-the-shelf, big-name storage products based on several, often misguided, assumptions:

  • That “enterprise” in the product name = reliability
  • That the blame for product / system failure can be outsourced
  • That vendors provide specialist engineers to support such complicated (and expensive) products
  • That a modular storage solution tailored to their needs would be too time consuming and costly to design and manage

At the end of the day we don’t trust vendors to design our servers – why would we trust them to design our storage?

A great quote on Wikipedia under ‘enterprise storage’:

“You might think that the hardware inside a SAN is vastly superior to what can be found in your average server, but that is not the case. EMC (the market leader) and others have disclosed more than once that “the goal has always been to use as much standard, commercial, off-the-shelf hardware as we can”. So your SAN array is probably nothing more than a typical Xeon server built by Quanta with a shiny bezel. A decent professional 1 TB drive costs a few hundred dollars. Place that same drive inside a SAN appliance and suddenly the price per terabyte is multiplied by at least three, sometimes even 10! When it comes to pricing and vendor lock-in you can say that storage systems are still stuck in the “mainframe era” despite the use of cheap off-the-shelf hardware.”

It’s the same old story: if you’ve got lots of money and you don’t care how you spend it, or about passing those savings on to your customers, then sure, buy the ticket, take the ride. Get a unit that comes with a flash logo, a 500 page brochure, licensing requirements and a greasy sales pitch.

Our Needs

Storage performance always seems to be our bottleneck at Infoxchange; we run several high-performance, high-concurrency applications with large databases and complex reporting.

We’ve grown (very) fast and, along the way, spent too much on off-the-shelf storage solutions. We are required to self-host most of our products securely, within our own control and on our own hardware, and we need the flexibility to meet current and emerging security requirements.

I have been working on various proof-of-concepts, which have led to our decision to proceed with our own modular storage system tailored to our requirements.

Requirements

  • Reliability above all else
    • SSD units must be durable
    • Network and iSCSI failover must be on-par with commercial products (if not better)
  • Multiple levels of provable redundancy
    • RAID
    • Cross hardware-replication
    • Easy IP and iSCSI failover using standard tools
  • 1RU rack height per unit
  • 100% SSD only – no spindles will be hurt in the making of this journey!
  • Each unit to provide up to 450,000 IOPS read performance on tier 1 storage
  • Provide up to 2.5GB/s read performance and 1.5GB/s write performance on tier 1 storage
  • Each unit to provide up to 400,000 IOPS read performance on tier 2 storage
  • Provide up to 1.2GB/s read performance and 1.2GB/s write performance on tier 2 storage
  • 20Gbit of redundant network connectivity per unit
  • Two tiers of SSD storage performance (PCIe & SATA)
  • Easily monitorable with standard tools
  • Use no proprietary RAID hardware
  • Come with 3 years of hardware warranty cover
  • Outperform all proprietary storage solutions costing twice the price or more
  • Deployable and manageable by any sysadmin and require no specialised storage administrators
  • Easily updatable for the latest security patches, features etc…
  • Highly customisable and easily upgradable to larger / faster storage in the future
  • Require significantly less energy and cooling than traditional storage units
  • Offer at-rest encryption if required
  • Cost less than $9.5K USD per node

Software

  • Operating System: Debian. Debian is our OS of choice; it has newer packages than RedHat variants and is incredibly stable.
  • RAID: mdadm. For SSDs, hardware RAID cards can often be their undoing; they simply can’t keep up and quickly become the bottleneck in the system. mdadm is mature and very flexible.
  • Node-to-Node Replication: DRBD.
  • NIC Bonding: LACP.
  • IP Failover: Pacemaker. We’ll probably also use a standard VM somewhere on our storage network for quorum.
  • Monitoring: Nagios.
  • Storage Presentation: Open-iSCSI.
  • Kernel: Latest stable (currently 3.18.7). Debian Backports currently has kernel 3.16; however, we do daily CI builds of the latest stable kernel source for certain servers, and this may be a good use case for them due to the SCSI bus bypass for NVMe introduced in 3.18+.

We’re going to start with a two node cluster. We want to keep rack usage to a minimum, so I’m going to go with a high density 1RU build.

The servers themselves don’t need to be particularly powerful, which will help us keep the costs down. Easily the most expensive components are the 1.2TB PCIe SSDs, but the performance and durability of these units can’t be overlooked. We’re also going to have a second performance tier constructed of high end SATA SSDs in RAID10. Of course, if you wanted to reduce the price further, the PCIe SSDs could be left out or purchased at a later date.

Hardware

  • Base Server: SuperMicro SuperServer 1028R-WTNRT. 2x 10GbE, NVMe support, dual PSU, dual SATA DOM support, 3x PCIe, 10x SAS/SATA HDD bays.
  • CPU: 2x Intel Xeon E5-2609 v3. We shouldn’t need a very high clock speed for our SAN, but it’s worth getting the newer v3 processor range for the sake of future proofing.
  • RAM: 32GB DDR4 2133MHz. Again, we don’t need that much RAM; it will be used for disk caching, and 32GB should be more than enough. It can easily be upgraded at a later date.
  • PCIe SSD: 2x 1.2TB Intel SSD DC P3600 Series (with NVMe). This is where the real money goes. The Intel DC P3600 and P3700 series really are top of the range; the critical thing to note is that they support NVMe, which will greatly increase performance, and they’re backed by a 5 year warranty. These will be configured in RAID-1 for redundancy.
  • SATA SSD: 8x SanDisk Extreme Pro SSD 480GB. The SanDisk Extreme Pro line is arguably the most reliable and highest performing SATA SSD on the market, backed by a 10 year warranty. These will be configured in RAID-10 for redundancy and performance.
  • OS SSD: 2x 16GB MLC DOM. We don’t need much space for the OS, just enough to keep vital logs and package updates. These will be configured in RAID-1 for redundancy.

Images: SuperMicro SuperServer 1028R-WTNRT (chassis and motherboard), 1.2TB Intel SSD DC P3600 Series, SuperMicro DOM, SanDisk Extreme Pro SSD 480GB.

AHCI vs NVMe

NVMe is a relatively new technology which I’m very interested in making use of for these storage units.

From Wikipedia:

“NVM Express has been designed from the ground up, capitalizing on the low latency and parallelism of PCI Express SSDs, and mirroring the parallelism of contemporary CPUs, platforms and applications. By allowing parallelism levels offered by SSDs to be fully utilized by host’s hardware and software, NVM Express brings various performance improvements.”

  • Maximum queue depth: AHCI has 1 command queue with 32 commands per queue; NVMe has 65536 queues with 65536 commands per queue.
  • Uncacheable register accesses (2000 cycles each): AHCI requires 6 per non-queued command and 9 per queued command; NVMe requires 2 per command.
  • MSI-X and interrupt steering: AHCI has a single interrupt with no steering; NVMe has 2048 MSI-X interrupts.
  • Parallelism and multiple threads: AHCI requires a synchronisation lock to issue a command; NVMe requires no locking.
  • Efficiency for 4 KB commands: AHCI command parameters require two serialised host DRAM fetches; NVMe gets command parameters in one 64 byte fetch.

NVMe and the Linux Kernel

Intel published an NVM Express driver for Linux. It was merged into the Linux kernel mainline on 19 March 2012, with the release of version 3.3 of the Linux kernel.

A scalable block layer for high-performance SSD storage, developed primarily by Fusion-io engineers, was merged into the Linux kernel mainline in kernel version 3.13, released on 19 January 2014. This leverages the performance offered by SSDs and NVM Express, by allowing much higher I/O submission rates. With this new design of the Linux kernel block layer, internal queues are split into two levels (per-CPU and hardware-submission queues), thus removing bottlenecks and allowing much higher levels of I/O parallelisation.

Note the following: as of version 3.18 of the Linux kernel, released on 7 December 2014, the VirtIO block driver and the SCSI layer (which is used by Serial ATA drivers) have been modified to actually use this new interface; other drivers will be ported in the following releases.

Debian, our operating system of choice, currently has kernel 3.16 available (using the official backports mirrors); however, we do generate CI builds of the latest stable kernel for specific platforms. If you’re interested in how we’re doing that, I have some information here.

That’s where I’m up to for now. The hardware will hopefully arrive in two weeks and I’ll begin the setup and testing.

Coming soon

  • Build experience / guide
  • Monitoring
  • Benchmarks
  • Failover configuration and testing
  • Software configurations (Including a Puppet module)
  • Ongoing experiences and application

Stay tuned!

[6/2/2015 – Sam McLeod]



Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/zho2i6r3mXs/


Ten Years of Git: An Interview with Linus Torvalds

Ten years ago this week, the Linux kernel community faced a daunting challenge: they could no longer use their revision control system, BitKeeper, and no other software configuration management (SCM) system met their needs for a distributed system. Linus Torvalds, the creator of Linux, took the challenge into his own hands and disappeared over the weekend to emerge the following week with Git. Today Git is used for thousands of projects and has ushered in a new level of social coding among programmers.

To celebrate this milestone, we asked Linus to share the behind-the-scenes story of Git and tell us what he thinks of the project and its impact on software development. You’ll find his comments in the story below. We’ll follow this Q&A with a week of Git in which we profile a different project each day that is using the revision control system. Look for the stories behind KVM, Qt, Drupal, Puppet and Wine, among others. 

Why did you create Git?

Torvalds: I really never wanted to do source control management at all and felt that it was just about the least interesting thing in the computing world (with the possible exception of databases ;^), and I hated all SCM’s with a passion. But then BitKeeper came along and really changed the way I viewed source control. BK got most things right and having a local copy of the repository and distributed merging was a big deal. The big thing about distributed source control is that it makes one of the main issues with SCM’s go away – the politics around “who can make changes.” BK showed that you can avoid that by just giving everybody their own source repository. But BK had its own problems, too; there were a few technical choices that caused problems (renames were painful), but the biggest downside was the fact that since it wasn’t open source, there were a lot of people who didn’t want to use it. So while we ended up having several core maintainers use BK – it was free to use for open source projects – it never got ubiquitous. So it helped kernel development, but there were still pain points.

That then came to a head when Tridge (Andrew Tridgell) started reverse-engineering the (fairly simple) BK protocol, which was against the usage rules for BK. I spent a few weeks (months? It felt that way) trying to mediate between Tridge and Larry McVoy, but in the end it clearly wasn’t working. So at some point I decided that I can’t continue using BK, but that I really didn’t want to go back to the bad old pre-BK days. Sadly, at the time, while there were some other SCM’s that kind of tried to get the whole distributed thing, none of them did it remotely well. I had performance requirements that were not even remotely satisfied by what was available, and I also worried about integrity of the code and the whole workflow, so I ended up just deciding to write my own.

How did you approach it? Did you stay up all weekend to write it or was it just during regular hours?

Torvalds: Heh. You can actually see how it all took shape in the git source code repository, except for the very first day or so. It took about a day to get to be “self-hosting” so that I could start committing things into git using git itself, so the first day or so is hidden, but everything else is there. The work was clearly mostly during the day, but there’s a few midnight entries and a couple of 2 a.m. ones. The most interesting part is how quickly it took shape; the very first commit in the git tree is not a lot of code, but it already did the basics – enough to commit itself. The trick wasn’t really so much the coding but coming up with how it organizes the data.

So I’d like to stress that while it really came together in just about ten days or so (at which point I did my first *kernel* commit using git), it wasn’t like it was some kind of mad dash of coding. The actual amount of that early code is actually fairly small; it all depended on getting the basic ideas right. And that I had been mulling over for a while before the whole project started. I’d seen the problems others had. I’d seen what I wanted to avoid doing.

Has it lived up to your expectations? How is it working today in your estimation? Are there any limitations?

Torvalds: I’m very happy with git. It works remarkably well for the kernel and is still meeting all my expectations. What I find interesting is how it took over so many other projects, too. Surprisingly quickly, in the end. There is a lot of inertia in switching source control systems;  just look at how long CVS and even RCS have stayed around, but at some point git just took over.

Why do you think it’s been so widely adopted?

Torvalds: I think that many others had been frustrated by all the same issues that made me hate SCM’s, and while there have been many projects that tried to fix one or two small corner cases that drove people wild, there really hadn’t been anything like git that really ended up taking on the big problems head on. Even when people don’t realize how important that “distributed” part was (and a lot of people were fighting it), once they figure out that it allows those easy and reliable backups, and allows people to make their own private test repositories without having to worry about the politics of having write access to some central repository, they’ll never go back.

Does Git last forever, or do you foresee another revision control system in another 10 years? Will you be the one to write it? 

Torvalds: I’m not going to be the one writing it, no. And maybe we’ll see something new in ten years, but I guarantee that it will be pretty “git-like.” It’s not like git got everything right, but it got all the really basic issues right in a way that no other SCM had ever done before.

No false modesty 😉

Why does Git work so well for Linux?

Torvalds: Well, it was obviously designed for our workflow, so that is part of it. I’ve already mentioned the whole “distributed” part many times, but it bears repeating. But it was also designed to be efficient enough for a biggish project like Linux, and it was designed to do things that people considered “hard” before git – because those are the things *I* do every day.

Just to pick an example: the concept of “merging” was generally considered to be something really quite painful and hard in most SCM’s. You’d plan your merges, because they were big deals. That’s not acceptable to me, since I commonly do tens of merges a day when in the merge window, and even then, the biggest overhead shouldn’t be the merge itself, it should be testing the result. The “git” part of the merge is just a couple of seconds; it should take me much longer just to write the merge explanation message.

So git was basically designed and written for my requirements, and it shows.

People have said that Git is only for super smart people. Even Andrew Morton said Git is “expressly designed to make you feel less intelligent than you thought you were.” What’s your response to this?

Torvalds: So I think it used to be true but isn’t any more. There are a few reasons people feel that way, but I think only one of them remains. The one that remains is fairly simple: “you can do things so many ways.”

You can do a lot of things with git, and many of the rules of what you *should* do are not so much technical limitations but are about what works well when working together with other people. So git is a very powerful set of tools, and that can not only be overwhelming at first, it also means that you can often do the same (or similar) things different ways, and they all “work.” Generally, the best way to learn git is probably to first only do very basic things and not even look at some of the things you can do until you are familiar and confident about the basics.

There’s a few historical reasons for why git was considered complicated. One of them is that it was complicated. The people who started using git very early on in order to work on the kernel really had to learn a very rough set of scripts to make everything work. All the effort had been on making the core technology work and very little on making it easy or obvious. So git (deservedly) had a reputation for requiring you to know exactly what you did early on. But that was mainly true for the first 6 months or a year.

The other big reason people thought git was hard is that git is very different. There are people who used things like CVS for a decade or two, and git is not CVS. Not even close. The concepts are different. The commands are different. Git never even really tried to look like CVS, quite the reverse. And if you’ve used a CVS-like system for a long time, that makes git appear complicated and needlessly different. People were put off by the odd revision numbers. Why is a git revision not “1.3.1” with nice incrementing numbers like it was in CVS? Why is it that odd scary 40-character HEX number?

But git wasn’t “needlessly different.” The differences are required. It’s just that it made some people really think it was more complicated than it is, because they came from a very different background. The “CVS background” thing is going away. By now there are probably lots of programmers out there who have never used CVS in their lives and would find the CVS way of doing things very confusing, because they learned git first.

Do you think the rate of Linux kernel development would have been able to grow at its current rate without Git? Why or why not?

Torvalds: Well, “without git,” sure. But it would have required that somebody else wrote something git-equivalent: a distributed SCM that is as efficient as git is. We definitely needed something *like* git.

What’s your latest opinion of GitHub?

Torvalds: Github is an excellent hosting service; I have nothing against it at all. Now, the complaint I’ve had is that GitHub as a development platform – making commits, pull requests, keeping track of issues etc – doesn’t work very well at all. It’s not even close, not for something like the kernel. It’s much too limited.

That’s partly because of how the kernel is developed, but part of it was that the GitHub interfaces were actively encouraging bad behavior: commits done on GitHub had bad commit messages, etc. They did fix some of that, so it probably works better now, but it will never be appropriate for something like the Linux kernel.

What is the most interesting use you’ve seen for Git and/or GitHub?

Torvalds: I’m just happy that it made it so easy to start a new project. Project hosting used to be painful, and with git and GitHub it’s just so trivial to do a random small project. It doesn’t matter what the project is; what matters is that you can do it.

Do you have side projects up your sleeve today? Any more brilliant software projects that will dominate software development for years to come?

Torvalds: Nothing planned. But I’ll let you know if that changes.

Atlassian is also helping to celebrate the anniversary of Git with a walk down memory lane.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/HqzY0rxidwY/821541-10-years-of-git-an-interview-with-git-creator-linus-torvalds


LG Accidentally Leaks Apple iMac 8K Is Coming Later This Year

An anonymous reader writes: LG accidentally revealed in a blog post that Apple is planning to release an 8K iMac later this year. This news comes as a surprise, as the leak came from a different company rather than from Apple itself. LG is one of Apple’s biggest display partners and has already demonstrated 8K monitors at CES in Las Vegas. They note that the panel boasts 16 times the number of pixels of a standard Full HD screen.




Original URL: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/a6l-36IPUlQ/lg-accidentally-leaks-apple-imac-8k-is-coming-later-this-year


Fiddle Is a Collaborative Markdown Tool for the Web

When it comes to collaborative editing, Google Docs is the go-to for most. But if you’re looking for something easy that doesn’t require a login, Fiddle is a simple Markdown editor for collaborative editing.





Original URL: http://feeds.gawker.com/~r/lifehacker/full/~3/6W3bb-kjDQE/fiddle-is-a-collaborative-markdown-tool-for-the-web-1696011142


Turning the Arduino Uno into an Apple II

April 6, 2015


Emulated Apple ][ Running On a Stock Arduino Uno.

I’ve always been fascinated by the early days of the computer revolution. Today we take tremendously powerful machines for granted but it was not always that way. As a personal project I decided to implement an early eighties era microcomputer on the Arduino Uno to demonstrate just how powerful even the most basic of our microcontrollers are today.

My microcomputer of choice was the Apple II. This was the computer responsible for making Apple Computer a household name; with over five million units sold, it was one of the most popular microcomputers of the era.

The Apple II was originally designed in 1977 by Steve Wozniak. In order to reduce costs and bring the computer into the mass consumer market, Steve made many unique design decisions that reduced the cost and complexity of the machine. One such goal was to drastically reduce the chip count of the machine.

The first Apple II machines featured 4 kilobytes of RAM that was shared with the video frame buffer. For a CPU it featured a MOS 6502 clocked at 1 MHz. It was capable of generating text video at a resolution of 40 columns by 24 rows, and it featured two graphics modes capable of indexed colour video at up to 280×192 pixels.

Original Apple II Microcomputer.

The MOS 6502 CPU was a rather revolutionary device in its own right. The 6502 was designed by the fledgling semiconductor manufacturer MOS Technology in 1975. The MOS Technology CPU project was headed by Chuck Peddle and three other ex-Motorola employees. They sought to produce low cost CPU designs for the broader consumer market, a revolutionary idea at the time, and one that led to them leaving Motorola.

When the 6502 was first released it was priced at $25 USD. At the time this was unheard of, being up to six times cheaper than the nearest competitors. Some people even thought that the low price had to be some form of scam. Ultimately the 6502 was no scam and went on to power many of the early microcomputers, including the Apple I/II and Commodore 64. The 6502 has been referred to as “the original RISC processor”. Its elegant instruction set and historical significance have led to the processor remaining relevant and revered even forty years on.

The first step for my project of emulating an Apple II was to emulate the 6502 processor. Emulating the 6502 is in itself a significant undertaking, so this phase took on the order of a week or two. Several of the emulator design decisions were made easily: it had to be written in ANSI C, it had to be memory/CPU efficient, it didn’t have to be cycle accurate, and it did not need to support BCD arithmetic (as Steve Wozniak never used it in his code).

The 6502 instruction set is remarkably simple: each opcode is a fixed 8 bits in length, and there are 56 different instructions and 13 address modes. Most instructions work with most address modes. This simple instruction/address mode relationship is referred to as instruction set orthogonality. This approach leads to a much simpler emulation strategy, as we can reuse the majority of the memory fetching and instruction decoding code.

The MOS 6502 Instruction Set.
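
To make that concrete, here is a minimal sketch of this decoding strategy. It is illustrative only, not the author’s actual code; the state layout and helper names are my own assumptions.

#include <stdint.h>

/* Illustrative 6502 decode sketch: resolve the addressing mode first,
   then apply the operation. Orthogonality means the same mode helpers
   are shared by many instructions. */
static uint8_t mem[65536];
static struct { uint16_t pc; uint8_t a, flags; } cpu;

static uint8_t read8(uint16_t addr) { return mem[addr]; }

static void update_nz(uint8_t v) { /* N = bit 7, Z = bit 1 */
  cpu.flags = (cpu.flags & ~0x82) | (v & 0x80) | (v == 0 ? 0x02 : 0);
}

static uint16_t addr_zeropage(void) { return read8(cpu.pc++); }
static uint16_t addr_absolute(void) {
  uint16_t lo = read8(cpu.pc++), hi = read8(cpu.pc++);
  return lo | (hi << 8);
}

static void op_lda(uint16_t a) { cpu.a = read8(a); update_nz(cpu.a); }
static void op_ora(uint16_t a) { cpu.a |= read8(a); update_nz(cpu.a); }

void step(void) {
  switch (read8(cpu.pc++)) {
    case 0xA5: op_lda(addr_zeropage()); break; /* LDA $nn   */
    case 0xAD: op_lda(addr_absolute()); break; /* LDA $nnnn */
    case 0x05: op_ora(addr_zeropage()); break; /* ORA $nn   */
    case 0x0D: op_ora(addr_absolute()); break; /* ORA $nnnn */
    /* ...the remaining opcodes follow the same two-helper pattern... */
  }
}

Each new instruction or addressing mode then costs one small helper plus a handful of one-line cases.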

Before starting the task of writing my 6502 emulator I took a slightly atypical approach. For programming tasks where you have a predefined input/output relationship it can be handy to use test driven development. In this case test driven development involves first writing a test in 6502 assembly that you expect to perform some action, then running it and observing the output. If the output matches what you expect (from data sheets etc.) then your code for that action is correct and functional.

In this case I wrote an exhaustive test for each of the 6502’s opcodes and connected this to my emulator code. By running the test routines I could ensure my code was 6502 compatible throughout the development process. It also helped highlight unimplemented functionality.
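
As a sketch of what one of these tests might look like (the helper names, flag constants and load address here are hypothetical; the author’s real test suite is linked at the end of the post):

#include <assert.h>
#include <stdint.h>

/* Hypothetical test: execute LDA #$00 and check the result against the
   behaviour described in the data sheet. */
void test_lda_immediate_sets_zero_flag(void) {
  uint8_t prog[] = { 0xA9, 0x00 };           /* LDA #$00 */
  cpu_reset();
  load_program(0x0600, prog, sizeof prog);   /* place the test at $0600 */
  step();                                    /* run one instruction */
  assert(cpu.a == 0x00);                     /* accumulator loaded */
  assert(cpu.flags & FLAG_Z);                /* zero flag set */
  assert(!(cpu.flags & FLAG_N));             /* negative flag clear */
}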

I originally wrote the emulation code as a small C application on my OSX box. Running it locally allowed me to quickly test and make changes. My final design used a simple switch statement to decode instructions and a collection of operand decoding utilities. The goal was to keep it as simple as practical.

I decided to use a switch statement for instruction decoding due to it being easy for the C compiler to optimise into an efficient jump table. I could have used an array of function pointers, which would likely have been optimised as well (and is possibly more readable); however, this would have required some navigating around the AVR memory model, and it would have risked the overhead of call/return instructions.

Completing the emulator was a case of reading the 6502 programming guide and consulting the plentiful resources on 6502 emulation. After completing my first compatible build I decided to test the emulator against some of the original Apple firmware. Sadly I was plagued with strange bugs that made no sense. In a pinch I found the source code of another emulator and decided to wire it into my processor unit tests. It turned out I hadn’t properly understood the x-indexed, indirect addressing mode and my unit tests were broken. It was a seriously frustrating one line fix.

The original Apple II firmware, totalling 12 kilobytes, was stored on six 2 kilobyte socketed ROMs. These memory chips were mapped into the program address space between $D000 and $FFFF. Originally only four of the ROM sockets were populated.

Apple II Firmware:

  • $F800-$FFFF System Monitor (Hardware Routines).
  • $F689-$F7FC Sweet-16 Interpreter (Virtual Machine).
  • $F500-$F63C Mini-Assembler.
  • $E000-$F424 Integer Basic.

The system monitor program functions as a sort of simple shell. It includes functionality that allows you to manipulate memory contents, trace/debug programs, and execute memory locations. It also includes a large number of hardware routines, such as initialising memory, reading characters from the keyboard, displaying characters on the screen, and saving/loading programs. When the Apple II first starts it loads a reset vector from $FFFC which points to the beginning of the system monitor program.
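
In emulator terms the power-on path is tiny; a sketch, reusing the read8 helper and cpu state from the sketch above:

/* On reset the 6502 loads a 16-bit little-endian vector from
   $FFFC/$FFFD and begins executing there; on the Apple II that vector
   points at the system monitor. */
void cpu_reset(void) {
  cpu.pc = read8(0xFFFC) | ((uint16_t)read8(0xFFFD) << 8);
}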

Integer BASIC was a handwritten BASIC interpreter written by Steve Wozniak. It was syntax compatible with HP BASIC and used 16 bit signed numbers for math operations. Steve had originally intended to implement floating point math; however, in order to save several weeks of development time, he released it in integer only mode. This was the primary software users encountered when using their Apple II. In fact, later models booted directly into BASIC.

The original Apple II shipped with 4 kilobytes of DRAM memory; of this, approximately 1 kilobyte was dedicated to the text video frame buffer. This left 3 kilobytes of general purpose memory. The first 768 bytes of this memory were shared between system monitor variables in the first page of memory and the processor stack / input buffer.

Apple II RAM Memory Map:

  • $0000-$00FF Zeropage (System Monitor Variables).
  • $0100-$02FF Processor Stack / Line Input Buffer.
  • $0300-$03FF Free Space.
  • $0400-$07FF Text Video Buffer.
  • $0800-$0FFF Free Space.

The Apple II used a novel approach to video generation. At the time, most microcomputers used an interlaced frame buffer in which adjacent rows were not stored sequentially in memory; this made it easier to generate interlaced video. The Apple II took this approach one step further, using an 8:1 interlacing scheme, which placed the first line next to the ninth line in memory. This approach allowed Steve Wozniak to avoid read/write collisions with the video memory without additional circuitry. A very smart hack!

Apple II Video Memory Layout Demonstrating the 8:1 Interlacing Scheme.

As shown in my previous post on the GhettoVGA project, I designed a video interface for the Arduino Uno that uses the secondary USB interface IC to store and generate video. In the Arduino code it made no sense to keep the interlaced video mode; instead, my emulator decodes screen addresses and converts them into sequential memory locations. In order to save memory on the Arduino, storing the frame buffer is left to the secondary processor. This frees between 512 and 1024 bytes of memory.
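
For reference, the standard $0400 text page maps a row to its base address as base = $0400 + (row mod 8) × $80 + (row div 8) × $28. Here is a sketch of that mapping and of the inverse direction an emulator like this needs (illustrative, not the author’s code):

#include <stdint.h>

/* Base address of text row 0-23 under the 8:1 interleave: row 0 starts
   at $0400 and row 8 (the ninth line) follows it at $0428. */
uint16_t row_to_addr(uint8_t row) {
  return 0x0400 + (row & 7) * 0x80 + (row >> 3) * 0x28;
}

/* Inverse mapping, so characters can be stored sequentially. */
void addr_to_rowcol(uint16_t addr, uint8_t *row, uint8_t *col) {
  uint16_t off  = addr - 0x0400;
  uint8_t group = off / 0x80;        /* which of the 8 interleave groups */
  uint16_t rem  = off % 0x80;        /* offset inside the group */
  *row = (rem / 0x28) * 8 + group;   /* 3 rows of 40 characters per group */
  *col = rem % 0x28;
}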

Apple II Video Character Set Including Inverse And Flashing Modes.

The original Apple II supports a custom video character set that is loosely based on ASCII; however, it adds two custom modes: flashing and inverse text. In the original hardware this is implemented with discrete digital logic that essentially inverts the output of the video generator IC using an exclusive-or gate. The signal for the flashing text is generated from a simple clock divider and flashes at approximately 2 Hz.

Block Diagram Of The Apple II Video Interface Showing XOR On Output.

When I first presented my GhettoVGA project it was focused primarily on the ASCII character set. In order to improve efficiency and simplicity I converted it over to the Apple character set. Within the tight timing constraints of my AVR VGA generator it was not possible to implement the full inverse mode; however, it proved possible to generate flashing characters.

This was achieved by exclusive-or’ing the character lookup address with $80, which had the effect of toggling character lookups above and below the $80 boundary. By keeping remapped normal mode characters constant, but toggling between inverse and non-inverse for flashing characters, I was able to achieve flashing text with very little CPU time. The clock for the flashing text came from dividing the frame counter.

unsigned character_lookup[256];
volatile unsigned char bitmask = 0x00;
// Every second this changes the MSB of bitmask
void asynchronous_thread() {
    for(;;) {
      bitmask |= 0x80;
      delay(1000);
      bitmask = 0;
      delay(1000);
   }
}
// Main character drawing routine
void main() {
   character_lookup[0x01] = NONINVERTEDCHAR;
   character_lookup[0x81] = INVERTEDCHAR;
   for(;;) {
      unsigned char character = 0x01;
      // perform the exclusive-or
      character ^= bitmask;
      putchar(character_lookup[character]);
    }
}

Pseudocode Showing The Software Implementation Of XOR Flashing.

Historically, keyboard input hasn’t been subject to the same degree of standardisation as ASCII, and thus the Apple II uses a custom keyboard protocol. The keyboard itself is based on a modified QWERTY layout. As for sourcing a keyboard for this project, I decided to use an old PS/2 device.

PS/2 is extremely easy to interface with an Arduino: it uses 5 volt TTL logic and a synchronous serial protocol. PS/2 outputs are open collector, which means you must use a pull-up resistor. However, the Arduino has internal pull-ups on many pins which can easily be used.

PS/2 Keyboard Timing Diagram.

PS/2 packets are 11 bits in length, consisting of a fixed start bit (LOW), 8 data bits, a parity bit, and a fixed stop bit (HIGH). Data is transferred least significant bit first. The parity bit is used to ensure transmission occurred properly; in my project I chose not to implement parity checking. Listening for data bits relies on monitoring the clock line. One could poll for state changes; however, the Arduino provides interrupt-on-change capability on several pins, which is vastly superior.

PS/2 keyboards use an interesting protocol for communicating key presses/releases. When a key is pressed, the keyboard sends a scan code corresponding to the key press. When a key is released, the keyboard first sends the byte $F0 and then the scan code value. It’s up to the host to track modifier keys, etc. For extra fun, the scan codes themselves are mostly random, being a product of the key matrix.

Default PS/2 Keyboard Scan Codes.

The only realistic way of mapping PS/2 scan codes to Apple keyboard codes is through the use of a lookup table. The Apple II side is much simpler: it lacks a key-up command, modifiers are processed within the keyboard hardware, and the Apple II acknowledges key reads by clearing the uppermost bit of the keyboard register. The keyboard handling code for my project is shown below; you can modify the scan code lookup table for easy ASCII decoding.
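
To make the protocol concrete, here is a minimal sketch of an interrupt-driven PS/2 receiver with the $F0 break-code handling described above. The pin choices are assumptions on my part; the real, complete decoder is linked just below.

#include <Arduino.h>

#define PS2_CLOCK_PIN 2   /* must support external interrupts */
#define PS2_DATA_PIN  4

volatile uint8_t scancode = 0;        /* last key-press scan code */

/* Falling-edge ISR: shift in start bit, 8 data bits (LSB first),
   parity and stop bit. Parity is ignored, as in the article. */
void ps2_clock_isr(void) {
  static uint16_t raw = 0;
  static uint8_t bits = 0, breakcode = 0;
  raw |= (uint16_t)digitalRead(PS2_DATA_PIN) << bits;
  if (++bits == 11) {
    uint8_t code = (raw >> 1) & 0xFF;        /* drop the start bit */
    raw = 0; bits = 0;
    if (code == 0xF0)    breakcode = 1;      /* next byte is a release */
    else if (breakcode)  breakcode = 0;      /* discard the release */
    else                 scancode = code;    /* a real key press */
  }
}

void setup(void) {
  pinMode(PS2_CLOCK_PIN, INPUT_PULLUP);      /* open collector bus */
  pinMode(PS2_DATA_PIN, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(PS2_CLOCK_PIN),
                  ps2_clock_isr, FALLING);
}

void loop(void) {
  /* translate scancode to an Apple key code via a lookup table here */
}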

Arduino PS/2 Keyboard Decoder: keyboard.c

For nonvolatile storage the Apple II originally shipped with a cassette interface. The idea was that you would plug the cassette interface into the earphone and recording jacks of a standard cassette player. Data was stored using a simple Frequency Shift Keying scheme at approximately 1500 baud.

Captured Audio Demonstrating Apple II FSK Modulation.

The idle state of the cassette interface was a 770 Hz square wave. Data began with a 200 us sync signal, followed by a series of bits: either one full cycle of a 1 kHz square wave (HIGH) or one full cycle of a 2 kHz square wave (LOW). It was the responsibility of the host to keep track of the number of bytes shifted in and terminate the read when appropriate. For most encoding schemes the Apple II saved two records of data to tape: the first record contained a two byte data length indicator and the second record contained the data itself.

Schematic Of Zero Crossing Detector Used In Apple II’s Cassette Interface.

Tape data is detected using an incredibly simple circuit built around a single 741 op-amp. The headphone jack of the cassette player is passed through an inverting zero crossing detector with approximately 100 mV of hysteresis.

The circuit acts as a sort of comparator: when the input signal is less than -100 mV the op-amp’s output is driven high, and when the input signal is greater than +100 mV the output is driven low (-4 V). R29 limits the maximum output current of the op-amp, and the input clamping diode clamps the signal to approximately TTL levels (0 to 4 V).

Effect Of A Zero Crossing Detector On A Sinusoidal Input Signal.

The output of the zero crossing detector is made available as a software register. By using a carefully timed loop and looking for the pin toggling, one can detect the incoming frequency and hence extract the data. It’s an incredibly elegant approach.

Schematic Of Flip Flop Based Audio Generators Used In Apple II.

Tape data is also written out using an incredibly simple approach: the Apple II uses a 74LS74 flip flop to generate tape and audio signals. Writing to a register address causes the flip flop to change state, essentially toggling its output. By using a carefully timed loop you can toggle the flip flop at the desired frequency and generate an audio signal. R18 and R19 act as a voltage divider to limit the output.

The speaker uses the same approach; however, in order to drive the low impedance load of a speaker, a Darlington transistor was used to provide high current gain.

Early on in the design process I chose not to implement a cycle accurate 6502 CPU; it added extra complexity, and on the AVR any extra speed I could get was a huge bonus. However, the lack of cycle accurate instruction timing makes keeping the tight timing loops needed for signal generation and decoding impossible.

In order to avoid these complexities I decided to implement the tape encoding/decoding in the native AVR instruction set and implement hooks that would interrupt the 6502’s execution of the system monitor routines. This gave me a huge amount of flexibility in my decoding approach.
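
A sketch of how such a hook might fit into the main emulation loop, assuming the documented monitor tape entry points ($FECD for WRITE, $FEFD for READ) and the monitor’s A1/A2 zero page pointers; the structure and helper names are my own, and the cassette routines it calls are the ones listed below:

/* Hypothetical PC hook: before dispatching an instruction, check whether
   the 6502 has reached a monitor cassette routine and service it
   natively on the AVR, skipping the cycle-exact timing loops. */
#define MON_WRITE 0xFECD   /* monitor tape WRITE entry */
#define MON_READ  0xFEFD   /* monitor tape READ entry  */

static uint16_t zp16(uint8_t a) { return mem[a] | (mem[a + 1] << 8); }

void emulate(void) {
  for (;;) {
    if (cpu.pc == MON_WRITE) {          /* monitor range is A1..A2,   */
      cassette_header(64);              /* kept at $3C/$3D and $3E/$3F */
      cassette_write_block(zp16(0x3C), zp16(0x3E));
      cpu.pc = pop16() + 1;             /* behave like an RTS */
      continue;
    }
    if (cpu.pc == MON_READ) {
      cassette_read_block(zp16(0x3C), zp16(0x3E));
      cpu.pc = pop16() + 1;
      continue;
    }
    step();                             /* normal instruction dispatch */
  }
}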

void cassette_header(unsigned short periods) {
  for(int i = 0; i < periods*128; ++i) { // Header Tone
    digitalWrite(SPEAKER_PIN, HIGH);
    delayMicroseconds(650);
    digitalWrite(SPEAKER_PIN, LOW);
    delayMicroseconds(650);
  }
  // Sync pulse, one half cycle at 2500hz and then 2000hz
  digitalWrite(SPEAKER_PIN, HIGH);
  delayMicroseconds(200);
  digitalWrite(SPEAKER_PIN, LOW);
  delayMicroseconds(250);
}

void cassette_write_byte(unsigned char val) {
    for(unsigned char i = 8; i != 0; --i) {
     digitalWrite(SPEAKER_PIN, HIGH);
     delayMicroseconds((val&_BV(i-1)) ? 500 : 250);  
     digitalWrite(SPEAKER_PIN, LOW);
     delayMicroseconds((val&_BV(i-1)) ? 500 : 250);
   }
}

void cassette_write_block(unsigned short A1, unsigned short A2) {
  unsigned char checksum = 0xFF, val = 0;
  for(unsigned short addr = A1; addr <= A2; ++addr) {
    val = read8(addr);
    cassette_write_byte(val);
    checksum ^= val;
  }
  cassette_write_byte(checksum);
  digitalWrite(SPEAKER_PIN, HIGH);
  delay(10);
  digitalWrite(SPEAKER_PIN, LOW);
}

float cassette_center_voltage = 512; //center voltage
boolean cassette_read_state() { //zero crossing detector
  static boolean zerocross_state = false;
  short adc = (analogRead(CASSETTE_READ_PIN) - (short)cassette_center_voltage); // get value
  cassette_center_voltage += adc*0.05f;  // bias drift
  // ~7mv hysteresis
  if(zerocross_state && adc < -7) zerocross_state = false;
  else if(!zerocross_state && adc > 7) zerocross_state = true;
  return zerocross_state;
}

short cassette_read_transition() {
  unsigned long start_time;
  static boolean last = false;
  boolean cur = last;
  // loop until state transition
  for(start_time = micros();cur == last;) cur = cassette_read_state();
  last = cur;
  //return duration of transition us
  return micros() - start_time;
}

boolean cassette_read_block(unsigned short A1, unsigned short A2) {
  short bitperiod;
  unsigned char val, checksum = 0xFF, datachecksum = 0x00;
  for(short i = 0; i < 64; ++i) cassette_read_transition(); //settle on header tone
  while(cassette_read_transition() > 300); //find sync
  cassette_read_transition(); //skip second cycle sync
  for(unsigned short addr = A1; addr <= A2; ++addr) {
    val = 0;
    for(unsigned char i = 8; i != 0; --i) { //read byte, MSB first
      bitperiod = (cassette_read_transition() + cassette_read_transition()) / 2;
      if(bitperiod > 300) val |= _BV(i-1);
    }
    write8(addr, val); // write byte
    checksum ^= val; //checksum
  }
  for(unsigned char i = 8; i != 0; --i) { //read checksum
    bitperiod = (cassette_read_transition() + cassette_read_transition()) / 2;
    if(bitperiod > 300) datachecksum |= _BV(i-1);
  }
  return (datachecksum == checksum);
}

void cassette_begin() {
  // ADC prescale, 77khz
  sbi(ADCSRA,ADPS2);
  cbi(ADCSRA,ADPS1);
  cbi(ADCSRA,ADPS0);
  digitalWrite(CASSETTE_READ_PIN, HIGH); //internal pullup
  analogReference(INTERNAL); //1.1v ref
}

Apple II Cassette Demodulation Using Arduino.

The above code generates and decodes Apple II formatted tape cassettes. It uses a very similar algorithm to Steve Wozniak’s original approach, but it allows me to use the handy Arduino delay and time routines. This code could potentially be used for other projects; instead of using tape, I recorded the audio data to my mobile phone.

Analog Input Circuitry For The Arduino’s Cassette Port.

The analog audio interface on the Arduino is equally simple: all that is needed is a 4.7k resistor and a 10uF capacitor. In order to increase the input sensitivity of the Arduino’s ADC I configured it to use its internal 1.1 volt reference. I then enabled the internal pull-up on the ADC pin, which allowed me to use a single biasing resistor and capacitor.

At this point I had everything I needed in order to boot up an emulated Apple II. At first I configured the emulator code to write characters to the serial port. This proved highly successful and paved the way for further experiments.

Performance Measurement

Device      Instructions    Microseconds
Real        10,000          ~30,000
Emulated    10,000          192,000

Approximately 5 – 8x slower than the MOS 6502 clocked at 1 MHz.

The Atmega328p on the Arduino Uno comes with 2 kilobytes of RAM, significantly less than the 4 kilobytes of the original Apple II. As established earlier, however, not all of that RAM was available for general use. Through some smart design the Arduino emulator provides 1.5 kilobytes of general purpose memory, leaving nearly 1 kilobyte for BASIC programs. This has proved sufficient to handle fairly complex programs, as demonstrated below.

Completed Apple II Emulator Showing The Complete Hardware.

Now that I had completed the hardware for the Apple II it was time to write some software. I needed a demo to run to prove the device was functional. After seeing a recent article on calculating the Mandelbrot set on early mainframe computers, I decided I would attempt to replicate the project using Integer BASIC.

The algorithm I chose is known as the escape time algorithm. It uses a repeating calculation to count the number of iterations required before the equation begins to diverge: points whose iteration count exceeds a threshold are considered to belong to the Mandelbrot set, and points that diverge earlier are not. It’s brute force, but it’s very simple and memory efficient.

1 DIM LINE$(31)
2 FOR PY=1 TO 15
3 FOR PX=1 TO 31
4 X=0
5 XT=0
6 Y=0
7 FOR I=0 to 11
8 XT = (X*X)/10 - (Y*Y)/10 + (PX-23)
9 Y = (X*Y)/5 + (10*PY - 75)/8
10 X = XT
11 IF (X/10)*X + (Y/10)*Y >= 400 THEN GOTO 15
12 NEXT I
13 LINE$(PX)="*" 
14 GOTO 16
15 LINE$(PX)=" " 
16 NEXT PX
17 PRINT LINE$
18 NEXT PY
19 END

Integer BASIC Program To Calculate The Mandelbrot Fractal.

Calculating the Mandelbrot set takes a few minutes on the emulated hardware. The end result is fairly neat considering it was done with 16 bit fixed point math on an Arduino Uno.

Displaying The Mandelbrot Fractal.

There are a couple of small bugs I’ve noticed and improvements to make. The keyboard code doesn’t reset its bit counter appropriately, so very occasionally (quite rarely) it’s possible to get the keyboard out of sync when resetting the machine; I need to implement a timeout. Also, I think there are some small bugs in the flashing character functionality. I swear I once saw a back-to-front flashing “R”, and I have absolutely no idea how that happened!

I’d love to add some of the graphics mode functionality, but I’ll need more memory for that! Linked below is the source code for the emulator and the video display firmware.

Arduino Apple ][ Sketch: APPLEII.zip
Video Generator Source/Hex: VGAApple.s / VGAApple.hex
Unit Tests For 6502 Processor: 6502tests.zip

Feel free to reach out / follow me. Always interested in people and opportunities!



Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/JGdZoK58_A0/turning-the-arduino-uno-into-an-apple


Proposal Deadline: Call for Host Law Schools – Legal Writing Institute – 2018

The Board of Directors of the Legal Writing Institute seeks proposals from schools interested in hosting the 2018 LWI Biennial Conference. As it has done for a number of years, the Board is working with a professional meeting planner, at no cost to LWI, to assist it in identifying possible hotel locations for the 2018 […]


Original URL: http://legalscholarshipblog.classcaster.net/2015/04/06/proposal-deadline-call-for-host-law-schools-legal-writing-institute-2018/

