Emudore is a C64 emulator, fully written from the scratch, using C++11 and SDL2



emudore is a Commodore 64 emulator,
fully written from the scratch and built on the top of the following technologies:

I have succesfully built and run emudore on Windows (Visual Studio 2015 required)
and Linux platforms and it should theoretically work on any platform supported by
SDL2 provided a C++11 compiler is available.

Long story short: to learn a bit more about computer architecture, graphics,
C++, etc.. while having some fun!

The Commodore 64 is regarded as one of the most epic 8-bit computers, it was actually
the first computer I ever laid hands on – thanks to my dad 🙂 – and it seemed like a
sound choice to write an emulator for.

Well, yeah, mostly…

The BASIC ROM runs just fine, and most simple programs run without issues, however, many
of the more advanced games written in ML do not yet play well due to unimplemented hardware
features, writing an emulator is a tough task and after all my goal wasn’t to write a full
perfect emulator but to learn in the process of making a simple one, here follow some facts
you might be interested in knowing about emudore’s implementation:

emulation is instruction-exact

That’s right, on every emulation cycle a single instruction is fetched and interpreted,
the number CPU cycles the instruction has taken to execute is used to synchronize the
rest of the chips in a C64 board.

Unfortunately, this is not the best approach, other emulators are able to execute a single
instruction over multiple iterations of their emulation loop and mimic more accurately
the behaviour of the real hardware, however, this is an easier approach to implement
and does not impede the emulation from being relatively accurate.

illegal opcodes not supported yet

As many other architectures the MOS 6510 features a number of undocumented opcodes,
most of these are thought to be unintended and usually perform a mix of other opcodes

NMOS 6510 Unintended Opcodes – PDF

Nevertheless, some of these unintended opcodes have proven to be useful and are often
used in games and demos, emudore will need to support these in the future if we ever
intend to emulate serious games and demos.


emudore has been written as a single-threaded program, everything (including graphics) is
handled within the same thread, again, this approach has possibly some drawbacks, especially
in terms of performance, but it greatly simplifies the architecture: things like
synchronization of the mainboard chips become easier to implement.

hardware acceleration and vertical refresh sync

The screen is refreshed once at the end of every frame, when the video raster reaches the
last visible scanline, this way we’re not constantly writing to the host video memory.

Also, to speed things up a bit I implemented hardware acceleration, we use an accelerated
renderer and streaming textures, unfortunately, we need to keep the rendered video frame
within emudore’s memory and upload the texture to the GPU on every frame, direct pixel-access
is not doable straight on the GPU memory.

I also implemented vertical refresh synchronization, at the end of every frame we check
whether we are ahead of time compared to a real C64 computer, if that’s the case we sleep
for a bit and wake up at the point a real C64 would have finished rendering the frame, this
effectively locks the screen refresh down to ~50Hz (PAL).

There are two main benefits of implementing vsync: it helps with performance since GPU
operations are costly and after all we don’t want to run at the limit of fps our GPU can
handle; also, and more importantly, by doing this we emulate the speed of the real C64
computer synce the CPU and other chips are synchronized and run within the same thread, if
we are on a fast computer visual effects won’t look accelerated and games become playable 🙂

VIC-II chip

The vic-ii is a relatively complex chip, my implementation is not yet complete whatsoever,
and certain features are more than likely still buggy, for now four out of the five official
graphic modes are supported:

  • standard character mode
  • multicolor character mode
  • standard bitmap mode
  • multicolor bitmap mode

Smooth scrolling, sprites and raster interrupts have also been implemented and badlines
are also emulated.

Some things that are left to implement include: sprite double height/width mode, sprite
collision interrupts, etc.

A simple approach was taken to emulate the raster beam drawing, pixels drawn to the
screen surface are computed at the end of each scan line, this might result in
certain graphic effects being badly emulated, bear in mind that timing is of the essence
in the c64, well-versed programmers master and exploit it to put together amazing
effects that otherwise wouldn’t be feasible.


Due to some of the aforementioned facts, expect things to fail, don’t even dream
the emulation is going to be pixel-exact, certain effects are likely to get badly
emulated, specially if you’re running things like a demo.

Indeed, for now it’s only suppported on Linux (debug) builds, you can grab a fresh
copy of radare from github

Then fire the emulator up and connect with radare:

r2 -a 6502 rap://localhost:9999//

For now radare can just read and modify memory, further support might be coming
down the line once this feature
gets implemented in radare.

Some pictures of radare in action:


I hope this feature will come in useful to retrodev reverse engineers 🙂

There are many features of the currently emulated chips that I haven’t implemented
yet, time permitting my plan is to keep working and learning about other aspects of
the C64 technology, also, at some stage I’d love to get the time to work on:

If you are running a Linux Debian-based distro:

sudo apt-get install g++ cmake libsdl2-dev

Then simply compile and run:

make relase
cd build

For the time being emudore can load PRGs, you can test:

./emudore assets/prg/monopole.prg

emudore can also type BASIC listings for you (special keys not supported yet):

./emudore assets/bas/10print.bas 
(then type RUN at the emulator window)


parallax mario pacman
hitmen montezuma ghostbusters

If you are interested in computer archeology, and particularly in the C64, among
others, the following resources came in handy to develop (and get inspiration) while
developing emudore:

I don’t think anybody would ever dare to use this for an actual useful purpose, but just in case,
the project is licensed under the Apache 2.0 license

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/hDvJczx_V-g/emudore

Original article

You Want to Build an Empire Like Google’s? This Is Your OS

You Want to Build an Empire Like Google’s? This Is Your OS

Google called it Borg, and for many years, it was among the company’s best-kept secrets. Borg ran just about everything within the company, including Google Search, Gmail, Google Maps, Google Docs, and any other Google service you can think of—not to mention the private services you and I never see. Basically, it provided a way of parceling […] The post You Want to Build an Empire Like Google’s? This Is Your OS appeared first on WIRED.

Original URL: http://www.wired.com/2016/04/want-build-empire-like-googles-os/

Original article

Fair use prevails as Supreme Court rejects Google Books copyright case

The Supreme Court on Monday declined (PDF) to hear a challenge from the Authors Guild and other writers claiming Google’s scanning of their books amounts to wanton copyright infringement and not fair use.

The guild urged the high court to review a lower court decision in favor of Google that the writers said amounted to an “unprecedented judicial expansion of the fair-use doctrine.” (PDF)

At issue is a June decision (PDF) by the 2nd US Circuit Court of Appeals that essentially said it’s legal to scan books if you don’t own the copyright. The Authors’ Guild originally sued Google, saying that serving up search results from scanned books infringes on publishers’ copyrights even though the search giant shows only restricted snippets of the work. The writers also claimed that Google’s book search snippets provide an illegal free substitute for their work and that Google Books infringes their “derivative rights” in revenue they could gain from a “licensed search” market.

The Supreme Court let stand the lower court opinion that rejected the writers’ claims. That decision today means Google Books won’t have to close up shop or ask book publishers for permission to scan. In the long run, the ruling could inspire other large-scale digitization projects.

Fair use is a concept baked into US copyright law, and it’s a defense to copyright infringement if certain elements are met. The US Copyright Office says the defense is decided on a case-by-case basis. “The distinction between what is fair use and what is infringement in a particular case will not always be clear or easily defined. There is no specific number of words, lines, or notes that may safely be taken without permission. Acknowledging the source of the copyrighted material does not substitute for obtaining permission,” the US Copyright Office says. There are, however, at least four factors that judges must consider when deciding fair use: the purpose of use, the nature of the copyrighted work, the amount and substantiality of the portion taken, and the effect of the use upon the potential market.

The Supreme Court did not comment in its order other than to say that Justice Elena Kagan did not participate.

Google, for its part, urged the justices to side against the writers because, in the end, their works would be more readily discovered. “Google Books gives readers a dramatically new way to find books of interest,” Google’s brief said. (PDF) “By formulating their own text queries and reviewing search results, users can identify, determine the relevance of, and locate books they might otherwise never have found.”

Unlike other forms of Google search, Google does not display advertising to book searchers, nor does it receive payment if a searcher uses Google’s link to buy a copy. Google’s book scanning project started in 2004. Working with major libraries like Stanford, Columbia, the University of California, and the New York Public Library, Google has scanned and made machine-readable more than 20 million books. Many of them are nonfiction and out of print.

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/Ol_yPGVddKk/

Original article

US hosts more malicious websites than any other country

According to a new report from German security company G DATA, more malicious websites were hosted in the US in 2015 than in any other country, originating around 57 percent of recorded attacks. China, Hong Kong, Russia and Canada are also major hosts of malware, though Europe is little in evidence, only Germany and Italy making the top seven and accounting for just six percent between them. Overall, the number of websites categorized as being ‘evil’ has risen by 45 percent, emphasizing how attacks from the web represent one of the biggest and growing threats facing computer users. It also shows… [Continue Reading]

Original URL: http://feeds.betanews.com/~r/bn/~3/J3ZGqVT54_s/

Original article

Apple Launches MacBook 2016 With Intel Skylake Processor, Longer Battery Life

Apple, on Tuesday, announced a refresh for its 12-inch MacBook laptop. The 2016 MacBook comes with an Intel Skylake processor — sixth-generation dual-core Intel Core M model, offering up to 1.3 GHz clock speed with Turbo Boost speeds of up to 3.1 GHz, faster 1866 MHz memory, and a ‘rose gold’ color variant. Apple assures 10 hours of wireless Web browsing time, or 11 hours of movie playback on a single charge. The new model will hit retail stores on Wednesday. It starts at $1,299 for the 256GB SSD and 8GB (up from 4GB) version, and goes all the way up to $1,599 for the top-of-the-line model which offers 512GB SSD. A couple of points: the first-generation MacBook didn’t fare well with reviewers and plenty of users alike. Second, today’s announcement also hints that the MacBook Air and the MacBook Pro lineups won’t be getting the Intel Skylake upgrade for at least a few more months. Which is really sad, because at present, they are running almost three-year-old processor and graphics chips. No wonder, Oculus executive made fun of Apple’s computers.

Share on Google+

Read more of this story at Slashdot.

Original URL: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/MW4Bo1d0SGM/apple-launches-macbook-2016-with-intel-skylake-processor-longer-battery-life

Original article

Note taking, hand or type?

Laptops are common in lecture halls worldwide. Students hear a lecture at the Johann Wolfang Goethe-University on Oct. 13, 2014, in Frankfurt am Main, Germany.

Thomas Lohnes/Getty Images

hide caption

toggle caption

Thomas Lohnes/Getty Images

As laptops become smaller and more ubiquitous, and with the advent of tablets, the idea of taking notes by hand just seems old-fashioned to many students today. Typing your notes is faster — which comes in handy when there’s a lot of information to take down. But it turns out there are still advantages to doing things the old-fashioned way.

For one thing, research shows that laptops and tablets have a tendency to be distracting — it’s so easy to click over to Facebook in that dull lecture. And a study has shown that the fact that you have to be slower when you take notes by hand is what makes it more useful in the long run.

In the study published in Psychological Science, Pam A. Mueller of Princeton University and Daniel M. Oppenheimer of the University of California, Los Angeles sought to test how note-taking by hand or by computer affects learning.

“When people type their notes, they have this tendency to try to take verbatim notes and write down as much of the lecture as they can,” Mueller tells NPR’s Rachel Martin. “The students who were taking longhand notes in our studies were forced to be more selective — because you can’t write as fast as you can type. And that extra processing of the material that they were doing benefited them.”

Mueller and Oppenheimer cited that note-taking can be categorized two ways: generative and nongenerative. Generative note-taking pertains to “summarizing, paraphrasing, concept mapping,” while nongenerative note-taking involves copying something verbatim.

And there are two hypotheses to why note-taking is beneficial in the first place. The first idea is called the encoding hypothesis, which says that when a person is taking notes, “the processing that occurs” will improve “learning and retention.” The second, called the external-storage hypothesis, is that you learn by being able to look back at your notes, or even the notes of other people.

Because people can type faster than they write, using a laptop will make people more likely to try to transcribe everything they’re hearing. So on the one hand, Mueller and Oppenheimer were faced with the question of whether the benefits of being able to look at your more complete, transcribed notes on a laptop outweigh the drawbacks of not processing that information. On the other hand, when writing longhand, you process the information better but have less to look back at.

For their first study, they took university students (the standard guinea pig of psychology) and showed them TED talks about various topics. Afterward, they found that the students who used laptops typed significantly more words than those who took notes by hand. When testing how well the students remembered information, the researchers found a key point of divergence in the type of question. For questions that asked students to simply remember facts, like dates, both groups did equally well. But for “conceptual-application” questions, such as, “How do Japan and Sweden differ in their approaches to equality within their societies?” the laptop users did “significantly worse.”

The same thing happened in the second study, even when they specifically told students using laptops to try to avoid writing things down verbatim. “Even when we told people they shouldn’t be taking these verbatim notes, they were not able to overcome that instinct,” Mueller says. The more words the students copied verbatim, the worse they performed on recall tests.

And to test the external-storage hypothesis, for the third study they gave students the opportunity to review their notes in between the lecture and test. The thinking is, if students have time to study their notes from their laptops, the fact that they typed more extensive notes than their longhand-writing peers could possibly help them perform better.

But the students taking notes by hand still performed better. “This is suggestive evidence that longhand notes may have superior external storage as well as superior encoding functions,” Mueller and Oppenheimer write.

Do studies like these mean wise college students will start migrating back to notebooks?

“I think it is a hard sell to get people to go back to pen and paper,” Mueller says. “But they are developing lots of technologies now like Livescribe and various stylus and tablet technologies that are getting better and better. And I think that will be sort of an easier sell to college students and people of that generation.”

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/-Zl9h8y6NAo/attention-students-put-your-laptops-away

Original article

6 Lesser Known Python Data Analysis Libraries

Posted on: 19-04-2016

Python offers a great environment and rich set of libraries to developers while working with data. There are tons of useful libraries out there for novice or experienced developers or analysts for helping out with processing or visualizing datasets. Some of the libraries are really popular and used by millions of developers, for example – Pandas, Numpy, Scikit-learn, NTLK etc. Some of the libraries are not so well known and turned out to be handy in my experience. This article introduces 6 such Python libraries when working with data. Readers might already be familiarized with some of them, but I hope this article still proves to be useful.


mrjob is an useful Python library that lets you write MapReduce jobs using Python. It lets you write your own Mapper and Reducer, run/test your MapReduce job on local environment and deploy on EMR or your own Hadoop cluster. It can be easily installed using pip install mrjob. mrjob is developed by Yelp and receive thousands of downloads every day. The Github page and the project page has lots of documentation that will help users set up their application quickly.


Working with and manipulating datetimes in Python is pain. If you have worked with Python’s default datetime library and multiple time zones in your application, you must have been frustrated at times. delorean makes your life easier by providing nicer abstractions over datetime and pytz. It has useful features for working with multiple timezones, normalizing timezones and shifting from one timezone to other. This package is maintained by Mahdi Yusuf and has great documentation on the project page.


Default sorted() method is efficient enough and will serve for most of your sorting needs. But if you ever had to sort a list like ['a2', 'a9', 'a1', 'a4', 'a10'], you will either have to roll out your own solution or would like to look for external libraries. Thankfully, natsort is available for rescue. It helps you in sorting a list of strings that also contains integers. Normal Python sorting would sort the list lexicographically but that may not be something you would want. natsort provides a method natsorted(), that lets you sort lists like this. Also, you can mix and match integers, float and strings when you sort. Official project page has more details on usage and documentation. This may be required for special cases, but definitely handy when you end up needing it.


Not always you need a huge multi-node database with running daemons for your application. TinyDB is a small document oriented database that will let you insert JSON documents in a local file and query on that. It has 1200 lines of code including documentation with simple and clean APIs. Although it lacks a lot of features like multithreading or data indexing, this should suffice if you are looking a small, hassle-free database for your small projects without the overhead of setting it up or configuring it. It can be installed using pip install tinydb. Refer to this link for documentation and usage information.


PrettyTable lets you draw beautiful ASCII tables on console from multiple data sources. This is extremely useful when you are pretty printing tabular data on terminal. It has options to select specific columns to display, sort columns, align each column to left or right or print the table in MS-WORD friendly format or as a HTML table. PrettyTable can use existing data sources like CSV file or a database cursor. I have used this package on regular basis for past few years and liked it so much that I ported this as a Node module here. The original source code is hosted on Google Code here. The project readme is also available on a mirror Github repository.


Vincent is a cool visualization tool that takes Python data structures and translates them into Vega visualization grammar that works on top on d3js. This lets you create beautiful d3-based visualizations right out of your Python scripts. Vincent uses Pandas dataframes under the hood and currently supports wide range of visualizations – bar, line, scatter, area, stacked bar, grouped bar, pie/donut, map etc. The APIs are simple and coupled with other data analysis tools, Vincent allows you to make beautiful iPython notebooks. The Github page has some examples and the project documentation is available here.

Powered by minni

Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/5Hm-AzRY_Ao/python_libraries.html

Original article

Proudly powered by WordPress | Theme: Baskerville 2 by Anders Noren.

Up ↑

%d bloggers like this: