MuseScore 2.0 Released

rDouglass writes: MuseScore, the open source desktop application for music notation, has released version 2.0 for Linux, Mac OS X, and Windows. This release represents the culmination of four years of development, including technical contributions from over 400 people. In addition to a completely new UI, top features include linked parts (good for pieces with many instruments), guitar tablature, flexible chord symbols, and fret diagrams. The program integrates directly with the online library of scores, and music written with the application can be displayed and played using the MuseScore mobile app.




Ubuntu Remote Desktop with X2Go

This tutorial explains the installation and usage of X2Go on Ubuntu. X2Go is a remote desktop application for accessing X desktop environments over a network connection. It is well suited to low-bandwidth connections, supports sound via PulseAudio, and allows desktop sharing. The application has two parts: the x2goclient for the client side, and the x2goserver, which has to be installed on the desktop system that is to be accessed. The X2Go client software is available for Windows, Mac OS X, and Linux.


Meet the White House’s new open source-happy IT director

Who is this guy?

The White House has plucked 28-year-old David Recordon, engineering director at Facebook, as its first IT director. A strong open source advocate with a decidedly non-button-down appearance, Recordon will be charged with modernizing the White House’s technology. Here’s a closer look at one of our newest public servants…



Turn a Raspberry Pi 2 Into a Cheap, DIY SteamBox for In-Home Streaming

The new Raspberry Pi 2 has a lot of potential, but when combined with Steam’s In-Home Streaming, it becomes a super-affordable Steam Box capable of easily streaming your games from your PC to the big screen. On their new show, Possibly Unsafe, Patrick Norton and Michael Hand show you how to set yours up from start to finish.



Facebook Engineering Tool Mimics Dodgy Network Connectivity

itwbennett writes: Facebook has released an open source application called Augmented Traffic Control that can simulate the connectivity of a cell phone accessing an app over a 2G, Edge, 3G, or LTE network. It can also simulate weak and erratic WiFi connections. The simulations can give engineers an estimate of how long it would take a user to download a file, for instance, given varying network connections. It can help engineers re-create problems that crop up only on very slow networks.




GitLab 7.9 released

Marin Jankovski

Mar 22nd, 2015

GitLab is open source software to collaborate on code.
Today we announce the release of a new version of GitLab Community Edition (CE) and GitLab Enterprise Edition (EE), with new features, usability and performance improvements, and bug fixes.
This is the biggest release of GitLab ever. This release alone contains over 70 entries in the GitLab CE changelog and more than 800 commits!
The biggest new features in Community Edition are the Bitbucket importer, an unsubscribe button, and the ability to drag and drop any file type into issue and merge request Markdown.
In addition to the updates from Community Edition, GitLab Enterprise Edition has gained group level webhooks.

This month’s Most Valuable Person (MVP) is Stan Hu, for contributing a number of features and fixes to GitLab Community Edition and the omnibus-gitlab project.
Thanks Stan!



Full-Text Search in JavaScript

On March 22, 2015

Full-text search, unlike most of the topics in this machine learning series, is a problem that most web developers have encountered at some point. A client asks you to put a search field somewhere, and you write some SQL along the lines of WHERE title LIKE %:query%. It’s convincing at first, but then a few days later the client calls you and claims that “search is broken!”

Of course, your search isn’t broken, it’s just not doing what the client wants. Regular web users don’t really understand the concept of exact matches, so your search quality ends up being poor. You decide you need to use full-text search. With some MySQL fidgeting you’re able to set up a FULLTEXT index and use a more evolved syntax, the “MATCH() … AGAINST()” query.

Great! Problem solved. For smallish databases.

As you hit the hundreds of thousands of records, you notice that your database is sluggish. MySQL just isn’t great at full-text search. So you grab ElasticSearch, refactor your code a bit, and deploy a Lucene-driven full-text search cluster that works wonders. It’s fast and the quality of results is great.

Which leads you to ask: what the heck is Lucene doing so right?

This article (on TF-IDF, Okapi BM-25, and relevance scoring in general) and the next one (on inverted indices) describe the basic concepts behind full-text search.


It would be convenient to be able to define a “relevance score” that relates a document to a search query. And then, when users search for things, we can sort by the relevance score instead of sorting chronologically. That way the most relevant documents come up on top, regardless (or maybe not) of how old they are.

There are many, many ways to relate one text to another, but let’s start simple and use a statistics-based approach that doesn’t need to understand language itself, but rather looks at the statistics of word usage and matches and weighs documents based on the prevalence of their unique words.

This algorithm doesn’t care about verbs or nouns or even meaning. All it cares about is the simple fact that there are common words and there are rare words: if your search phrase includes both, you’re better off ranking the documents containing the rare words higher and giving less weight to the matched common words.

The algorithm we’ll use is called Okapi BM25, but it builds on two basic concepts: term frequency (“TF”) and inverse document frequency (“IDF”). Together, these concepts form “TF-IDF”, which is a statistical measure that represents how important a term is to a document.


Term Frequency, abbreviated “TF”, is a simple metric: it’s the number of times a certain word appears in a document. You can also represent it as the fraction of the number of times a word appears over the total number of tokens (ie, total words) in a document. Term frequency says “I’m 100 words long and ‘the’ shows up 8 times, so the term frequency of ‘the’ is 8 or 8/100 or 8%” (depending on the representation you want).
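
As a quick sketch of the idea (an illustrative helper function, not part of the article’s BM25 class):

```javascript
// Term frequency: how often `term` occurs in a token list, expressed
// as a fraction of the document's total token count.
function termFrequency(term, tokens) {
    var count = 0;
    for (var i = 0; i < tokens.length; i++) {
        if (tokens[i] === term) { count++; }
    }
    return count / tokens.length;
}

// 'the' appears 2 times in a 6-token document, so its TF is 2/6.
var tf = termFrequency('the', 'the cat sat on the mat'.split(' '));
```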

Inverse Document Frequency, abbreviated “IDF”, is more evolved: the rarer a word is, the higher this value. It’s the log ratio of the number of total documents over the number of documents a term appears in. Rarer words, therefore, yield bigger “IDF”s.

If you multiply these two numbers together, (TF*IDF), you’ll get the importance of a word to a document. “Importance” being defined as “how rare is this word and how often does it appear in this document?”

You can then use this concept to relate a document to a search query. For each term in a search query, calculate its TF-IDF score, add them all up, and whichever document has the highest score is your winner.
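
A minimal sketch of that naive TF-IDF ranking over a made-up mini-corpus (the function names and corpus here are hypothetical, for illustration only):

```javascript
// Inverse document frequency over a small corpus of token lists:
// the rarer the term, the larger the value.
function idf(term, docs) {
    var n = docs.filter(function(d) { return d.indexOf(term) !== -1; }).length;
    return n === 0 ? 0 : Math.log(docs.length / n);
}

// Naive TF-IDF relevance: sum TF * IDF over all query terms.
function score(queryTerms, doc, docs) {
    return queryTerms.reduce(function(total, term) {
        var tf = doc.filter(function(t) { return t === term; }).length / doc.length;
        return total + tf * idf(term, docs);
    }, 0);
}

var corpus = [
    'the cat sat on the mat'.split(' '),
    'the dog chased the cat'.split(' '),
    'ships sail on the sea'.split(' ')
];
// 'the' appears in every document, so its IDF (log 3/3) is zero and it
// contributes nothing; 'cat' is rarer, so it favors the first two documents.
```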

Cool? Cool.

Okapi BM25

The algorithm described above is okay but not wonderful. It does provide us with statistically-derived relevance scores, but it could be improved.

Okapi BM25 is considered a state-of-the-art ranking algorithm (so says ElasticSearch). The major improvements that Okapi BM25 bring over TF-IDF are two tunable parameters, called k1 and b, that modulate “term frequency saturation” and “field-length normalization”. What?

To intuit term frequency saturation, imagine two documents of roughly the same length that both talk about baseball. Imagine that the overall corpus doesn’t have much to do with baseball, so the IDF of the term “baseball” is pretty high: it’s a rare and important-ish word. These two documents both talk about baseball, and talk about it a lot, but one of them uses the term “baseball” way more. Should that document really show up that much higher in the rankings? Both documents talk about baseball a hefty amount, and at a certain point it shouldn’t really matter whether you use the word “baseball” 40 times or 80 times. Anything above 30 is enough!

This is “term frequency saturation.” The naive TF-IDF algorithm doesn’t saturate, so the document that uses “baseball” 80 times will have twice the score as the one that uses it 40 times. Sometimes that’s desired, sometimes it’s not.

Okapi BM25, on the other hand, has a parameter called “k1” that lets you tune how quickly term frequency saturates. The parameter k1 is usually set between 1.2 and 2.0. Lower values result in quicker saturation (meaning that the two documents above will have similar scores, because they both have a significant number of “baseball”s).

Field-length normalization considers the length of the document and normalizes against the average length of all documents. It’s useful in single-field collections (like ours) to put documents of differing lengths on the same playing field. It’s doubly useful in multiple-field collections (like “title” and “body”) in putting the title and body fields on the same playing field as well. The term “b” is ranged from 0 to 1, with 1 giving full normalization and 0 giving no normalization.
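
To make the effect of k1 and b concrete, here’s a sketch of the per-term BM25 contribution (this uses the standard BM25 formula; the helper name and numbers are mine, for illustration):

```javascript
// BM25 contribution of one term to one document's score:
// idf * (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * docLength / avgDocLength))
function bm25Term(idf, tf, docLength, avgDocLength, k1, b) {
    var denom = tf + k1 * (1 - b + b * docLength / avgDocLength);
    return idf * (tf * (k1 + 1)) / denom;
}

// Saturation in action: with k1 = 1.2 and b = 0.75, doubling tf from
// 40 to 80 raises the score only slightly, nowhere near the 2x that
// raw TF-IDF would give.
var low  = bm25Term(2.0, 40, 100, 100, 1.2, 0.75);
var high = bm25Term(2.0, 80, 100, 100, 1.2, 0.75);
```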

The Algorithm

You can see the formula for the Okapi BM25 algorithm on the Okapi BM25 Wikipedia page. Now that you know what each of the terms is, it should be pretty straightforward to understand, so we won’t dive into the equation here. Let’s dive into code:

BM25.Tokenize = function(text) {
    text = text
        .toLowerCase()
        .replace(/\W/g, ' ')
        .replace(/\s+/g, ' ')
        .trim()
        .split(' ')
        .map(function(a) { return stemmer(a); });

    // Filter out stopStems
    var out = [];
    for (var i = 0, len = text.length; i < len; i++) {
        if (stopStems.indexOf(text[i]) === -1) {
            out.push(text[i]);
        }
    }

    return out;
};

We define a simple Tokenize() static method whose purpose is to parse a string into an array of tokens. Along the way, we lower-case all the tokens (to reduce entropy), we run the Porter Stemmer Algorithm to reduce the entropy of the corpus and also to improve matching (so that “walking” and “walk” match the same), and we also filter out stop-words (very common words) to further reduce entropy. I’ve written about all these concepts in-depth previously, so please excuse me if I’m glossing over this section. :)
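
If you’re curious what that pipeline looks like in isolation, here’s a toy stand-in (the trivial suffix-stripping “stemmer” and three-word stop list are illustrative only, much cruder than the real Porter stemmer and stop list used above):

```javascript
var stopWords = ['the', 'a', 'is'];               // illustrative stop list
function naiveStem(w) { return w.replace(/ing$/, ''); } // toy stemmer, not Porter

// Same shape as BM25.Tokenize: lower-case, strip non-word characters,
// split, stem, then drop stop words.
function tokenize(text) {
    return text.toLowerCase()
        .replace(/\W/g, ' ')
        .replace(/\s+/g, ' ')
        .trim()
        .split(' ')
        .map(naiveStem)
        .filter(function(t) { return stopWords.indexOf(t) === -1; });
}

console.log(tokenize('The dog is walking')); // [ 'dog', 'walk' ]
```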

BM25.prototype.addDocument = function(doc) {
    if (typeof doc.id === 'undefined') { throw new Error(1000, 'ID is a required property of documents.'); };
    if (typeof doc.body === 'undefined') { throw new Error(1001, 'Body is a required property of documents.'); };

    // Raw tokenized list of words
    var tokens = BM25.Tokenize(doc.body);

    // Will hold unique terms and their counts and frequencies
    var _terms = {};

    // docObj will eventually be added to the documents database
    var docObj = {id: doc.id, tokens: tokens, body: doc.body};

    // Count number of terms
    docObj.termCount = tokens.length;

    // Increment totalDocuments
    this.totalDocuments++;

    // Readjust averageDocumentLength
    this.totalDocumentTermLength += docObj.termCount;
    this.averageDocumentLength = this.totalDocumentTermLength / this.totalDocuments;

    // Calculate term frequency
    // First get terms count
    for (var i = 0, len = tokens.length; i < len; i++) {
        var term = tokens[i];
        if (!_terms[term]) {
            _terms[term] = {
                count: 0,
                freq: 0
            };
        }
        _terms[term].count++;
    }

    // Then re-loop to calculate term frequency.
    // We'll also update inverse document frequencies here.
    var keys = Object.keys(_terms);
    for (var i = 0, len = keys.length; i < len; i++) {
        var term = keys[i];
        // Term Frequency for this document.
        _terms[term].freq = _terms[term].count / docObj.termCount;

        // Inverse Document Frequency initialization
        if (!this.terms[term]) {
            this.terms[term] = {
                n: 0, // Number of docs this term appears in, uniquely
                idf: 0
            };
        }
        this.terms[term].n++;
    }

    // Calculate inverse document frequencies
    // This is SLOWish so if you want to index a big batch of documents,
    // comment this out and run it once at the end of your addDocuments run
    // If you're only indexing a document or two at a time you can leave this in.
    // this.updateIdf();

    // Add docObj to docs db
    docObj.terms = _terms;
    this.documents[doc.id] = docObj;
};

This addDocument() method is where most of the magic happens. We’re essentially building and maintaining two similar data structures: this.documents, and this.terms.

this.documents is our database of individual documents, but along with storing the full, original text of the document, we also store the document length and a list of all the tokens in the document along with their count and frequency. Using this data structure we can easily and quickly (with a super-fast, O(1) hash table lookup) answer the question “in document #3, how many times did the word ‘walk’ occur?”

We also build a second data structure called this.terms. This represents all terms in the entire corpus. Through this O(1) data structure we can quickly answer the questions “how many documents does ‘walk’ appear in? And what’s its idf score?”.

Finally, we record the document length for each individual document, and also maintain an average document length for the whole corpus.

Of course, you see above that idf is initialized to zero, and I’ve even commented out the updateIdf() call above. That’s because it’s quite slow, and only needs to be run once at the end of the indexing operation. No need to run this 50,000 times when once will suffice. Leaving this commented out and running it only once at the end of a bulk index operation really speeds things up. Here’s the method:

BM25.prototype.updateIdf = function() {
    var keys = Object.keys(this.terms);
    for (var i = 0, len = keys.length; i < len; i++) {
        var term = keys[i];
        var num = (this.totalDocuments - this.terms[term].n + 0.5);
        var denom = (this.terms[term].n + 0.5);
        this.terms[term].idf = Math.max(Math.log10(num / denom), 0.01);
    }
};

It’s a simple function, but since it loops over the entire corpus terms list, updating each one, it’s a somewhat expensive operation. The implementation is the standard formula for inverse document frequency (which you can easily find on Wikipedia): the log ratio of total documents to the number of documents a term appears in. I’ve also modified it to always be above zero.

BM25.prototype.search = function(query) {

    var queryTerms = BM25.Tokenize(query);
    var results = [];

    // Look at each document in turn. There are better ways to do this with inverted indices.
    var keys = Object.keys(this.documents);
    for (var j = 0, nDocs = keys.length; j < nDocs; j++) {
        var id = keys[j];
        this.documents[id]._score = 0;

        for (var i = 0, len = queryTerms.length; i < len; i++) {
            var queryTerm = queryTerms[i];

            // Skip query terms the corpus has never seen, or that don't
            // appear in this particular document.
            if (typeof this.terms[queryTerm] === 'undefined') { continue; }
            if (typeof this.documents[id].terms[queryTerm] === 'undefined') { continue; }

            // BM25 per-term score:
            // idf * (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * docLength / avgDocLength))
            var idf = this.terms[queryTerm].idf;
            var tf = this.documents[id].terms[queryTerm].count;
            var denom = tf + this.k1 * (1 - this.b
                + (this.b * this.documents[id].termCount / this.averageDocumentLength));
            this.documents[id]._score += idf * tf * (this.k1 + 1) / denom;
        }

        if (!isNaN(this.documents[id]._score) && this.documents[id]._score > 0) {
            results.push(this.documents[id]);
        }
    }

    results.sort(function(a, b) { return b._score - a._score; });
    return results.slice(0, 10);
};

Finally, the search() method loops through all documents and assigns a BM25 relevance score to each, sorting the highest scores at the end. Of course, it’s silly to loop through every document in the corpus when searching, but that’s the subject of Part Two (inverted indices and performance).

The code above is documented inline, but the gist is as follows: for each document and for each query term, calculate the BM25 score. The idf score for each query term is globally pre-calculated and just a simple look-up, the term frequency is document-specific but was also pre-calculated, and the rest of the work is simply multiplication and division! We add a temporary variable called _score to each document, and then sort the results by the score (descending) and return the top 10.

Demo, Source Code, Notes, and Next Steps

There are lots of ways the above can be improved, and we’ll explore those in Part Two of “Full-Text Search”! Stay tuned. I hope to publish it in a few weeks. Some things we’ll accomplish next time are:

  • Inverted index for faster searches
  • Faster indexing
  • Even better search results

For this demo, I built a little Wikipedia crawler that grabs the first paragraph of a decent number (85,000) of Wikipedia articles. Since indexing all 85k documents takes about 90 seconds on my computer I’ve cut the corpus in half for this demo. Don’t want you guys to waste your laptop battery juice on indexing Wikipedia articles for a simple full-text demo.

Because the indexing is a CPU-heavy, blocking operation, I implemented it as a Web Worker. The indexing runs in a background thread — you can find the full source code here. You’ll also find references to the stemmer algorithm and my stop-word list in the source code. The code license, as always, is free to use for educational purposes but not for any commercial purpose.

Finally, here’s the demo. Once the indexing is complete, try searching for random things and phrases that Wikipedia might know about. Note that only 40,000 paragraphs are indexed here, so you might have to try a few topics before you find one that the system knows about.



Joe’s Own Editor 4.0 Released

  • JOE now has pop-up shell windows with full terminal emulation and shell commands
    that can control the editor. Hit F1 – F4 to bring up a shell window.
    See Pop-up shell feature for a full description.

  • The status command (^K SPACE) can now be customized using the same syntax
    as the status bar. Look for smsg and zmsg in joerc to see how to do this.

  • parserr (the error parser) will parse only the highlighted block if it’s set. Before it always parsed the entire buffer.

  • Now there is a per-buffer concept of current directory. This was added to
    make the pop-up shell windows work better, but it’s useful in general.

  • At file prompts you can begin a new anchored path without having to delete
    the old one. It means that ~jhallen/foo//etc/passwd is translated to /etc/passwd.
    Prompt windows are now highlighted to indicate which parts of the path are
    being dropped. There is a syntax file for this: filename.jsf

  • The error parser now ignores ANSI sequences (some versions of grep
    color their results; JOE can still parse them).

  • Temporary messages are now dismissed by keyboard input only. Before, they
    could also be dismissed by shell input.

  • Tags search now supports multiple matches. ^K ; can be configured to
    either provide a menu of the matches or to cycle through them.

  • Tags search will now match on the member name part of member functions
    (‘fred’ will match ‘myclass::fred’).

  • Tags search will prepend the path to the tags file to the file names
    found in the tags file. This is important when JOE finds the tags file
    via the TAGS environment variable.

  • Remove ` as quote character from incremental search.

  • Clean up documentation, convert much of it to Markdown.

  • Search JOE image for :include files referenced by the joerc file.
    Include ftyperc file in the JOE image.

  • Change default indent from 2 to 4. Add quick menu to change to common
    indent values: ^T = (1, 2, 4, or 8). Switch to + and – for definitively
    setting or clearing options so that 0 and 1 can be used for quick select.

  • Added option to suppress DEADJOE file

  • Jump to matching delimiter (Ctrl-G) has been improved. It can now use the
    syntax files to parse the document in order to identify strings and
    comments which should be skipped during the matching delimiter search.
    (patch by Charles Tabony).

  • When ‘notite’ mode is enabled, JOE now emits linefeeds to preserve the
    screen contents in the terminal emulator’s scrollback buffer. This can be
    suppressed with a new flag: nolinefeeds.

  • JOE now starts up quiet (prints no extra messages when starting).
    Messages are collected in a startup log (view with ESC x showlog).

  • There is a new flag ‘noexmsg’ which, when set, makes JOE quiet when it shuts
    down (suppresses “File not changed so no update needed” message).

  • Use 80th column if terminal has xn capability (patch by pts and Egmont
    Koblinger)
  • Support italic text (on some terminal emulators) with “l” (patch by
    Egmont Koblinger)

  • Support bracketed paste (patch by Egmont Koblinger)

  • Fix line number in syntax highlighter error output

  • Prevent infinite loops caused by buggy syntax definitions.

  • New and improved syntax definitions for:

    • Ant: contributed by Christian Nicolai
    • Batch files: contributed by John Jordan
    • C#: contributed by John Jordan
    • Debian apt sources.list: contributed by Christian Nicolai
    • Elixir: contributed by Andrew Lisin
    • Erlang: contributed by Christian Nicolai, Jonas Rosling, Andrew Lisin
    • git-commit messages: contributed by Christian Nicolai
    • Go: contributed by Matthias S. Benkmann
    • HAML: contributed by Christian Nicolai
    • INI: contributed by Christian Nicolai
    • iptables: contributed by Christian Nicolai
    • Javascript: contributed by Rebecca Turner, Christian Nicolai
    • json: contributed by Rebecca Turner
    • Markdown: contributed by Christian Nicolai, Jonas Rosling
    • Powershell: contributed by Oskar Liljeblad
    • Prolog: contributed by Christian Nicolai
    • Puppet: contributed by Christian Nicolai, Eric Eisenhart
    • Sieve: contributed by Christian Nicolai
    • YAML: contributed by Christian Nicolai


  • Syntax definition fixes for: C, Python, Java, Lua, sh, Ruby, PHP, TeX,
    CSS, and XML

  • Save/restore utf-8 and crlf modes when changing in/out of hex edit for
    better display

  • Fix autocomplete for paths containing spaces

  • Accept mouse events beyond column 208 (patch by Egmont Koblinger)

  • Adjust guess_indent behavior based on user feedback

  • Fix infinite loop in search and replace

  • Add a new command ‘timer’ which executes a macro every n seconds. I use
    this for periodically injecting a command into a shell window for
    overnight testing of some device.

  • Convert double to long long (if we have it) when printing hexadecimal.

  • Fix bug where undo was acting strangely in shell windows.

  • Fix crash when hitting the "----------.." wordwrap bug.

  • Check for math functions

  • Use joerc if fancyjoerc not there.

  • fix segfault from -orphan

  • fix window size detection bug: can’t take out types.h
    from tty.c

  • update status line immediately on resize.

  • va_copy fix.

  • don’t smartbackspace when smartbacks is off.

  • backspace/DEL means ‘n’ in replace prompt for better emacs

  • Menus are now made up of macros instead of options.

    New commands:

    * menu  Prompt for a menu to display with tab completion.
    * mode  Prompt for an option to change with tab completion.

    Menus are defined in joerc with ‘:defmenu name’ followed
    by a set of menu entries.

    Menu entries are the pair: ‘macro string’. String is a
    format string displayed in the menu. Macro is executed
    when the menu entry is selected.

    Use this to add your own macros to ^T.

  • ^T is now a user definable menu system

  • Treat \ as a quote character for file I/O. Now you can edit
    files like !test with \!test

  • Print NULs in default search string. Handle many \s properly.

  • Allow backslashes in file names

  • Fix %A to print unicode

  • Charles Tabony’s (vectorshifts’s) highlighter stack patch

  • ! is replace-all in the replace prompt

  • Turn off UTF-8 when we enter hex mode

  • Call ttsig on vfile I/O errors.

  • Abort cleanly when malloc returns NULL

  • Add reload command to reload file from disk

  • Modify configure scripts to use docdir for extra documents and
    datadir/joe for syntax and i18n files.

  • Don’t use bold yellow, it’s bad for white screens

  • Fix TeX highlighter: don’t highlight “

  • Make mail.jsf more forgiving for those of us who still use old UNIX mail

  • Fix file rename bugs

  • Improve ubop: can reformat a block of paragraphs again. Reformat
    of adjacent indented paragraphs working again.

  • Improve XML highlighter: allow \r in whitespace

  • Preserve setuid bit

  • Fix bug where backup file did not get modtime of original

  • New diff highlighter

  • Fix paragraph format when overtype is on

  • Fix non-french spacing

  • Fix bug with joe +2 on single line files

  • Add syntax file for .jsf files

  • Add ASCII table to joerc help

  • ^KD renames file

  • Improve HTML highlighter… if you see <? it's probably a script…

  • Check for EINTR from ioctl

  • allowed in xml content

  • Add -flowed option: adds a space after paragraph lines.

  • Fix German and French .po files: they were causing search & replace to break.

  • Look at LC_MESSAGES to get the language to use for editor messages.

  • Added -no_double_quoted and -tex_comment for TeX

  • Added -break_symlinks option and changed -break_links option to not
    break symbolic links.

  • Paragraph format of single line paragraph is indented only if autoindent
    is enabled. (jqh)

  • Guessindent no longer overrides istep if indentation is space.

  • Fix low limit of lmargin

  • Allow inserting file in rectangle mode even if selected rectangle is empty

  • .js is Javascript

  • Fix ^G in perl mode when you hit it on second brace in:


  • Fix LUA highlighter (dirk_s)

  • Improved conf.jsf (dirk_s)

  • Added local option (-nobackup) to suppress backup files (peje)

  • Add Matlab syntax file (neer)

  • Improve mail syntax highlighter (jqh)

  • Fix crash when calling syntax file as subroutine (hal9042)

  • Get “ctags” tag search to work again

  • Fix crash when JOE tries to write to unwritable file

  • Fix crash when entering blank macro ESC x

  • Improve Verilog highlighter

  • Fix crash when typing ESC x !ls

  • Add C++ keywords to highlighter (otterpop81)

  • Added RPM spec file syntax spec.jsf

  • Improve ‘istring’ (.jsf command) (hal9042)

  • Update French .po file (jengelh)

  • Fix infinite search/replace loop bug (yugk)

  • New feature: insert status line format string using ‘txt’ (tykef)

  • Update Russian .po file (yugk)

  • Update Russian manpage (yugk)

  • Update jicerc Russian rc file (yugk)

  • Fix lock prompt message (yugk)

  • Add Ukrainian .po file (yugk)

  • Paragraph reformatter and word wrap now handle ‘*’ and ‘-‘ bullet lists.

  • Better internationalization (i18n):

    JOE now uses gettext(), so that internal messages can be translated to
    the local language. The /etc/joe directory now has a lang subdirectory
    for the .po files.

    Internationalized joerc files are now possible. If LANG is en_GB, JOE
    tries successively to load joerc.en_GB, joerc.en and joerc.

  • Multi-file search and replace:

    There are two new search and replace options:

    ‘a’: the search covers all loaded buffers. So you can say:

        joe *.c
        and then ^KF foo <return>
                 ra <return>
                 bar <return>
        to replace all instances of foo in all .c files.

    ‘e’: the search covers all files in the error list.

        You can use grep-find to create a list of files:
        ESC g
         grep -n foo f*.c
        ^KF foo <return>
        bar <return>
        You can also use 'ls' and 'find' instead of grep to
    create the file list.
  • JOE now restores cursor position in previously visited files.

  • Build and grep window work more like Turbo-C: the messages window is
    forced onto the screen when you hit ^[ = and ^[ -.

  • Syntax highlighter definition files (.jsf files) can now have subroutines.
    This eases highlighter reuse: for example, Mason and PHP can share the
    HTML highlighter.

  • I’ve changed the way JOE handles ‘-’ and redirected input:

    joe < file            A shell process is started which ‘cat’s the
                          file into the first buffer.

    tail -f log | joe     A shell process is started which ‘cat’s the
                          output from ‘tail -f’ (watch a log file) into
                          the first buffer.

    joe -                 JOE does not try to read from stdin, but
                          when the file is saved, it writes to stdout.

    echo "hi" | joe - | mail fred
                          "hi" ends up in the first buffer. When you
                          save, mail is sent.

  • Many bugs have been fixed. I’ve tried to address every issue in the bug
    tracker. Hopefully I didn’t create too many new ones 🙂

  • You can now define which characters can indent paragraphs. Also the
    default list has been reduced so that formatting of TeX/LaTeX files works

  • Highlighting now uses less CPU time and always parses from the beginning
    of the file (the number of sync lines option is deprecated). Here is a
    CPU usage comparison for scrolling forwards and backwards through a 35K
    line C file:

    JOE .58
    JED .57
    NEDIT 3.26
    VIM 7.33
    EMACS 11.98

  • JOE now matches Thomas Dickey’s implementation of my xterm patch (but
    configure xterm with --paste64).

  • File selection menu/completion-list is now above the prompt (which is more
    like bash). Also it is transposed, so that it is sorted by columns
    instead of rows.

  • “Bufed” (prompt for a buffer to edit), works like other file prompt
    commands: it’s a real prompt with history and completion.

  • Automatic horizontal left scroll jumps by 5-10 columns.

  • New syntax files: troff, Haskell, Cadance SKILL, REXX, LUA, RUBY. Many of
    the existing syntax files have been improved.

  • A Perforce SCM “p4 edit” macro has been supplied (along with the hooks
    within JOE which support it) so that when you make the first change to a
    read-only file, JOE runs “p4 edit”. (Look in the joerc file to enable
    the macro.)

  • Hex edit mode has been added. For example: joe -hex /dev/hda,0,1024

  • New ‘-break_links’ option causes JOE to delete before writing files, to
    break hard links. Useful for ‘arch’ SCM.

  • JOE now has GNU-Emacs compatible file locks. A symbolic link called
    .#name is created, “pointing” to “” whenever the buffer
    goes from unmodified to modified. If the lock can’t be created, the user
    is allowed to steal or ignore the lock, or cancel the edit. The lock is
    deleted when the buffer goes from modified to unmodified (or you close
    the file).

  • JOE now periodically checks the file on the disk and gives a warning if
    it changed when you try to modify the buffer. (JOE already performed this
    test on file save).

  • The built-in calculator (ESC m) is now a full featured scientific
    calculator (I’m shooting for Casio Fx-4000 level here :-), including
    hexadecimal and ability to sum (and perform statistics on) a highlighted
    (possibly rectangular) block of numbers. Hit ^K H at the math prompt
    for help.

  • You can now change the current directory in JOE (well, it prompts with
    the latest used directory).

  • Colors can now be specified in the joerc file

  • Macro language now has conditionals and modifiers for dealing with
    repeat arguments. Jmacs now works better due to this.

  • Tab completion works at tags search prompt ^K ;

  • ^G now jumps between word delimiters (begin..end in Verilog, #if #else #endif
    in C, /* .. */ and XML tags). If it doesn’t know the word, it
    starts a search with the word seeding the prompt. It is also much smarter
    about skipping over comments and quoted matter.

  • TAB completion is now much more like bash (again :-). The cursor stays
    at the file name prompt instead of jumping into the menu system. Also
    ^D brings up the menu, as in tcsh. Also, tab completion now works on user
    names for ~ expansion.

  • Now there is a ~/.joe_state file which stores:
    all history buffers
    current keyboard macros
    yank records

  • Joe now has xterm mouse support: when enabled, the mouse can position
    the cursor and select blocks. The mouse wheel will scroll the screen.
    When enabled, shift-click emulates old xterm mouse behavior (cut &
    paste between applications).

  • More syntax files: TeX, CSS, OCaml, Delphi, SML and 4GL. Thanks to
    all of the contributors.

  • Vastly improved highlighting of Perl and Shell due to the highlighter now
    understanding word and balanced delimiters.

  • Many bugs have been fixed (every bug which has been entered into the
    sourceforge project page has been addressed). Hopefully I didn’t add
    too many new ones 🙂

  • Regex and incremental search (jmacs ^S) now work for UTF-8

  • More and improved syntax highlighting files, including Mason

  • Use ^T E to set character set of file (hit at the
    prompt for a list of available character sets).

  • Can install custom “i18n” style byte oriented character set
    definition files.

  • No longer depends on iconv() (easier to compile)

  • Fix bug where right arrow was not doing right thing on last line

  • Fix UTF-8 codes between 0x10000 – 0x1FFFF

  • Now prints for unicode control characters

  • Improved smart home, indent, etc.

  • TAB completion is now more “bash”-like

  • When multiple files are given on command line, they end up in
    same order on the screen in JOE (before they were shuffled).

  • Menu size is now variable (40% of window size or smaller if
    it’s not filled).

  • Added -icase option for case insensitive search by default.

  • Added -wrap option, which makes searches wrap

  • Added status line sequence %x: shows current context (function
    name if you’re editing C).

  • Added tab completion at search prompts and ESC-Enter for tab
    completion within text windows.

  • Warn if file changed on save.

  • Added Ctrl-space block selection method

  • Added Ctrl-arrow key block selection method

  • ^K E asks if you want to load original version of the file

  • jmacs bug fixes: uppercase word, transpose words; ^X ^C is
    more emacs-like; ^X k and ^X ^V are more like emacs.

  • Much improved compile system ^[ c

  • Much improved jpico

  • aspell support.
