A history of the Amiga, part 9: The Video Toaster

Jeremy Reimer

When personal computers first came into the world in the late 1970s, there wasn’t always an obvious use for them. If the market was going to expand beyond hobbyists and early adopter nerds, there needed to be a “killer app”—some piece of software that could justify the purchase of a particular brand of computer.

The first killer app, VisiCalc, came out in 1979. It turned an ordinary Apple II into a financial planning tool that was more powerful and flexible than anything the world had ever seen. A refined version of this spreadsheet, Lotus 1-2-3, became the killer app that put IBM PCs in offices and homes around the world. The Macintosh, which floundered in 1985 after early adopter sales trailed off, found a profitable niche in the new world of desktop publishing with two killer apps: Aldus PageMaker and Adobe PostScript.

To keep up with the Joneses, the Amiga needed a killer app to survive—it found one with the Video Toaster.

The world of video in 1985 was very different from what we know today. Not only was there no YouTube, there was no World Wide Web to view video on. Video content was completely analog and stored on magnetic tape. Computers of the day, like the IBM PC and Macintosh, worked with their own digital displays that didn’t interoperate at all with the world of analog video.

The Amiga, however, was originally designed as a game console, and so it was compatible with standard television frequencies. Where the Amiga designers showed insight and forethought was in creating a bridge between analog and digital. The very first Amiga contained a genlock, which matched video timings with an NTSC or PAL signal and allowed the user to overlay this signal with the Amiga’s internally generated graphics. The first person to realize the potential of this was an engineer living in Topeka, Kansas. His name was Tim Jenison.

Tim and Paul

Tim Jenison was born in 1956, the son of an electrical-mechanical engineer. He once sat on his father’s knee at age five as his dad explained Ohm’s Law. Growing up in rural Iowa, he lived far away from most people. Vacuum tubes and transistors became his best friends.

For his seventh-grade science fair project, Jenison built a rudimentary digital computer that could add and multiply numbers in base 10. He built his first real computer with a Motorola 6800 CPU hooked up to a Teletype because he couldn’t afford kits like the Altair that were popular at the time. “It was so exciting to say, ‘Wow, I have a computer!’” Jenison recalled in an interview with Wired. “But then you had to figure out what to do with it! That was the hard part.”

As a kid, Jenison dabbled in making 8mm home movies. It was a frustrating experience being an aspiring filmmaker at the time. To make any kind of edit required literally cutting and pasting film together. After dropping out of college and teaching himself engineering and programming, he started a small business selling software for the Tandy Color Computer. The very fact that the word “color” was in the name of the computer showed how primitive the technology was. Yet back then, Tim was already dreaming about doing video on a computer.

Around the same time in California, a man named Paul Montgomery went into a RadioShack to look for a device to spruce up his homemade videos. The sales manager showed him a special effects generator that cost about $450. The conversation went like this:

“This looks great! Can I fade from one image to another?” Montgomery asked.

“No, no way,” the RadioShack associate replied.

“Can it do fades at all?”

“Yeah, you can fade to black.”

“Can it do anything else?”

“Yeah, fade to red or green.”

“What about squeezing the image and flipping it?”

“No, no way. That takes a $100,000 piece of equipment. You’re never gonna find that here.”

Montgomery left the RadioShack empty-handed and disappointed.

The Amiga arrives

When Jenison read about the capabilities of the Amiga in the August 1985 issue of Byte, he went straight down to the nearest Commodore dealer and bought the first Amiga 1000 that came in. He immediately created a product called DigiView, a simple video capture device. It would take snapshots of a single frame of video and save it to a floppy disk in the Amiga’s 4096-color HAM mode.

Jenison had saved three demo pictures on a single floppy when he ran into Jeff Bruette, a Commodore employee. Jeff asked if he could make a copy and take it back to Commodore with him. Tim agreed but asked that Bruette delete the disk’s READ.ME file, since it contained his home phone number. Within 24 hours, Tim’s phone started ringing. “This thing had spread all across the country,” he said.

Paul Montgomery was one of the first people to call. His friend Brad Carvey (an engineer and the brother of comedian Dana Carvey) had come over to his house and showed him the images. There was silence in the room as they stared at the pictures; it was like a religious experience. Computers weren’t supposed to be able to do things like that.

Jenison knew he had a winner on his hands with DigiView, so he sold his interest in the Tandy CoCo software company and started a new company to make video products for the Amiga. This was the beginning of NewTek. DigiView eventually sold more than 100,000 units, and it spawned DigiPaint, a paint program that worked with the Amiga’s 4096-color mode. Originally, this HAM mode was supposed to only work with static images because of the sequential algorithms used to store the data. DigiPaint simply worked around that problem to achieve what had formerly been impossible.

At the same time, Montgomery moved on to work at Electronic Arts but resigned when the company failed to live up to its founders’ goals of pushing computing forward with the Amiga. He ended up moving to Topeka and joining NewTek right at the time when the company was looking to expand with a new product.

Montgomery asked Jenison if the Amiga would be able to serve as the centerpiece for a video effects generator. Jenison liked the idea, but Montgomery kept pushing: “What about squeezing the image and flipping it?” he asked.

“No, that would take a $100,000 piece of equipment,” Jenison replied.

“OK, yeah, I knew that,” Montgomery said. “But it would be pretty cool if you could do it.”

In the story of the Amiga, there were many points in which an engineer was challenged to do something impossible. In this instance, Jenison went off and thought more about the problem. Eventually, he figured out a way to do the squeezing and flipping effect—and that was the beginning of the Toaster prototype.

The Toaster takes shape

Montgomery suggested that Jenison meet his friend Brad Carvey, who had been working on projects involving robotic vision. The three of them got together in a pizza restaurant in Topeka and started drawing block diagrams on the placemats.

Brad built the first wire wrap prototype of the board, and Jenison and software engineer Steve Kell helped get it working. In a few days, it was doing the flipping effect, and they were on their way.

The prototype was unveiled at Comdex in November 1987, causing quite a stir. By itself, the Toaster was already an impressive video effects board at an unbeatable price. But Jenison and the NewTek engineers wanted it to be much more. Their dream was for anyone to be able to afford video effects that looked as good as what professional TV studios produced. Creating a single, affordable, add-on card to replace network studio equipment seemed impossible.

In World War II, the slogan of the Army Corps of Engineers was, “The difficult we do immediately. The impossible takes a little longer.” And despite the Amiga’s propensity for handling video, some things couldn’t be done without building new custom chips. To get the performance they needed on the software side, much of the 350,000-line codebase was written in 68000 assembly language. Finishing the Toaster took 15 engineers, three years, and 5,325 hand-made cinnamon cat candies, but the end result was astonishing.

The Toaster

The Video Toaster was released in December 1990 for an entry-level price of $2,399. It consisted of a large expansion card that plugged into an Amiga 2000 and a set of programs on eight floppy disks. The complete package, including the Amiga, could be purchased for less than $5,000.

For that money, an aspiring video editor received a four-input switcher, two 24-bit frame buffers, a chrominance keyer (for doing green or blue screen overlays), and an improved genlock. The software allowed video inputs to switch back and forth using a dazzling array of custom wipes and fades, including the squishing and flipping effect that Montgomery had originally wanted.

Bundled with the system was Toaster CG (a character generator to make titles), Toaster Paint (an updated DigiPaint for making static graphic overlays), Chroma F/X (for modifying the color balance of images), and the real kicker: Lightwave 3D, a full-featured 3D modeling and animation package written by Allen Hastings and Stuart Ferguson.

At the time, 3D modeling and animation was the sort of thing people did on $20,000 SGI workstations, using software that cost nearly as much as the hardware it ran on. Bundling Lightwave with the Toaster was like including a free 3D printer with a new computer. It meant that Toaster users could create any digital effect that they could imagine.

1980s MTV viewers can surely recognize some of their favorite video transition effects.

Suddenly, star wipes

The launch of the Toaster changed the entire equation of producing video content. In the United States, the Federal Communications Commission had long established rules defining a minimum level of video quality, known as “broadcast safe,” that was required to air programming on television. Consumer-level video cameras didn’t reach this level and couldn’t be used to make content for TV; there were only a few exceptions for news programs showing short video clips taken by amateurs or in other countries. The equipment required to produce broadcast-safe video was expensive, running from hundreds of thousands to millions of dollars. This meant that unless you were an employee of a major television network, you couldn’t make your own programs and show them to anyone but your friends and family.

The Toaster changed this. For less than five thousand dollars, anyone could create programs that looked as good as the networks’. Among the earliest and most enthusiastic Toaster adopters were rock bands that needed to make exciting videos for MTV on a budget. Rocker Todd Rundgren got especially motivated and connected 10 Toasters together to render his revolutionary music video for the song “Change Myself.” Effects that we consider “cheesy” today, like star wipes, only became that way because the Toaster made them commonplace. Just as the Macintosh led to a brief period of font abuse in the 1980s, the Toaster made possible a time of wild transitions and fades in the 1990s. The concept of “Wayne’s World” was very much a Toaster-based phenomenon.



Designing simpler React components

A goal to strive for when using any framework or language is simplicity. Over time, it is the simpler application that is more maintainable, readable, testable, and performant. React is no exception, and we found that one of the best ways to manifest simplicity is by striving for functional purity in components, and by developing patterns that achieve this purity by default. Purity leads to more isolated and inherently simpler components, thereby bringing about a less braided and simpler system.

This is something we’ve thought a lot about at Asana — before we started using React, we had been building our in-house functionally reactive framework, Luna, since 2008. In iterating on this framework and building our web application, we’ve learned what worked and what caused long-term problems. Through that, we’ve developed a series of overarching design principles that can be applied everywhere, but particularly in React.

Immutable data representation

When your data representation is mutable, you’ll find it very difficult to maintain simple components. Individual components become more complex when they have to detect and handle the transition states as data changes, rather than leaving this to a higher-level component dedicated to re-fetching the data. Additionally, in React, immutable structures often lead to better performance: when data changes in a mutable representation, you’ll likely need to rely on React’s virtual DOM to determine whether components should update; alternatively, in an immutable representation, you can use a basic strict equality check to determine whether an update should occur.

Any time we have deviated from this and used a mutable object in props, it has resulted in regret and refactoring.

There are more general benefits to using immutable data structures beyond React as well.
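To make the strict-equality point concrete, here is a minimal TypeScript sketch; the Task type and helper are hypothetical, not Asana’s actual data model:

// A hypothetical immutable record type.
interface Task {
  readonly id: number;
  readonly name: string;
  readonly hearted: boolean;
}

// Updating returns a fresh object; the original is never mutated.
function setHearted(task: Task, hearted: boolean): Task {
  return { ...task, hearted };
}

const before: Task = { id: 1, name: "Write docs", hearted: false };
const after = setHearted(before, true);

// With immutable data, reference equality is a valid (and cheap) change check:
console.log(before === after); // false -> the component should update

Because an untouched object keeps its old reference, a strict equality check distinguishes changed from unchanged data without any deep comparison.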

Make liberal use of pure functions

A pure function is a function whose return value is solely determined by its input values, without dependence on global state or causing any side effects. In components, we often have complicated behavior that aids but is not directly tied to our rendering. Use pure helper functions to move this logic outside of the component, so that the component has fewer responsibilities and lower complexity. Additionally, this logic can be tested in an isolated way, and is re-usable in other components. If you notice common sets of these helper functions, then denote them as such by organizing them into sets of modules.

We’ve encountered two main classes of these, which occur in almost all of our components (a sketch of the first follows the list):

  1. Data model helpers to derive a result from one or more objects (for example: to determine whether a user is currently on vacation)
  2. Mutation helpers to perform client- and server-side mutations in response to user actions (for example: to heart a task).
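As a sketch of a data model helper (the User shape and the vacation rule here are invented for illustration, not Asana’s actual code):

// Hypothetical pure helper: the result depends only on the arguments.
interface User {
  readonly name: string;
  readonly vacationStartMs: number | null;
  readonly vacationEndMs: number | null;
}

export function isOnVacation(user: User, nowMs: number): boolean {
  // nowMs is passed in rather than read from Date.now(), so the
  // function stays free of hidden global state and is trivial to test.
  return (
    user.vacationStartMs !== null &&
    user.vacationEndMs !== null &&
    user.vacationStartMs <= nowMs &&
    nowMs <= user.vacationEndMs
  );
}

Because it touches no globals, a helper like this can be unit-tested and reused without mounting any component.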

Use pure components, avoiding impure pitfalls

A pure component is a React component whose render function is pure (solely determined by props and state). By default, React re-renders the entire component tree on every update, even components whose props and state did not change. This is incredibly inefficient, and React suggests overriding shouldComponentUpdate to take advantage of pure render functions in the component’s lifecycle. This offers an enormous performance boost and increased simplicity, so you should consider doing this early on.
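The snippets below extend a PureComponent base class; a rough sketch of what such a base class might look like, assuming immutable props and state (shallowEqual is a hypothetical helper, not React internals):

import * as React from "react";

// Strict equality over each own key: sufficient when data is immutable.
function shallowEqual(a: any, b: any): boolean {
  const aKeys = Object.keys(a);
  const bKeys = Object.keys(b);
  return aKeys.length === bKeys.length && aKeys.every(key => a[key] === b[key]);
}

// Subclasses opt into pure rendering automatically.
class PureComponent<P = {}, S = {}> extends React.Component<P, S> {
  shouldComponentUpdate(nextProps: Readonly<P>, nextState: Readonly<S>): boolean {
    return (
      !shallowEqual(this.props, nextProps) ||
      !shallowEqual(this.state || {}, nextState || {})
    );
  }
}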

When using pure components (overriding shouldComponentUpdate), there is no verification that you actually implement your components to be pure. So, it’s possible to accidentally write a component that is not pure, which will cause reactivity problems and show stale data to the user. We’ll discuss two of these “impure pitfalls.”

Globals

Using globals in a component means that the component is no longer pure, as it depends on data outside of props and state. If you rely on a global for rendering or in any of the component’s lifecycle methods, then you won’t achieve correctness and reactivity. We’ve found it immensely helpful to avoid using globals like the Document or Window, and instead pass these as props to the components which use them. We do this by creating a Services object, and by having each component declare in an interface which services it relies on. Through this, components can maintain purity and be independent of the global namespace.
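A sketch of that pattern; the Services interface and the component are invented for illustration:

import * as React from "react";

// Hypothetical Services bag: globals are injected rather than referenced.
interface Services {
  readonly window: Window;
  readonly document: Document;
}

interface ViewportLabelProps {
  services: Services;
}

class ViewportLabel extends React.Component<ViewportLabelProps> {
  render() {
    // The component reaches window only through props, so a test can
    // inject a fake Window and rendering stays a function of its inputs.
    const width = this.props.services.window.innerWidth;
    return React.createElement("span", null, `viewport: ${width}px`);
  }
}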

Render Callbacks

A now-antipattern that used to be quite prevalent for us is a render callback: a function passed as a prop to a component, which allows that component to render something. A common use-case of a render callback was to allow a child to render something using data it did not receive in props. For example, if we wanted to have a generalized component that could render many types of child components, we would pass the component a callback to render the child.

Unfortunately, render callbacks are inherently impure because they can use whatever variables the function has closed over. So, under our pure component assumption, if any of that outside environment changes, the component will not re-render.

Let’s see this in a code snippet.

// Render callback anti-pattern
interface ParentProps {
  someObject: SomeObject;
}

class ParentComponent extends PureComponent<ParentProps> {
  render() {
    return React.createElement(ChildComponent, {
      renderSomething: this._renderSomethingForIdx
    });
  }

  // Bound as an arrow function so this.props resolves when the child calls it.
  private _renderSomethingForIdx = (idx: number) => {
    return React.createElement(SomeOtherComponent, {
      object: this.props.someObject,
      idx: idx
    });
  };
}

interface ChildProps {
  renderSomething: (idx: number) => React.ReactElement;
}

class ChildComponent extends PureComponent<ChildProps> {
  render() {
    // ... some other behavior ...
    return this.props.renderSomething(123);
  }
}

In this snippet, ParentComponent passes a render callback to ChildComponent, and that render callback uses someObject from props. Since ChildComponent uses this function for its rendering behavior, it will not re-render when someObject changes.

Luckily, you can avoid using a render callback in one of three ways, depending on your constraints, and each allows us to keep our pure component assumption.

Alternative 1
Pass all information needed for rendering to the child component, and have that child render the component directly.

interface ParentProps {
  someObject: SomeObject;
}

class ParentComponent extends PureComponent<ParentProps> {
  render() {
    return React.createElement(ChildComponent, {
      someObject: this.props.someObject,
      idx: 123 // Suppose the parent had access to the idx
    });
  }
}

interface ChildProps {
  idx: number;
  someObject: SomeObject;
}

class ChildComponent extends PureComponent<ChildProps> {
  render() {
    // ... some other behavior ...
    return React.createElement(SomeOtherComponent, {
      object: this.props.someObject,
      idx: this.props.idx
    });
  }
}

We achieve the same rendered output by having ChildComponent render SomeOtherComponent itself. This works well if the additional props do not cause excess re-rendering, and do not break any contextual abstraction boundary in the component.

Alternative 2
Render the component in its entirety and pass that to the child component

interface ParentProps {
  someObject: SomeObject;
}

class ParentComponent extends PureComponent<ParentProps> {
  render() {
    return React.createElement(ChildComponent, {
      somethingElement: this._renderSomethingElement()
    });
  }

  private _renderSomethingElement() {
    return React.createElement(SomeOtherComponent, {
      object: this.props.someObject,
      idx: 123 // Suppose this had access to the idx
    });
  }
}

interface ChildProps {
  somethingElement: React.ReactElement;
}

class ChildComponent extends PureComponent<ChildProps> {
  render() {
    // ... some other behavior ...
    return this.props.somethingElement;
  }
}

In cases where ParentComponent has all of the information needed to render SomeOtherComponent, we can just pass the rendered element down as a prop to ChildComponent.

Alternative 3
Render the component partially, pass the ReactElement to the child component, and use React’s cloneElement to inject the remaining props.

interface ParentProps {
  someObject: SomeObject;
}

class ParentComponent extends PureComponent<ParentProps> {
  render() {
    return React.createElement(ChildComponent, {
      somethingElement: this._renderSomethingElement()
    });
  }

  private _renderSomethingElement() {
    return React.createElement(SomeOtherComponent, {
      object: this.props.someObject,
      idx: null // injected by ChildComponent
    });
  }
}

interface ChildProps {
  somethingElement: React.ReactElement;
}

class ChildComponent extends PureComponent<ChildProps> {
  render() {
    // ... some other behavior ...
    // Clone the passed-in element, and add in the remaining prop.
    return React.cloneElement(this.props.somethingElement, {
      idx: 123
    });
  }
}

This alternative is great for cases where neither ParentComponent nor ChildComponent has the full information needed to render SomeOtherComponent, so they share the responsibility. While this may seem more complicated than the above two alternatives, it has a lot of desirable properties. In the next section, we’ll dig into a real world example to make it more concrete.

Divide components and use the injector pattern to maintain separation of concerns

Composition is an immensely useful pattern in React for achieving separation of concerns. Many great philosophies around this have developed, such as dividing components between presentational and container components. However, for some high-level components, such as a general component for drag-and-drop, composition necessitated either the use of a render callback or added complexity. In such cases, we found the injector pattern from Alternative 3 above helpful.



We haven’t forgotten how to code – JS just needs to become a better language

I don’t usually write opinionated blog articles. So if you don’t agree with me, please remember that these are my personal opinions, and I would love to hear yours.

  • A quick summary of the npm chaos
  • The aftermath of opinions and the insults to us JavaScript developers
  • The problem is not with the developers, it’s the language JavaScript itself
  • JavaScript is a necessary evil that requires time to improve
  • Comment

A quick summary of the npm chaos

There has been a lot of drama in the npm community over the last couple of days, triggered by Azer Koçulu, who had a dispute with the company Kik Interactive over who should have access to the npm package named “kik”: the developer who first published the package with his open source project, or the worldwide company with a 240-million-user application called “Kik Messenger”.

Without any lawyers being involved, it ended with npm seizing control of the package name and transferring “kik” over to Kik Interactive. In retaliation against the decision, Azer unpublished all of his 273 modules from npm. Among them was a package called left-pad, a package with millions of downloads per month that thousands of projects rely on; its removal broke every single one of them and created chaos. In order to try and save the situation, npm took control of the left-pad package and un-unpublished it.

The aftermath of opinions and the insults to us JavaScript developers

The left-pad package is actually what is referred to as a “micro-package”, since it is only 11 lines of source code offering a single piece of functionality: it pads the left side of a String with a given character until a given length is reached.
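For reference, this is roughly what a micro-package of that shape looks like; a sketch in the same spirit, not the exact published source:

function leftPad(value: string | number, length: number, ch: string = " "): string {
  let str = String(value);
  // Prepend the pad character until the target length is reached.
  while (str.length < length) {
    str = ch + str;
  }
  return str;
}

console.log(leftPad(42, 5, "0")); // "00042"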

However, after this incident there has been a lot of drama, blogs, opinions and statements thrown around, such as claims that we JavaScript developers have forgotten how to code because micro-packages such as left-pad exist.

The truth is: no, we haven’t forgotten how to code. On the contrary, actually, I would say that we JavaScript developers are probably more experienced with crazy, unexpected, inconsistent results, which we constantly have to overcome, since our single written source code is often executed on different environments and browsers.

Our code is not compiled specifically for a deterministic platform, nor is it always executed on deterministic run-time systems as other languages are. I see us JavaScript developers as hurdle-runners who have to jump over a new obstacle every 30 seconds.

The problem is not with the developers, it’s the language JavaScript itself

I will be very straightforward and honest here.

Micro-packages such as left-pad (but you can also include famous libraries such as Underscore, Lodash and Ramda, and even frameworks such as jQuery, and other types of polyfills) are necessary evils in a JavaScript developer’s work. We don’t like to use them, but we have to. And it’s silly that we web developers should spend time recreating basic wheels instead of focusing on creating high-level solutions.

The only reason the package left-pad exists is that the JavaScript language itself is fundamentally flawed, and its core APIs are a mess compared to other programming languages.

So without these micro-packages and other polyfills in our toolbox, our work becomes really painful.

Another example: if you are one of those lucky web developers that still needs to support Internet Explorer 9, you are confined within the rules of ECMAScript 5. This means that if you want to figure out whether one String contains another, you need to write:

if( haystack.indexOf(needle) !== -1 ) {  
    ...
}

Compared to:

if( haystack.includes(needle) ) {  
    ...
}

A core API such as String.includes (or String.contains) has existed in almost all major programming languages since day one, but not in JavaScript. That specific API was added literally 18 years after the language was first released, as part of the ECMAScript 6 specification. And this is just one of many examples of basic core APIs that don’t exist in JavaScript compared to other languages.
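This gap is exactly what polyfills paper over. A minimal sketch of a String.prototype.includes polyfill for ES5 environments; real polyfills do more argument checking than this:

// Only define includes where the runtime doesn't already provide it.
if (!String.prototype.includes) {
  String.prototype.includes = function (search: string, start?: number): boolean {
    return this.indexOf(search, start || 0) !== -1;
  };
}

console.log("haystack".includes("stack")); // true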

JavaScript is a necessary evil that requires time to improve

I have never met anyone who has ever said they actually like the JavaScript language and its core APIs. They might like functional programming, the idea of functions as first-class citizens, and its loose typing, but never the language itself. JavaScript is actually a language that we are forced to deal with, whether we like it or not.

However, the process of improving and progressing ECMAScript is not easy. It is almost like, I assume, building the International Space Station: a project where multiple countries need to sign off, and where each country has its own priorities and budgets within the greater goal. A process like this takes time, and the chain is only as strong as its weakest link.

So no, we haven’t forgotten how to code. The JavaScript language just needs to become a better language.

Feel free to share your comments on the Reddit post.



Vulnerability #319816 – npm fails to restrict the actions of malicious packages

Original Release date: 25 Mar 2016 | Last revised: 26 Mar 2016


Overview

npm allows packages to take actions that could let a malicious npm package author create a worm that spreads across the majority of the npm ecosystem.

Description

npm is the default package manager for Node.js, which is a runtime environment for developing server-side web applications. There are several factors in the npm system that could allow for a worm to compromise the majority of the npm ecosystem:

  1. npm encourages the use of semver, or semantic versioning. With semver, dependencies are not locked to a certain version by default. For any dependency of a package, the dependency author can push a new version of the package.
  2. npm utilizes persistent authentication to the npm server. Once a user is logged in to npm, they are not logged out until they manually do so. While logged in, any npm install the user runs may allow the installed modules to execute arbitrary publish commands with the user's credentials.
  3. npm utilizes a centralized registry, which is utilized by the majority of the Node.js ecosystem. Typing npm publish ships your code to this registry server, where it can be installed by anyone.

When these three aspects of npm are combined, it provides the capability for a self-replicating worm. The following steps are an example worm workflow outlined in the report provided by Sam Saccone (a sketch of the package.json mechanics involved follows the list):

  1. Socially engineer an npm module owner into running npm install on an infected module on their system.
  2. The worm creates a new npm module.
  3. The worm sets a lifecycle hook on the new npm module to execute the worm on any install.
  4. The worm publishes the new module to the user's npm account.
  5. The worm walks all of the user's owned npm modules (with publish permissions) and adds the new module as a dependency in each one's package.json.
  6. The worm publishes new versions to each of the owned modules with a “bugfix” level semver bump. This ensures that the majority of dependent modules using the ^ or ~ signifier will include the self-replicating module during the next install.
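To make steps 3 and 6 concrete, here is a hypothetical sketch of the manifest of such a worm package; the names are invented for illustration and this is not code from the report:

    {
      "name": "worm-module",
      "version": "1.0.0",
      "description": "hypothetical malicious package, for illustration only",
      "scripts": {
        "install": "node payload.js"
      }
    }

A hijacked module would then gain "worm-module": "^1.0.0" in its dependencies and receive a patch-level version bump; because ^1.0.0 matches any future 1.x release, the next npm install of any dependent pulls in the worm and runs its install script.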

The full report from Sam Saccone is available here in PDF form: npmwormdisclosure.pdf

The timeline provided in the above document is as follows:

    Jan 1 2016: Initial discovery of exploit
    Jan 4 2016: Initial disclosure + proof of concept to npm
    Jan 5 2016: Private disclosure to Facebook
    Jan 7 2016: Response from npm
    Jan 8 2016: Confirmation from npm that this works as intended; no intention to fix at the moment
    Feb 5 2016: Shared the disclosure doc

Impact

An attacker may be able to create a self-replicating worm that spreads as users install packages.

Solution

The CERT/CC is currently unaware of a practical solution to this problem. Please consider the following workarounds:

  • As a user who owns modules, you should not stay logged in to npm (easily enough: npm logout and npm login).
  • Use npm shrinkwrap to lock down your dependencies.
  • Use npm install someModule --ignore-scripts.
Vendor Information

    Vendor  Status    Date Notified  Date Updated
    npm     Affected  11 Feb 2016    25 Mar 2016

CVSS Metrics

    Group          Score  Vector
    Base           6.0    AV:N/AC:M/Au:S/C:P/I:P/A:P
    Temporal       5.1    E:POC/RL:W/RC:C
    Environmental  3.8    CDP:ND/TD:M/CR:ND/IR:ND/AR:ND

Credit

Thanks to David Ross and Sam Saccone for reporting this vulnerability.

This document was written by Will Dormann.

Other Information

  • CVE IDs: Unknown
  • Date Public: 25 Mar 2016
  • Date First Published: 25 Mar 2016
  • Date Last Updated: 26 Mar 2016
  • Document Revision: 38



    Red Programming Language: 0.6.0: Red GUI System

    Five years ago, when I started writing the first lines of code of what would later become the Red/System compiler, I already had a pretty good picture of what I wanted to achieve with Red and all the ideal features that should be included; I was just not sure how much time and effort it would require to have them. Two and a half years ago, baby Red printed its first output. And today, we celebrate a major step forward with the addition of a brand new GUI system entirely written in Red itself! What a journey!

    Here it is, the long awaited 0.6.0 release with its massive 1540 commits! The major new features are:

    • View engine, with Windows backend (from XP to 10)
    • VID dialect
    • Draw dialect
    • Reactive GUI programming
    • GUI console
    • Simple I/O support for files and HTTP(S) queries.
    • Codec system with the following codecs available: BMP, GIF, JPEG, PNG
    • Objects ownership system

    All those additions made our Red executable grow from 767 KB to 885 KB (Windows platform), sorry for the extra 120 KB, we will try to [red]uce that in the future. 😉

    Let’s start with the elephant in the room first, the GUI system. Here is an architecture overview:

    Only the Windows backend is fully usable for now; Android and OS X are works in progress, Linux (using GTK) will follow soon, and iOS will come later this year. We also have other targets in mind, like JS/HTML, which are not scheduled yet but could come this year too.

    Red/View

    First let me mention that View, VID and Draw were invented by Carl Sassenrath (of Amiga fame) in the Rebol language, a long time ago. Red’s version retains all the best features and pushes the boundaries of simplicity even further. The main features of our View engine are:

    • A live updating mode that reduces the needed code to a single view function call in most cases.
    • Full abstraction over rendering backends.
    • Two-way binding using live objects.
    • Event bubbling/capturing stages.
    • Built-in drag’n drop support for most face types.
    • Gestures support (experimental).
    • Native widgets support.
    • Full integration with the OS features.
    • Flexible backend support that can be mapped to virtually any kind of UI library.

    The current list of supported widgets is: base, text, button, check, radio, field, area, text-list, drop-list, drop-down, progress, slider, camera, panel, tab-panel, group-box.

    Next releases will bring more widgets, like: table, tree, divider, date/time picker, web-view and many others!

    For more info about View, see the View reference document.

    Main differences between Red/View and Rebol/View are:

    • Red relies on native widgets, Rebol has custom ones only, built over a 2D vector library.
    • Red faces are synchronized with their widgets on display in realtime by default; Rebol faces require manual calls to many functions to keep faces and widgets updated.
    • Red introduces reactive GUI programming.

    Red/View will update both face and graphic objects in realtime as their properties are changed. This is the default behavior, but it can be switched off, when full control over screen updates is desirable. This is achieved by:
        system/view/auto-sync?: off
    When automatic syncing is turned off, you need to use the show function on faces to get the graphic objects updated on screen.

    VID dialect

    VID stands for Visual Interface Dialect. It is a dialect of Red which drastically simplifies GUI construction. VID code is dynamically compiled to a tree of faces, feeding the View engine. You can then manipulate the face objects at runtime to achieve dynamic behaviors. VID offers:

    • Declarative programming.
    • Automatic layout system.
    • Cascading styles.
    • Default values for…everything.

    For more info about VID, see the specification.

    In case you are reading about Red or Rebol for the first time, here are a few code demos to show how simple, yet efficient, our approach to GUI programming is:

        ;-- GUI Hello word
        view [text "Hello World"]
        
        ;-- Say Hi to the name you type in the field
        view [name: field button "Hi" [print ["Hi" name/text]]]
        
        ;-- Demo simple reactive relations, drag the logo around to see the effect
        view [
         size 300x300
         img: image loose http://static.red-lang.org/red-logo.png
         return
         shade: slider 0%
         return
         text bold font-size 14 center "000x000" 
             react [face/text: form img/offset]
             react [face/font/color: white * shade/data]
        ]
        
        ;-- Simple form editing/validating/saving with styles definitions
        digit: charset "0123456789"
        view [
         style label: text bold right 40
         style err-msg: text font-color red hidden
        
         group-box "Person" 2 [
             origin 20x20
             label "Name" name: field 150 return
             label "Age"  age:  field 40  return
             label "City" city: field 150 return
             err-msg "Age needs to be a number!" react [
                    face/visible?: not parse age/text [any digit]
             ]
         ]
         button "Save" [save %person.txt reduce [name/text age/text city/text]]
        ]
        set [name age city] load %person.txt
        ?? name ?? age ?? city
    

    You can run all those examples by copy/pasting them one-by-one into the Red console for Windows. To get the console, just download it and double-click the Red binary, wait ~20 seconds for the console to be compiled for your OS (yes, that little file contains the full Red toolchain, runtime library and console source code), paste the code and have fun. 😉

    Draw dialect

    Draw is a 2D vector-drawing dialect which can be used directly, to render on an image, in faces for local rendering, or specified through VID. It is still a work in progress as not all features are there yet. We aim at full Rebol/Draw coverage and full SVG compatibility in the not-too-distant future.

    A simple example of Draw usage:

        shield: [
            fill-pen red   circle 50x50 50
            pen gray
            fill-pen white circle 50x50 40
            fill-pen red   circle 50x50 30
            fill-pen blue  circle 50x50 20
            
            pen blue fill-pen white
            polygon 32x44 45x44 50x30 55x44 68x44 57x53 60x66 50x58 39x66 43x53
        ]
        
        ;-- Draw in a draggable face, in realtime.
        view [
            size 300x300
            img: image 100x100 draw shield loose
            at 200x200 base white bold react [
                [img/offset]
                over?: overlap? face img
                face/color: get pick [red white] over?
                face/text: pick ["Hit!" ""] over?
            ]
            button "Hulk-ize!" [replace/all shield 'red 'green]
            button "Restore"   [replace/all shield 'green 'red]
        ]
    

    Copy/paste the above code example in a Red console on Windows, and become an Avenger too! 😉

    For more info about Draw, see the specification.

    Main differences between Red/Draw and Rebol/Draw:

    • Red does not cover all the commands of Rebol/Draw yet.
    • Red’s version allows commands to be grouped in blocks, easing insertion/removal at run-time.
    • Red’s version allows commands to be prefixed with a set-word, allowing a local position in a Draw block to be saved in a word.



    Reactive GUI programming

    This is a deep topic which should be part of a future separate blog article. So, I will just copy/paste here the little information already in the VID documentation:

    Reactions (or reactors; not sure yet which term is the most accurate) are created using the react keyword, directly from Red code or from the VID dialect. The syntax is:

        react <body>

        <body>: regular Red code (block!).

    This creates a new reactor from the body block. When react is used in VID, as a face option, the body can refer to the current face using the face word. When react is used globally, target faces need to be accessed using a name.

    Reactors are part of the reactive programming support in View, whose documentation is pending. In a nutshell, the body block can describe one or more relations between face properties using paths. A set-path setting a face property is processed as a target of the reactor (the face to update), while a path accessing a face property is processed as a source of the reactor (a change to a source triggers a refresh of the reactor’s code).

    Basically, it is about statically defined relations between face properties, without caring when or how the reactive expressions will be evaluated. It will happen automatically, when needed. Here are a couple of examples you can copy/paste in the Red console on Windows:

    Make a circle size change according to slider’s position:

        view [
            sld: slider return
            base 200x200 
                draw  [circle 100x100 5]
                react [face/draw/3: to integer! 100 * sld/data]
        ]
    

    Change the color of a box and a text using 3 sliders:

        to-color: function [r g b][
            color: 0.0.0
            if r [color/1: to integer! 256 * r]
            if g [color/2: to integer! 256 * g]
            if b [color/3: to integer! 256 * b]
            color
        ]
    
        to-text: function [val][form to integer! 0.5 + 255 * any [val 0]]
    
        view [
          style txt: text 40 right
          style value: text "0" 30 right bold
        
          across
          txt "Red:"   R: slider 256 value react [face/text: to-text R/data] return
          txt "Green:" G: slider 256 value react [face/text: to-text G/data] return
          txt "Blue:"  B: slider 256 value react [face/text: to-text B/data]
        
          pad 0x-65 box: base react [face/color: to-color R/data G/data B/data]
          return
        
          pad 0x20 text "The quick brown fox jumps over the lazy dog." font-size 14
            react [face/font/color: box/color]
        ]
    
    



    GUI console

    We have a GUI console now, in addition to the existing CLI one!

    The GUI console is now the default on the Windows platform, and is fully Unicode-aware. The system shell (DOS) console is still available using the --cli option:

        red --cli

    The GUI console is still in its infancy and will be enhanced a lot in future releases. Anyway, so far, it already supports:

    • history of commands
    • completion on words and object paths
    • multi-line editing for blocks, parens, strings, maps and binaries.
    • navigation using HOME and END keys
    • select/copy/paste using the mouse and keyboard shortcuts
    • auto-scrolling when selecting with the mouse out of the boundaries
    • very fast text rendering
    • automatic vertical scroll bar
    • customizable prompt

    Try this cool one-liner for making the prompt more active:

        system/console/prompt: does [append to-local-file system/options/path "> "]

    This is how the GUI console looks:



    Simple I/O support

    In order to have some real fun with the GUI, we have added minimal support for basic blocking I/O actions covering files and HTTP(S) requests. The read and write actions are available now, and their /binary, /lines and /info refinements are working. do, load and save have also been extended to work with files and urls.

    When not using /binary, read and write expect UTF-8 encoded data. Support for ISO8859-1 and other common encoding formats will be available in the next release.

    The full IO will come in 0.7.0 with ports, full networking, async support and many more features.

    Codecs

    Codec system support has made its entrance in this release. It is a very thin layer of encoders/decoders for binary data, integrated with load, save and the actions which rely on the /as refinement. load and save will auto-detect the required encoding format and try to apply the right encoder or decoder to the data.

    Currently, only image format codecs are provided: BMP, PNG, GIF, JPEG. Any kind of encoding (related to I/O) is a good candidate for becoming a codec, so expect a lot of them to be available in the future (both built into the Red runtime and optionally installable).

    For example, downloading a PNG image in memory, and using it is as simple as:

        logo: load http://static.red-lang.org/red-logo.png
         
        big: make font! [name: "Comic" size: 20 color: black]
        draw logo [font big text 10x30 "Red"]
        view [image logo]
    

    Saving a downloaded file locally:

        write/binary %logo.png read/binary http://static.red-lang.org/red-logo.png

    Saving images is not fully functional yet; PNG should be safe, though.


    Objects ownership system

    Red’s objects ownership system is an extension of object’s event support introduced in previous releases. Now, an object can own series it references, even nested ones. When an owned series is changed, the owner object is notified and its on-deep-change* function will be called if available, allowing the object to react appropriately to any change.

    The prototype for on-deep-change* is:

        on-deep-change*: func [owner word target action new index part][...]
    

    The arguments are:

    • owner: object receiving the event (object!)
    • word: object’s word referring to the changed series or nested series (word!)
    • target: the changed series (any-series!)
    • action: name of the action applied (word!)
    • new: new value added to the series (any-type!)
    • index: position at which the series is modified (integer!)
    • part: number of elements changed in the series (integer!)

    Action name can be any of: random, clear, cleared, poke, remove, removed, reverse, sort, insert, take, taken, swap, trim. For actions “destroying” values, two events are generated, one before the “destruction”, one after (hence the presence of cleared, removed, taken words).


    When modifications affect several non-contiguous or all elements, index will be set to -1.
    When modifications affect an undetermined number of elements, part will be set to -1.

    Ownership is set automatically on object creation if on-deep-change* is defined; all referenced series (including nested ones) will then become owned by the object. The modify action has also been implemented to allow setting or clearing ownership after creation time.

    As with on-change*, on-deep-change* is kept hidden when using mold on an object. It is only revealed by mold/all.

    Here is a simple usage example of object ownership. The code below will create a numbers object containing an empty list. You can append only integers to that list, if you fail to do so, a message will be displayed and the invalid element removed from the list. Moreover, the list is always sorted, wherever you insert or poke a new value:

        numbers: object [
            list: []
        
            on-deep-change*: function [owner word target action new index part][
                if all [word = 'list find [poke insert] action][
                    forall target [
                        unless integer? target/1 [
                            print ["Error: Item" mold target/1 "is invalid!"]
                            remove target
                            target: back target
                        ]
                    ]
                    sort list
                ]
            ]
        ]
        
        red>> append numbers/list 3
        == [3]
        red>> insert numbers/list 7
        == [3 7]
        red>> append numbers/list 1
        == [1 3 7]
        red>> insert next numbers/list 8
        == [1 3 7 8]
        red>> append numbers/list 4
        == [1 3 4 7 8]
        red>> append numbers/list "hello"
        Error: Item "hello" is invalid!
        == [1 3 4 7 8]
        red>> numbers
        == make object! [
            list: [1 3 4 7 8]
        ]
    

    Object ownership is deeply used in Red/View, in order to achieve the binding between face objects and the widgets on screen, and the automatic “show-less” synchronization. 


    The work on this is not yet complete; more object events will be provided in future releases, and ownership support will be extended to enable objects to own more datatypes. More documentation will be provided once that work is finished. In the future, its use will be extended to other frameworks and interfaces. Such “reactive objects” will be called “live objects” in Red’s jargon.

    Red/System changes

    • Full stack traces in debug mode on runtime errors.
    • New compilation directive: #u16 (literal UTF-16LE strings support).
    • Added log-b native function for getting the binary logarithm of an integer.
    • Added equal-string? runtime function for testing c-string! equality.
    • Several improvements to some compiler errors reporting accuracy.
    • Improved function! type support.
    • New compilation option: debug-safe? (for safer stack traces)
    • New --catch command-line option for console to open on script errors.
    • Improved compilation speed of variables assignment.
    • Fixes for broken exceptions support on ARM backend.


    Additions to the Red runtime library

    New functions

    • show, view, unview, draw, layout, react, size-text, to-image, do-events, dump-face, within?, overlap?, remove-reactor, set-flag, find-flag?, center-face, insert-event-func, remove-event-func.
    • event?, image?, binary?.
    • debase, wait.
    • request-file, request-dir.
    • read, write, exists?, to-local-file, to-red-file, dirize, clean-path, split-path.
    • what-dir, change-dir, list-dir.
    • also, alter, extract, collect, split, checksum, modify, unset.
    • as-color, as-rgba, as-ipv4.
    • cd, ls, ll, pwd, dir. (console-only)
    Use help in the console to get more information about each function.

    New datatypes

    • binary!
    • event! (Windows only for now)
    • image! (Windows only for now)

    Binary! datatype supports all the series actions. Literal base 2, 16 and 64 encodings are available:

        red>> 2#{11110000}
        == #{F0}
        red>> to string! 64#{SGVsbG8gV29ybGQh}
        == "Hello World!"
    

    Event! and image! are a work-in-progress, though image! is already very capable (documentation will be added soon).

    Other changes

    set and get native improvements:

    If A and B are object values, set A B will now set the values in A from B, for the fields they have in common (regardless of the field definition order in the objects).

    Added /only and /some refinements:

    • /only: set argument block or object as a single value
    • /some: `none` values in the argument block or object are not set

    o Icons and other “resources” are now supported for inclusion in Windows executables. They can be set from Red’s main script header; these are the currently supported options:

    • Icon: file! or block! of files
    • Title: string!
    • Author: string!
    • Version: tuple!
    • Rights: string!
    • Company: string!
    • Notes: string!
    • Comments: string!
    • Trademarks:  string!

    If no Icon option is specified, a default Red icon will be provided.

    o index? action is now allowed on words. It will return the word’s index inside a context, or none if the word is unbound. This is a shortcut for the following idiom:

        index? find words-of <object> <word>
    o Remaining list of changes:

    • Implemented type-checking for infix operators in the interpreter.
    • Implemented native! functions type-checking support when called by compiled code.
    • Added system/state/trace? for enabling/disabling call stack traces on errors.
    • system/options/args gets the command-line string.
    • Added DO/ARGS support.
    • Error reporting for catchable infinite block-rule recursions in Parse.
    • Added limits to Parse stack to avoid eating up all the memory.
    • Auto-conversion of float values in routines.
    • Big series (> 2MB) support enabled.
    • Lexer support for base2 and base64 encoding.
    • DO and LOAD work on file! and url! values now.
    • Added support for cycles detection for MOLD/FORM and comparisons.
    • Support for set operations on hash!.
    • SORT works on paren! now.
    • string! to issue! conversion support.
    • file! to string! conversion support.
    • Allowed float! values as arguments to AS-PAIR and MAKE pair!.
    • Added percent! support in vector! series.
    • Added matching typesets support to Parse.
    • Added PUT support to object! and any-series!.
    • Added support for make bitset!
    • Setting a tuple component to none now eliminates the component.
    • Support for HOME and END key in console.
    • Multiline editing support for paren! and map! in console.
    • Added proper error handling for malformed path evaluation attempts.
    • Scripts using routines will now output a proper error when run from interpreter.
    • Better error handling when decoding UTF-8 string.
    • Allow PROBE to have an unset! value as argument.
    • Support X in addition to x for pair! literal syntax.
    • Prevent empty conditions in conditional iterators from entering an infinite loop.
    • Improved formatting of error messages arguments.
    • Several output improvements to HELP.
    • Allow DIR? to take an url!.
    • Allow system/console/prompt to be an active value (e.g.: a function).

    Ticket processing

    We have closed 260+ tickets since the last release (0.5.4), among which 54 concern issues in previous releases. We currently have ~92.5% of tickets closed overall, which is a bit lower than the usual 95%+, mostly due to the huge amount of new code and features in this release. So we will aim at getting back to a lower number of open tickets for the next release.

    I would like to say a big thank-you to all the contributors who reported issues, tested fixes and sent pull requests. Given the number of newly implemented features, it has been, more than ever, a huge help in making Red better. I would like to especially thank a few people for their outstanding contributions:

    • WiseGenius: for helping us solve the epic crash bug in library generation, for improvement suggestions, and for huge work on testing/reporting GUI issues!
    • nc-x: for help in testing the GUI, making many useful issue reports and improvement suggestions.
    • The “Czech group” (Pekr, Oldes, Rebolek): for their constant support and for taking care of the Red community when I’m not available. 😉
    • PeterWAWood: for bringing us the ~30’000 unit tests, testing framework and constant help and support, since day one!
    • Micha: for issues reporting and kindly providing us an online Mac OSX server for our build farm.

    What’s next?

    Our focus for next releases (0.6.x) will be:

    • Drastically speed up compilation time by pre-compiling the runtime library.
    • Simple garbage collector integration.
    • Improvement of our Windows GUI backend.
    • First usable versions of MacOSX and Android GUI backends.
    • Integration of our Android APK building toolchain in master branch.
    • Improvements for reactive GUI programming support.
    • Custom widgets creation framework.
    • Animation and timers support.
    • More documentations and tutorials for beginners.
    • More code demos.

    See the details for the next releases on our public Trello board, and come to our chat room to ask any questions.

    In the meantime, enjoy the new Red, I hope to see many impressive GUI demos and apps in the next weeks. 😉

