The Last Audio Cassette Factory [video]

Published on Sep. 4, 2015

Sept. 1 — Springfield, MO-based National Audio Company opened in 1969, and when other major manufacturers abandoned tape manufacturing for CD production in the late 1990s, the company held on tight. Now the cassette maker is pumping out more cassettes than ever before. (Video By: Jeniece Pettitt, Ryo Ikegami)



Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/XwsHSzqehu4/watch

Original article

Apple’s iPhone installment plan threatens carriers’ ties to customers

Apple’s announcement Wednesday of its own iPhone installment plan, tucked toward the end of what CEO Tim Cook called a “monster roll-out,” was the big deal in the morning-after talk about smartphones, analysts said.

“If you like to have a new iPhone every single year, this is the best way to do that,” proclaimed Philip Schiller, Apple’s head of marketing, during the Wednesday event. He spent just seconds outlining the iPhone Upgrade Program.

Customers walk into an Apple retail store after making an appointment — the program will be initially available only in the U.S., and is restricted to Apple’s brick-and-mortar outlets — select an unlocked iPhone, choose a carrier, and then pay for the device over a 24-month stretch. That phone will be covered by the AppleCare+ aftermarket warranty, which includes accidental damage coverage.



Original URL: http://www.computerworld.com/article/2983237/smartphones/apples-iphone-installment-plan-threatens-carriers-ties-to-customers.html#tk.rss_all

Original article

React v0.14 Release Candidate

We’re happy to announce our first release candidate for React 0.14! We gave you a sneak peek at the upcoming changes in July, but we’ve now further stabilized the release and we’d love for you to try it out before we release the final version.

Let us know if you run into any problems by filing issues on our GitHub repo.

Installation

We recommend using React from npm and using a tool like browserify or webpack to build your code into a single bundle:

  • npm install --save react@0.14.0-rc1
  • npm install --save react-dom@0.14.0-rc1

Remember that by default, React runs extra checks and provides helpful warnings in development mode. When deploying your app, set the NODE_ENV environment variable to production to use the production build of React which does not include the development warnings and runs significantly faster.
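
With webpack, for example, this is typically done with DefinePlugin so that minifiers can strip the development-only code; a minimal sketch (envify fills the same role for browserify):

    // webpack.config.js
    var webpack = require('webpack');

    module.exports = {
      entry: './src/index.js',
      output: { filename: 'bundle.js' },
      plugins: [
        // Replaces process.env.NODE_ENV with "production" at build time,
        // so React's development-only warnings and checks compile away.
        new webpack.DefinePlugin({
          'process.env.NODE_ENV': JSON.stringify('production')
        })
      ]
    };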

If you can’t use npm yet, we also provide pre-built browser builds for your convenience.

These builds are also available in the react and react-dom packages on bower.

Changelog

Major changes

  • Two Packages: React and React DOM

    As we look at packages like react-native, react-art, react-canvas, and react-three, it has become clear that the beauty and essence of React has nothing to do with browsers or the DOM.

    To make this more clear and to make it easier to build more environments that React can render to, we’re splitting the main react package into two: react and react-dom. This paves the way to writing components that can be shared between the web version of React and React Native. We don’t expect all the code in an app to be shared, but we want to be able to share the components that do behave the same across platforms.

    The react package contains React.createElement, .createClass, .Component, .PropTypes, .Children, and the other helpers related to elements and component classes. We think of these as the isomorphic or universal helpers that you need to build components.

    The react-dom package has ReactDOM.render, .unmountComponentAtNode, and .findDOMNode. In react-dom/server we have server-side rendering support with ReactDOMServer.renderToString and .renderToStaticMarkup.

    var React = require('react');
    var ReactDOM = require('react-dom');

    var MyComponent = React.createClass({
      render: function() {
        return <div>Hello World</div>;
      }
    });

    // `node` is any DOM element to render into, for example:
    var node = document.getElementById('container');
    ReactDOM.render(<MyComponent />, node);


    We’ve published the automated codemod script we used at Facebook to help you with this transition.

    The add-ons have moved to separate packages as well: react-addons-clone-with-props, react-addons-create-fragment, react-addons-css-transition-group, react-addons-linked-state-mixin, react-addons-perf, react-addons-pure-render-mixin, react-addons-shallow-compare, react-addons-test-utils, react-addons-transition-group, and react-addons-update, plus ReactDOM.unstable_batchedUpdates in react-dom.

    For now, please use matching versions of react and react-dom in your apps to avoid versioning problems.

  • DOM node refs

    The other big change we’re making in this release is exposing refs to DOM components as the DOM node itself. That means: we looked at what you can do with a ref to a React DOM component and realized that the only useful thing you can do with it is call this.refs.giraffe.getDOMNode() to get the underlying DOM node. In this release, this.refs.giraffe is the actual DOM node. Note that refs to custom (user-defined) components work exactly as before; only the built-in DOM components are affected by this change.

    var Zoo = React.createClass({
      render: function() {
        return <div>Giraffe name: <input ref="giraffe" /></div>;
      },
      showName: function() {
        // Previously: var input = this.refs.giraffe.getDOMNode();
        var input = this.refs.giraffe;
        alert(input.value);
      }
    });
    

    This change also applies to the return result of ReactDOM.render when passing a DOM node as the top component. As with refs, this change does not affect custom components. With these changes, we’re deprecating .getDOMNode() and replacing it with ReactDOM.findDOMNode (see below).
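
    For cases where you still need a DOM node from a composite component, a minimal sketch of the migration (the Focuser component is illustrative, reusing the React and ReactDOM requires from the example above):

    var Focuser = React.createClass({
      componentDidMount: function() {
        // Previously: var node = this.getDOMNode();
        var node = ReactDOM.findDOMNode(this);
        node.focus();
      },
      render: function() {
        return <input />;
      }
    });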

  • Stateless function components

    In idiomatic React code, most of the components you write will be stateless, simply composing other components. We’re introducing a new, simpler syntax for these components where you can take props as an argument and return the element you want to render:

    // Using an ES2015 (ES6) arrow function:
    var Aquarium = (props) => {
      var fish = getFish(props.species);
      return <Tank>{fish}</Tank>;
    };
    
    // Or with destructuring and an implicit return, simply:
    var Aquarium = ({species}) => (
      <Tank>
        {getFish(species)}
      </Tank>
    );
    
    // Then use: <Aquarium species="rainbowfish" />

    This pattern is designed to encourage the creation of these simple components that should comprise large portions of your apps. In the future, we’ll also be able to make performance optimizations specific to these components by avoiding unnecessary checks and memory allocations.

  • Deprecation of react-tools

    The react-tools package and JSXTransformer.js browser file have been deprecated. You can continue using version 0.13.3 of both, but we no longer support them and recommend migrating to Babel, which has built-in support for React and JSX.

  • Compiler optimizations

    React now supports two compiler optimizations that can be enabled in Babel. Both of these transforms should be enabled only in production (e.g., just before minifying your code) because although they improve runtime performance, they make warning messages more cryptic and skip important checks that happen in development mode, including propTypes.

    Inlining React elements: The optimisation.react.inlineElements transform converts JSX elements to object literals like {type: 'div', props: ...} instead of calls to React.createElement.

    Constant hoisting for React elements: The optimisation.react.constantElements transform hoists element creation to the top level for subtrees that are fully static, which reduces calls to React.createElement and the resulting allocations. More importantly, it tells React that the subtree hasn’t changed so React can completely skip it when reconciling.
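
    Conceptually, the constant-elements transform has roughly the following effect (a hand-written sketch, not the transform’s exact output):

    // Before: the element is re-created on every render call.
    var Divider = React.createClass({
      render: function() {
        return <hr className="divider" />;
      }
    });

    // After (roughly): creation is hoisted to run once at module scope.
    // Because the element's identity never changes, React can skip
    // reconciling that subtree entirely.
    var _hoistedHr = React.createElement('hr', {className: 'divider'});
    var Divider = React.createClass({
      render: function() {
        return _hoistedHr;
      }
    });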

Breaking changes

As always, we have a few breaking changes in this release. Whenever we make large changes, we warn for at least one release so you have time to update your code. The Facebook codebase has over 15,000 React components, so on the React team, we always try to minimize the pain of breaking changes.

These three breaking changes had a warning in 0.13, so you shouldn’t have to do anything if your code is already free of warnings:

  • The props object is now frozen, so mutating props after creating a component element is no longer supported. In most cases, React.cloneElement should be used instead. This change makes your components easier to reason about and enables the compiler optimizations mentioned above.
  • Plain objects are no longer supported as React children; arrays should be used instead. You can use the createFragment helper to migrate, which now returns an array (see the sketch after this list).
  • Add-Ons: classSet has been removed. Use classnames instead.
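
A minimal sketch of the createFragment migration mentioned above (the pane components are hypothetical):

    var createFragment = require('react-addons-create-fragment');

    // Before 0.14 you could pass a keyed object directly as children:
    //   <div>{{left: <LeftPane />, right: <RightPane />}}</div>
    // createFragment turns the same keyed object into a keyed array of
    // children, which is what React now expects:
    var Split = React.createClass({
      render: function() {
        return <div>{createFragment({left: <LeftPane />, right: <RightPane />})}</div>;
      }
    });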

And these two changes did not warn in 0.13 but should be easy to find and clean up:

  • React.initializeTouchEvents is no longer necessary and has been removed completely. Touch events now work automatically.
  • Add-Ons: Due to the DOM node refs change mentioned above, TestUtils.findAllInRenderedTree and related helpers are no longer able to take a DOM component, only a custom component.

New deprecations, introduced with a warning

  • Due to the DOM node refs change mentioned above, this.getDOMNode() is now deprecated and ReactDOM.findDOMNode(this) can be used instead. Note that in most cases, calling findDOMNode is now unnecessary – see the example above in the “DOM node refs” section.

    If you have a large codebase, you can use our automated codemod script to change your code automatically.

  • setProps and replaceProps are now deprecated. Instead, call ReactDOM.render again at the top level with the new props.

  • ES6 component classes must now extend React.Component in order to enable stateless function components. The ES3 module pattern will continue to work.

  • Reusing and mutating a style object between renders has been deprecated. This mirrors our change to freeze the props object.

  • Add-Ons: cloneWithProps is now deprecated. Use React.cloneElement instead (unlike cloneWithProps, cloneElement does not merge className or style automatically; you can merge them manually if needed).

  • Add-Ons: To improve reliability, CSSTransitionGroup will no longer listen to transition events. Instead, you should specify transition durations manually using props such as transitionEnterTimeout={500}.
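
    A minimal sketch of the explicit timeouts (the component and durations are illustrative; keep the values in sync with your CSS transition durations):

    var ReactCSSTransitionGroup = require('react-addons-css-transition-group');

    var Fader = React.createClass({
      render: function() {
        return (
          <ReactCSSTransitionGroup
            transitionName="example"
            transitionEnterTimeout={500}
            transitionLeaveTimeout={300}>
            {this.props.children}
          </ReactCSSTransitionGroup>
        );
      }
    });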

Notable enhancements

  • Added React.Children.toArray which takes a nested children object and returns a flat array with keys assigned to each child. This helper makes it easier to manipulate collections of children in your render methods, especially if you want to reorder or slice this.props.children before passing it down (see the sketch after this list). In addition, React.Children.map now returns plain arrays too.
  • React uses console.error instead of console.warn for warnings so that browsers show a full stack trace in the console. (Our warnings appear when you use patterns that will break in future releases and for code that is likely to behave unexpectedly, so we do consider our warnings to be “must-fix” errors.)
  • Previously, including untrusted objects as React children could result in an XSS security vulnerability. This problem should be avoided by properly validating input at the application layer and by never passing untrusted objects around your application code. As an additional layer of protection, React now tags elements with a specific ES2015 (ES6) Symbol in browsers that support it, in order to ensure that React never considers untrusted JSON to be a valid element. If this extra security protection is important to you, you should add a Symbol polyfill for older browsers, such as the one included by Babel’s polyfill.
  • When possible, React DOM now generates XHTML-compatible markup.
  • React DOM now supports these standard HTML attributes: capture, challenge, inputMode, is, keyParams, keyType, minLength, summary, wrap. It also now supports these non-standard attributes: autoSave, results, security.
  • React DOM now supports these SVG attributes, which render into namespaced attributes: xlinkActuate, xlinkArcrole, xlinkHref, xlinkRole, xlinkShow, xlinkTitle, xlinkType, xmlBase, xmlLang, xmlSpace.
  • The image SVG tag is now supported by React DOM.
  • In React DOM, arbitrary attributes are supported on custom elements (those with a hyphen in the tag name or an is="..." attribute).
  • React DOM now supports these media events on audio and video tags: onAbort, onCanPlay, onCanPlayThrough, onDurationChange, onEmptied, onEncrypted, onEnded, onError, onLoadedData, onLoadedMetadata, onLoadStart, onPause, onPlay, onPlaying, onProgress, onRateChange, onSeeked, onSeeking, onStalled, onSuspend, onTimeUpdate, onVolumeChange, onWaiting.
  • Many small performance improvements have been made.
  • Many warnings show more context than before.
  • Add-Ons: A shallowCompare add-on has been added as a migration path for PureRenderMixin in ES6 classes.
  • Add-Ons: CSSTransitionGroup can now use custom class names instead of appending -enter-active or similar to the transition name.
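
A minimal sketch of the React.Children.toArray helper mentioned above (the ReverseList component is illustrative):

    var ReverseList = React.createClass({
      render: function() {
        // toArray flattens nested children and assigns a key to each one,
        // so the array can be reordered or sliced without key warnings.
        var children = React.Children.toArray(this.props.children);
        return <ol>{children.reverse()}</ol>;
      }
    });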

New helpful warnings

  • React DOM now warns you when nesting HTML elements invalidly, which helps you avoid surprising errors during updates.
  • Passing document.body directly as the container to ReactDOM.render now gives a warning as doing so can cause problems with browser extensions that modify the DOM.
  • Using multiple instances of React together is not supported, so we now warn when we detect this case to help you avoid running into the resulting problems.

Notable bug fixes

  • Click events are handled by React DOM more reliably in mobile browsers, particularly in Mobile Safari.
  • SVG elements are created with the correct namespace in more cases.
  • React DOM now renders elements with multiple text children properly and renders elements on the server with the correct option selected.
  • When two separate copies of React add nodes to the same document (including when a browser extension uses React), React DOM tries harder not to throw exceptions during event handling.
  • Using non-lowercase HTML tag names in React DOM (e.g., React.createElement('DIV')) no longer causes problems, though we continue to recommend lowercase for consistency with the JSX tag name convention (lowercase names refer to built-in components, capitalized names refer to custom components).
  • React DOM understands that these CSS properties are unitless and does not append “px” to their values: animationIterationCount, boxOrdinalGroup, flexOrder, tabSize, stopOpacity.
  • Add-Ons: When using the test utils, Simulate.mouseEnter and Simulate.mouseLeave now work.
  • Add-Ons: ReactTransitionGroup now correctly handles multiple nodes being removed simultaneously.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/VTcIb4Q-Mm8/react-v0.14-rc1.html

Original article

Ashley Madison coding blunder made 11M passwords easy to crack

Until today, the creators of the hacked AshleyMadison.com infidelity website appeared to have done at least one thing well: protect user passwords with a strong hashing algorithm. That belief, however, was painfully disproved by a group of hobbyist password crackers.

The 16-man team, called CynoSure Prime, sifted through the Ashley Madison source code that was posted online by hackers and found a major error in how passwords were handled on the website.

They claim that this allowed them to crack over 11 million of the 36 million password hashes stored in the website’s database, which has also been leaked.

A few weeks ago such a feat seemed impossible because security experts quickly observed from the leaked data that Ashley Madison stored passwords in hashed form — a common security practice — using a cryptographic function called bcrypt.
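
For context, bcrypt hashing in Node looks roughly like this (a minimal sketch using the npm bcrypt package; the password and cost factor are illustrative):

    var bcrypt = require('bcrypt');

    // Each hash embeds a random salt and a cost factor; a cost of 12
    // makes every hash (and every cracking guess) deliberately slow.
    var hash = bcrypt.hashSync('correct horse battery staple', 12);

    // Verification re-derives the hash from the stored salt and compares.
    console.log(bcrypt.compareSync('correct horse battery staple', hash)); // true
    console.log(bcrypt.compareSync('wrong guess', hash));                  // false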



Original URL: http://www.computerworld.com/article/2982959/cybercrime-hacking/ashley-madison-coding-blunder-made-11m-passwords-easy-to-crack.html#tk.rss_all

Original article

Time Traveling in Node.js Notebooks

As part of a two-post series, I’d like to share some in-depth details behind one of
Tonic’s most-asked-about features: time traveling. In this first installment I’ll be
focusing mainly on how the back-end works: specifically, how we’re able to not only
rewind the state of your code, but any changes to the filesystem and spawned subprocesses
as well. From a high level, this allows for a lot of cool functionality like real undo
in a REPL. However, we’ll also see how time traveling is actually essential to
the way notebooks fundamentally work in Tonic.

Why Time Traveling is Important

One of the earliest design goals for Tonic was that notebooks should just be node modules.
There should be no real need for a “tonic file format” per se: you should be able to download a notebook
as a node module and get the same results by running it yourself in a vanilla node distribution.
And it should follow that you should of course also be able to import one notebook from another,
making them composable and truly shareable. Kind of like a cross between Wikipedia and GitHub, we wanted
to transform code files into living documents where you could dig deeper into connected, live,
literate code:

View this example on Tonic

This is actually a pretty big departure from the way many existing notebook environments
currently work. Systems like Jupyter (which we took great inspiration from) function more
as UI layers that communicate with a background REPL, and so they unfortunately also
suffer from many of the same runtime limitations of traditional REPLs. Namely, code
that’s been run can’t really be “un-run” or changed.

Environments like this are fundamentally append-only. Jupyter allows you to edit
and re-run old code cells, but while it may visually appear that this new code has
“replaced” the old code like in Tonic, in actuality it is simply being run on top
of everything that’s already been executed. This means that the document the user sees
can quickly diverge from the actual document that was run. Lines of code will appear
in a different order than they were run, and some lines may be missing altogether:

With such a model, features like requiring notebooks or downloading truly reproducible
representations become difficult to accomplish and even to understand. How should a
document be interpreted when it is imported by another notebook? Should it execute
the cells in the order they appear, or the order they were originally run, and what
is to be done with cells that no longer appear in the document at all but were a
part of its execution history?

So while having time traveling integrated seamlessly certainly provides a more
intuitive and direct experience when editing code in notebooks, it also
quickly became apparent to us that it would be absolutely necessary
if we wanted to truly deliver on a living document that could be easily shared,
composed, and re-run.

First Approaches

Explaining the ideal behavior of our notebooks is relatively simple: regardless of how you enter or edit
cells, a notebook should show the results of executing the file from top to bottom, the same way node does.
The easiest way to accomplish this, of course, is to just re-run the entire document from the start after
every change. This is in fact how the “rewind” feature works in bpython. Of course, this quickly
becomes unbearably slow with each additional cell you add, especially if the earlier cells perform
any networking or I/O. Additionally, if you had made use of any random values, they would change
out from under you with every edit!

A different approach is to try to serialize the state of the runtime, such that you can unserialize it
later to get things back to the same state. If you’ve ever worked in a pure functional programming language,
you’re probably familiar with this strategy and it can be pretty successful. On an application level, it is
trivial to simply save your top level immutable data structures, and rewind by simply loading an older one.
However, in JavaScript it is non-trivial to even access the entire state of your program. Any variables
trapped in a closure are by design completely opaque to the outside world, and thus can’t really be saved
or restored. Additionally, even in a pure functional context it’s difficult to store the state of things
outside your application.

You can however imagine taking a step back and applying this same idea to your entire computer, instead
of just your runtime. What if you simply took the current state of your memory and wrote it out to disk?
Then, you could read it back at a later time and re-bootstrap the entire process. This is a very similar
idea to what your computer does when it goes to sleep, or even kind of what it does when it pages memory
out of inactive processes. In both cases it then reads that memory back in to “pick up where it left off”.

It turns out you can exercise some control over this by using a virtual machine. From a high level, a virtual
machine hosts an entire OS inside a process. This virtualized OS could then run a REPL setup similar to
what Jupyter uses:

Now after running each cell, you can have the virtual machine create a “snapshot” of the computer.
A snapshot captures the entire contents of the computer’s memory, as well as the current state of the
filesystem, and stores it in a file. At this point you essentially have “frozen” states of the entire computer
for every cell the user has ever run. To rewind, simply restart the computer from an old checkpoint and run
the new code from there as if the other cells never happened.

This setup is ideal from a behavior perspective, and even performs alright for a few notebooks. However,
this doesn’t scale. Virtual machines are pretty heavy, and running one for every notebook every user opens
quickly becomes infeasible.

Checkpoint and Restore In User Space (CRIU)

Fortunately we were able to take a different approach thanks to an ambitious open source project
called CRIU (which stands for checkpoint and restore in user space). The name says it all.
CRIU aims to give you the same checkpointing capability for a process tree that virtual machines
give you for an entire computer. This is no small task: CRIU incorporates a lot of lessons learned
from earlier attempts at similar functionality, and years of discussion and work with the Linux
kernel team. The most common use case of CRIU is to allow migrating containers from one computer
to another. If you happened to be at DockerCon this year, you might have seen their demo where
they actually did this with a running game server.

Who would have guessed that it could also be used to make JavaScript even more dynamic?
The next step was to get CRIU working well with Docker. The CRIU developers have been incredibly supportive
in this task, and Ross has been working closely with them to make this a first class feature of Docker. So
with this final piece of the puzzle, we could now run our notebooks in Docker containers (which are
considerably more light-weight than a full-blown virtual machine). Just like with a virtual machine,
after a given code cell is run, we can immediately checkpoint the container (which stores the
memory state of node, as well as any subprocesses it may have spawned) and commit the container
(which stores the state of the filesystem). And just as before, if the user wishes to rewind, we simply
resurrect the container using these two pieces of information.
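
At the process level, the raw CRIU flow looks roughly like the following (a minimal sketch driven from Node; the flags follow CRIU’s documented examples and vary by version, and our real orchestration goes through Docker as described above):

    var execFileSync = require('child_process').execFileSync;

    // Checkpoint the process tree rooted at `pid` into a directory of
    // image files, leaving the original processes running.
    function checkpoint(pid, imageDir) {
      execFileSync('criu', ['dump', '-t', String(pid),
                            '-D', imageDir, '--shell-job', '--leave-running']);
    }

    // Later, resurrect the whole tree (memory, file descriptors,
    // subprocesses) from those images, detached from our terminal.
    function restore(imageDir) {
      execFileSync('criu', ['restore', '-D', imageDir, '--shell-job', '-d']);
    }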

Conclusion

Obviously a lot of infrastructure and operations work goes alongside this to properly
support every user spinning up their own container, but this blog post gives a pretty
good overview of the general concepts we use to pull off time traveling on the back-end in Tonic. Much of
this work has been done in open source, and we’d love for more people to contribute to CRIU and to try
other creative uses of the technology. It really is an amazing piece of software and we are incredibly
thankful to that team.

In the next post I’ll be covering the front-end aspects of making this all come together. There are a lot
of tricky edge cases in JavaScript that require some interesting semantic work. If at any point you
found yourself wondering how hoisting could work in a top-to-bottom environment that needs to behave
the same as a module, then you won’t want to miss it!

Also make sure to try some of this stuff out over at Tonic.

Discuss on Hacker News


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/t1tLfYxwPd0/time-traveling-in-node.js-notebooks.html

Original article

The Role of Libraries in Access to Justice Initiatives

Many local public libraries as well as law libraries are actively involved in access to justice initiatives.

In a recent post entitled Justice at your library? on the website of PLE Learning Exchange Ontario, Michele Leering, the Executive Director with the Community Advocacy & Legal Centre in Belleville, Ontario, writes about one such project, the Librarians & Justice partnership in southeastern Ontario.

She also provides a link to a page about PLE for librarians [PLE = public legal education]:

“Library staff in Ontario are ideally placed to serve as key intermediaries in distributing legal information and referrals to library patrons. Public libraries, law libraries and courthouse libraries host dozens or hundreds of people a day, many of whom might be dealing with legal problems.”

As an example of the many initiatives taking place in Canada, the Community Advocacy & Legal Centre is organizing a meeting on Thursday, October 29 at the offices of the Law Society of Upper Canada on Queen St. in downtown Toronto to discuss how justice partners and librarians can together enhance access to legal services in Ontario’s rural and remote communities.

The meeting will go from 10 in the morning until 4 in the afternoon. Registration is free but places are limited. The deadline for registering is September 30.

Organizers want to:

  • Raise awareness about the prevalence of common legal problems with significant impacts if left unresolved
  • Learn about the need for credible, plain-language legal information, driven by a growing access-to-justice crisis in Canada.
  • Learn about interesting initiatives in Ontario, across Canada, and in other countries such as Australia and the U.S., and hear from B.C. Courthouse Librarian Janet Freeman about the innovative Law Matters program, Wikibooks, and ClickLaw website initiatives.
  • Provide fodder and inspiration for potential new projects and new prototypes to make a real difference in our communities.
  • Help you take action with new partners.


Original URL: http://www.slaw.ca/2015/09/10/the-role-of-libraries-in-access-to-justice-initiatives/

Original article

New human-like species discovered

By Pallab Ghosh
Science correspondent, BBC News, Johannesburg

Image caption: Homo naledi has a mixture of primitive and more modern features (Naledi skeleton; image copyright John Hawks)

Scientists have discovered a new human-like species in a burial chamber deep in a cave system in South Africa.

The discovery of 15 partial skeletons is the largest single discovery of its type in Africa.

The researchers claim that the discovery will change ideas about our human ancestors.

The studies, which have been published in the journal eLife, also indicate that these individuals were capable of ritual behaviour.

Image caption: Homo naledi may have looked something like this (image copyright National Geographic)

The species, which has been named naledi, has been classified in the grouping, or genus, Homo, to which modern humans belong.

The researchers who made the find have not been able to find out how long ago these creatures lived – but the scientist who led the team, Prof Lee Berger, told BBC News that he believed they could be among the first of our kind (genus Homo) and could have lived in Africa up to three million years ago.

Like all those working in the field, he is at pains to avoid the term “missing link”. Prof Berger says naledi could be thought of as a “bridge” between more primitive bipedal primates and humans.

“We’d gone in with the idea of recovering one fossil. That turned into multiple fossils. That turned into the discovery of multiple skeletons and multiple individuals.

“And so by the end of that remarkable 21-day experience, we had discovered the largest assemblage of fossil human relatives ever discovered in the history of the continent of Africa. That was an extraordinary experience.”

Prof Chris Stringer of the Natural History Museum said naledi was “a very important discovery”.

“What we are seeing is more and more species of creatures that suggests that nature was experimenting with how to evolve humans, thus giving rise to several different types of human-like creatures originating in parallel in different parts of Africa. Only one line eventually survived to give rise to us,” he told BBC News.

I went to see the bones, which are kept in a secure room at Witwatersrand University. The door to the room looks like one that would seal a bank vault. As Prof Berger turned the large lever on the door, he told me that our knowledge of very early humans is based on partial skeletons and the occasional skull.

The haul of 15 partial skeletons includes both males and females of varying ages – from infants to elderly. The discovery is unprecedented in Africa and will shed more light on how the first humans evolved.

“We are going to know everything about this species,” Prof Berger told me as we walked over to the remains of H. naledi.

“We are going to know when its children were weaned, when they were born, how they developed, the speed at which they developed, the difference between males and females at every developmental stage from infancy, to childhood to teens to how they aged and how they died.”

A chronology of human evolution

Ardipithecus ramidus (4.4 million years ago): Fossils were discovered in Ethiopia in the 1990s. The pelvis shows adaptations to both tree climbing and upright walking.

Australopithecus afarensis (3.9 – 2.9 million years ago): The famous “Lucy” skeleton belongs to this species of human relative. So far, fossils of this species have only been found in East Africa. Several traits in the skeleton suggest afarensis walked upright, but they may have spent some time in the trees.

Homo habilis (2.8 – 1.5 million years ago): This human relative had a slightly larger braincase and smaller teeth than the australopithecines or older species, but retains many more primitive features, such as long arms.

Homo naledi (age unknown, but researchers say it could be as old as three million years): The new discovery has small, modern-looking teeth, human-like feet, but more primitive fingers and a small braincase.

Homo erectus (1.9 million years ago – unknown): Homo erectus had a modern body plan that was almost indistinguishable from ours, but it had a smaller brain than a modern person’s combined with a more primitive face.

Homo neanderthalensis (200,000 – 40,000 years ago): The Neanderthals were a side-group to modern humans, inhabiting western Eurasia before our species left Africa. They were shorter and more muscular than modern people but had slightly larger brains.

Homo sapiens (200,000 years ago – present): Modern humans evolved in Africa from a predecessor species known as Homo heidelbergensis. A small group of Homo sapiens left Africa 60,000 years ago and settled the rest of the world, replacing the other human species they encountered (with a small amount of interbreeding).

I was astonished to see how well preserved the bones were. The skull, teeth and feet looked as if they belonged to a human child – even though the skeleton was that of an elderly female.

Its hand looked human-like too, up to its fingers, which curl around a bit like those of an ape.

Homo naledi is unlike any primitive human found in Africa. It has a tiny brain, about the size of a gorilla’s, and a primitive pelvis and shoulders. But it is put into the same genus as humans because of the more progressive shape of its skull, relatively small teeth, characteristic long legs and modern-looking feet.

“I saw something I thought I would never see in my career,” Prof Berger told me.

“It was a moment that 25 years as a paleoanthropologist had not prepared me for.”

One of the most intriguing questions raised by the find is how the remains got there.

I visited the site of the find, the Rising Star cave, an hour’s drive from the university in an area known as the Cradle of Humankind. The cave leads to a narrow underground tunnel through which some of Prof Berger’s team crawled in an expedition funded by the National Geographic Society.

Small women were chosen because the tunnel was so narrow. They crawled through darkness lit only by their head torches on a precarious 20 minute-long journey to find a chamber containing hundreds of bones.

Among them was Marina Elliott. She showed me the narrow entrance to the cave and then described how she felt when she first saw the chamber.

“The first time I went to the excavation site I likened it to the feeling that Howard Carter must have had when he opened Tutankhamen’s tomb – that you are in a very confined space and then it opens up and all of a sudden all you can see are all these wonderful things – it was incredible,” she said.

Ms Elliott and her colleagues believe that they have found a burial chamber. The Homo naledi people appear to have carried individuals deep into the cave system and deposited them in the chamber – possibly over generations.

If that is correct, it suggests naledi was capable of ritual behaviour and possibly symbolic thought – something that until now had only been associated with much later humans within the last 200,000 years.

Prof Berger said: “We are going to have to contemplate some very deep things about what it is to be human. Have we been wrong all along about this kind of behaviour that we thought was unique to modern humans?

“Did we inherit that behaviour from deep time and is it something that (the earliest humans) have always been able to do?”

Image caption: The team of scientists who discovered the Homo naledi remains pose for a picture (image copyright John Hawks)

Prof Berger believes that the discovery of a creature that has such a mix of modern and primitive features should make scientists rethink the definition of what it is to be human – so much so that he himself is reluctant to describe naledi as human.

Other researchers working in the field, such as Prof Stringer, believe that naledi should be described as a primitive human. But he agrees that current theories need to be re-evaluated and that we have only just scratched the surface of the rich and complex story of human evolution.

Follow Pallab on Twitter


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/dJxOza6AiTs/science-environment-34192447

Original article
