How to rip Vinyl

This tutorial outlines a set of example steps using Audacity to digitize LPs to create files that are ready for CD creation, loading into a digital jukebox or portable music player. There is no fixed “right” way of working – there are many alternatives; like any recipe, it can be adapted to suit your personal needs.

This workflow does not at any stage require saving an Audacity project (though you may wish to do so if you need to interrupt your work). Your final goal will be to export WAV files for CD creation, or other file formats that may better suit your needs.

While all of the processing in this tutorial is carried out using Audacity, some users may prefer to use alternative software for specific sub-tasks like noise removal and the removal of clicks and pops (Audacity’s Click Removal may not give as good a result as other software).

For more details of the steps involved in this workflow please see the tutorial set Copying tapes, LPs or MiniDiscs to CD.

Workflow

  1. Audacity Settings
  2. Clean the LP
  3. Flattening a warped LP
  4. Recording levels
  5. Capture
  6. Raw master backup
  7. Remove DC offset
  8. Reduce subsonic rumble and low frequency noise
  9. Remove clicks and pops
  10. Reduce hiss and high frequency noise
  11. Place the song labels
  12. Silence the inter-track gaps
  13. Fade Ins/Outs
  14. Adjust Label positions
  15. Track names
  16. Advanced labeling techniques
  17. Amplitude adjustment
  18. Compression
  19. Export a set of WAVs
  20. Export labels
  21. Backup
  22. Alternative software

Making vinyl-to-digital transfers is a skill: the more you do, the more expert you will become.

  • Consider starting out with some LPs or singles that you care less about; that way you will not need to go back and redo important transfers made while you were still learning.
  • Start with a recording that you are very familiar with; your first goal will be to ensure that you have as perfect a digital copy of the material as possible.
  • Clean-up steps are optional and need only be applied if your recording requires them.

Audacity settings

Work with Audacity set to a project rate of 44100 Hz and 32-bit sample format (these are the default quality settings). You may use 16-bit if you prefer; it will give smaller working file sizes but you may lose a little quality in some of the processes. Export WAV files at 44100 Hz 16-bit PCM stereo, the standard required for burning CDs; this will also produce WAVs which are accepted for import by iTunes (and most other music player software).

Clean the LP

Cleaning the LP carefully and thoroughly before recording it will reduce the number of clicks and pops and will improve the quality of your recording.

Prepare a solution of lukewarm water mixed with a little dish washing detergent. Use a soft, clean washcloth (or piece of velvet) to carefully wipe the LP’s surfaces – try not to get the label wet. The detergent will float away all the greasy fingerprints – a gentle scrubbing motion will help. Rinse in lukewarm water until all the detergent is gone. Finally, rinse in distilled water (which dries and leaves no residue behind). Air dry your record thoroughly before playing – do not be tempted to play the record “wet” as this may damage the LP and possibly your stylus.

There are a number of commercially available cleaning fluids and cleaning machines that you may wish to consider:

  • KAB EV-1 Record Cleaner & KAB cleaning solution
  • Disco Antistat
  • Discwasher

Flattening a warped LP

If an LP is warped it may not track or play properly; if so, you could try to ease the warps in the vinyl. Place the album in its sleeve and cover between two sheets of flat wood, plywood, glass or similar in a warm room and place some heavy (but not too heavy) weight on top. Leave in the warm room for several days and then try playing it.

Alternatively, stabilizing rims or clamps on a conventional turntable can be used to safely play all but the most extremely warped LPs (some high-end turntables come supplied with such a clamp). A more expensive alternative is to use a laser turntable.

Recording levels

Make a test recording of portions of the LP (or even a whole side) to check the levels – see the tutorial page on making a test recording for details. It is important to avoid any clipping during the recording! Aim for a maximum peak of around –6 dB (or 0.5 if you have your meter set to linear rather than dB).
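
The relationship between the dB and linear meter scales is a simple logarithm. A quick sketch in Python (the helper names are illustrative, not part of Audacity):

```python
import math

def db_to_linear(db):
    """Convert a peak level in dB (relative to full scale) to linear amplitude."""
    return 10 ** (db / 20)

def linear_to_db(amplitude):
    """Convert a linear amplitude (1.0 = full scale) to dB."""
    return 20 * math.log10(amplitude)

# The -6 dB target peak corresponds to roughly 0.5 on a linear meter:
print(round(db_to_linear(-6), 3))  # 0.501
```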

Capture

Record both sides into the project before doing any processing. You can either stop the recording after the first side using the Stop button and then use Append Record (SHIFT + R) when you are ready to record the second side, or pause at the end of the first side using the Pause button (or P) and press Pause again when you are ready to continue.

After recording you may find it helpful to zoom out to display the entire recording in the Audacity window.

You may prefer to work with a single side of an LP at a time as that gives a smaller working set.

Raw master backup

Export a single WAV for this recording at 32-bit float (not 16-bit).

Retain this WAV file as a maximum quality “raw capture” file that you can import back into Audacity later to start over (if you damage the project while working on it).

Remove DC offset

DC offset can occur at the recording stage so that the recorded waveform is not centered on the horizontal line at 0.0 amplitude. If this is the case with your recordings, see the Normalize page for how to use Normalize to remove DC offset and how to check if your Windows sound device can perform this correction automatically.
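
Conceptually, removing DC offset just re-centers the waveform by subtracting its average value. A rough NumPy sketch of the idea (Audacity's Normalize does this for you):

```python
import numpy as np

def remove_dc_offset(samples):
    """Re-center a waveform on the 0.0 line by subtracting its mean value."""
    samples = np.asarray(samples, dtype=np.float64)
    return samples - samples.mean()

# A 440 Hz tone riding on a +0.1 DC offset:
t = np.linspace(0, 1, 44100, endpoint=False)
offset_signal = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.1
centered = remove_dc_offset(offset_signal)
print(abs(float(centered.mean())) < 1e-9)  # True
```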

Reduce subsonic rumble and low frequency noise

This step can probably be omitted given a flat record and high quality turntable, arm and cartridge.

Use the High Pass Filter effect with a rolloff of 24 dB per octave and a cutoff frequency of 20 – 30 Hz to reduce unwanted subsonic frequencies, which can cause clicks when editing. A warped record will definitely generate unwanted subsonics; in that case, consider the higher cutoff frequency.
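
For the curious, a 24 dB-per-octave slope corresponds to a 4th-order filter (6 dB per octave for each filter order). A sketch of an equivalent high-pass using SciPy – an illustration of the signal processing involved, not Audacity's implementation:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def subsonic_filter(samples, rate=44100, cutoff=25.0):
    """Roll off frequencies below the cutoff at 24 dB/octave.
    A 4th-order Butterworth high-pass gives the 24 dB/octave slope."""
    sos = butter(4, cutoff, btype="highpass", fs=rate, output="sos")
    return sosfilt(sos, samples)

# 10 Hz "warp rumble" mixed with a 440 Hz tone: the rumble is almost
# entirely removed while the tone passes through essentially unchanged.
t = np.arange(44100) / 44100.0
rumble = 0.5 * np.sin(2 * np.pi * 10 * t)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
filtered = subsonic_filter(rumble + tone)
```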

Remove clicks and pops

There are a number of ways you can use Audacity to remove clicks and pops from your recording. First, without zooming in too far, visually inspect your recording for clicks – they will show up as abnormally tall (sticking up or down), very narrow (one or two pixels wide) vertical lines protruding from the waveform. Select a region with one or more of these spikes and listen to it to ensure that they are clicks. After determining that your selection indeed needs to have clicks removed use the Click Removal effect with various settings – preview the effect with these different settings to get the best results. Then, using the settings from your preview testing, use the Click Removal effect on selected regions of audio or on the whole project.

Clicks which did not get removed with Click Removal can be treated individually with other methods. These methods are only really useful if you have a relatively small number of clicks and pops to deal with; otherwise, these approaches will be too labor-intensive and time-consuming:

  • Try Audacity’s Repair effect. This repairs a very short length of up to 128 samples by interpolating from the neighboring samples. You will need to zoom in to see the individual samples to use this effect.
  • For hard to spot clicks you may want to try Click removal using the Spectrogram view.
  • For somewhat longer regions of audio, try:
    • Draw Tool. You need to be zoomed in to the individual samples to use this. Some patience may be needed with this tool, but the principle is to put samples back into line with their neighbors so that a smooth contour is presented.
    • Silencing the click (select it and use CTRL + L). You don’t need to be zoomed in far enough to see the individual samples, but the silenced sections must be short enough not to be audible.

See Click and pop removal techniques for a detailed tutorial on these tools. Also see further down this page for a set of alternative tools known to work well in removing clicks and pops.
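
The interpolation idea behind Repair can be sketched in a few lines of NumPy: replace the samples of a marked click with values interpolated from the good audio on either side. (Audacity's Repair uses a more sophisticated interpolation than the linear one shown here.)

```python
import numpy as np

def repair_click(samples, start, end):
    """Replace samples[start:end] with values linearly interpolated
    between the good samples on either side of the click."""
    samples = np.asarray(samples, dtype=np.float64).copy()
    left, right = samples[start - 1], samples[end]
    samples[start:end] = np.linspace(left, right, end - start + 2)[1:-1]
    return samples

# A smooth ramp with a one-sample spike ("click") at index 5:
audio = np.linspace(0.0, 1.0, 11)
audio[5] = 3.0
fixed = repair_click(audio, 5, 6)
print(round(float(fixed[5]), 2))  # 0.5 (back on the ramp)
```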

Reduce hiss and high frequency noise

Whether you need to use Noise Reduction will depend on the quality of your LPs, your stylus and cartridge.

Use the Noise Reduction effect’s Get Noise Profile to obtain a noise sample, either from the lead-in groove immediately before the music starts or from a gap between tracks. The length is not important – typically it will be less than a second – what matters is that the sample is truly representative of the noise, with no audio signal at all (beware of very quiet fade-ins). Try amplifying the sample and auditioning it to ensure that no real audio signal is present. If it is clean, undo the Amplify, get the noise profile, then apply the Noise Reduction effect to the recording with these recommended settings:

  • Noise reduction – no more than 12 dB (9 dB is a good guideline)
  • Sensitivity – 6.00
  • Frequency smoothing (bands) – no more than 6 (3 or lower is a good setting for Music)

Noise reduction is always a compromise because, on the one hand, you can have all the music and a lot of noise and, on the other hand, no noise and only some of the music. Try different settings on the “Noise Reduction (dB)” slider until you get the best compromise.
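
Under the hood, noise reduction of this kind is a form of spectral gating: the audio is analyzed in short overlapping frames, and frequency bins that do not rise convincingly above the measured noise profile are attenuated. A heavily simplified NumPy sketch of the idea – real implementations, including Audacity's, add smoothing over time and frequency to avoid "musical noise" artifacts, and the `sensitivity` parameter here only loosely mirrors Audacity's Sensitivity setting:

```python
import numpy as np

def spectral_gate(signal, noise_profile, reduction_db=9.0,
                  sensitivity=2.0, frame=1024):
    """Attenuate frequency bins that stay below a threshold derived from
    a noise-only sample. A bare-bones illustration of spectral gating."""
    hop = frame // 2
    window = np.hanning(frame)
    gain_floor = 10 ** (-reduction_db / 20)   # e.g. 9 dB -> ~0.355

    # Per-bin noise threshold, averaged over the noise-only sample:
    noise_frames = [noise_profile[i:i + frame] * window
                    for i in range(0, len(noise_profile) - frame, hop)]
    threshold = sensitivity * np.mean(
        [np.abs(np.fft.rfft(f)) for f in noise_frames], axis=0)

    # Process the signal in 50%-overlapping frames (Hann windows at 50%
    # overlap sum to a near-constant, so overlap-add reconstructs the audio):
    out = np.zeros(len(signal))
    for i in range(0, len(signal) - frame, hop):
        spec = np.fft.rfft(signal[i:i + frame] * window)
        gain = np.where(np.abs(spec) > threshold, 1.0, gain_floor)
        out[i:i + frame] += np.fft.irfft(spec * gain)
    return out
```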

Place the song labels

Mark the approximate label points – click in the waveform at the approximate point between the tracks on the album, press CTRL + B then ENTER. Don’t forget to insert a label at the beginning for the first track. Alternatively you can mark a label point while recording (or on playback) using CTRL + M ( COMMAND + . on Mac OS X ).

Silence the inter-track gaps

These are rarely truly silent so you may want to replace them with silence by selecting the gap and using CTRL + L or the Silence Generator effect. Edit the inter-track gap as desired to around a maximum of 2 seconds; you may wish to use a shorter gap or even no gap at all for some recordings.

Fade Ins/Outs

You may wish to fade in and fade out the song beginnings and endings more cleanly by using the Fade In and Fade Out effects. Normally fade-outs should be longer (typically a few seconds) and fade-ins, if required, quite short (typically a fraction of a second).

Consider using Studio Fade Out instead of the linear Fade Out. It applies a more musical fade-out to the selected audio, giving a more pleasing (more “professional studio”) sounding result.

You may also get a more musical fade-in by applying Fade In multiple times to the selected audio; three times is a good guideline. This will produce a shaped, curved fade rather than a linear one.
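
The effect of repeated fades is easy to see numerically: a linear fade multiplies the audio by a 0-to-1 ramp, so applying it three times multiplies by the cube of the ramp, giving a gentler start. A small NumPy sketch:

```python
import numpy as np

def linear_fade_in(samples):
    """Multiply the selection by a 0 -> 1 linear ramp (a plain linear fade-in)."""
    ramp = np.linspace(0.0, 1.0, len(samples))
    return samples * ramp

audio = np.ones(1000)        # constant full-scale "audio"
once = linear_fade_in(audio)  # linear ramp
thrice = linear_fade_in(linear_fade_in(linear_fade_in(audio)))  # cubic curve
print(round(float(once[500]), 2), round(float(thrice[500]), 2))  # 0.5 0.13
```

At the midpoint the single fade has reached half volume, while the triple fade is still at about an eighth, rising more steeply later – the "shaped" curve described above.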

Although no keyboard shortcuts for effects are provided by default in Audacity, it is possible to set up your own shortcuts for any effects you choose. You may find it particularly beneficial to assign shortcuts for Fade In and Fade Out (or Studio Fade Out), as these will be used repeatedly during LP digitization.
For instructions on how to do this please see the Keyboard Preferences page in the manual.

Adjust label positions

If you are using a 2-second gap, adjust the label position as desired to be 0.5 seconds before the start of the next track. To move the label, drag it by its center circle.

Track names

Edit the labels for the song names – we suggest using “01 First Song Name”, “02 Second Song Name”, and so on as this helps keep them in the right order for CD production or loading into iTunes. You may find that changing the zoom level will help you with this task; you can advance to the next label by ensuring that the focus is in the current label then using TAB.

If you wish you may instead automatically prefix named tracks with a sequential two-digit number.
To do this, in the “Name files” section of the Export Multiple dialog select the Numbering before Label/Track Name radio button.

Amplitude adjustment

Normalize the amplitude of the recording: either do each track individually (especially if the tracks will be randomly played from a library containing many different styles of music) or do the whole recording at once (which works fine if all the tracks have the same average volume). Use Normalize as the last editing step to bring the peak amplitude to around -3.0 dB. The Normalize effect can be set to either:

  • Adjust the amplitude of both stereo channels by the same amount (thus preserving the original stereo balance), or
  • Adjust each stereo channel independently (this can be useful if your equipment is not balanced).
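
The two modes differ only in whether one gain or two is computed. A rough NumPy sketch of the arithmetic (illustrative, not Audacity's code):

```python
import numpy as np

def normalize(channels, target_db=-3.0, independent=False):
    """Scale audio so its peak sits at target_db.
    independent=False: one gain for both channels (stereo balance preserved).
    independent=True: each channel gets its own gain."""
    channels = np.asarray(channels, dtype=np.float64)
    target = 10 ** (target_db / 20)          # -3 dB -> ~0.708
    if independent:
        peaks = np.abs(channels).max(axis=1, keepdims=True)
    else:
        peaks = np.abs(channels).max()
    return channels * (target / peaks)

# Left channel twice as loud as the right:
left = 0.8 * np.sin(np.linspace(0, 20, 1000))
right = 0.4 * np.sin(np.linspace(0, 20, 1000))
linked = normalize(np.vstack([left, right]))
print(round(float(np.abs(linked).max()), 3))  # 0.708, balance preserved
```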

Compression

The Compressor effect reduces the dynamic range of audio. One of the main purposes of reducing dynamic range is to permit the audio to be amplified further (without clipping) than would be otherwise possible.

Compressor makes the loud parts quieter and (optionally) the quiet parts louder. It can be very useful for listening to classical music in a car. Such music normally has a wide dynamic range and can thus be difficult to listen to in a car without constant volume re-adjustment.

Export a set of WAVs

Use Export Multiple to produce a WAV for each track on the LP at 44100 Hz 16-bit PCM stereo. Audacity will convert the bit depth on export from 32-bit to 16-bit. Shaped dither noise is applied by default to mask any defects (clicky noise) that may result from the conversion; advanced users can change the type of dither, or turn it off, in Quality Preferences.
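
To see why dither helps, here is a rough sketch of the float-to-16-bit conversion with simple triangular (TPDF) dither. Audacity's default "shaped" dither is a refinement of this idea that pushes the noise toward less audible frequencies:

```python
import numpy as np

def to_int16_with_dither(samples, rng=None):
    """Convert 32-bit float samples (-1.0 .. 1.0) to 16-bit integers, adding
    about one least-significant-bit of triangular (TPDF) dither noise first.
    Without dither, the rounding error correlates with the signal, which is
    the source of the 'clicky' low-level distortion mentioned above."""
    if rng is None:
        rng = np.random.default_rng()
    dither = rng.random(len(samples)) - rng.random(len(samples))  # TPDF, +/- 1 LSB
    pcm = np.round(samples * 32767.0 + dither)
    return np.clip(pcm, -32768, 32767).astype(np.int16)

# A very quiet tone, only a few LSBs in amplitude, where dither matters most:
quiet_tone = 1e-4 * np.sin(np.linspace(0, 100, 44100))
pcm = to_int16_with_dither(quiet_tone, np.random.default_rng(0))
```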

In order to facilitate later retrieval and use, place all the files for a particular album in a specifically named folder for that album.

Export Labels

Some users advise a final step of exporting a file containing the labels: use File > Export Labels. This produces a text file that you can later re-import with File > Import > Labels should you wish to re-edit from the raw capture file that you backed up earlier in the workflow.

Backup

Back up your exported WAV or MP3 files – you don’t want to lose all that valuable work and have to do it all over again, do you? Computer hard drives can fail, destroying all data.

Ideally use a dedicated drive (1+ TB external magnetic drives are convenient and economical), or upload to an online (cloud) storage service to store the WAVs or MP3s. Better still is to make two copies on different external devices and even better is to hold an online backup as well as the local copies.

You may want to create a taxonomic file structure – for example each album can be stored in its own folder (named for the album) within a folder named for the artist (or, perhaps, composer for classical music) to make searching and retrieval easier.

Alternative software

  • GoldWave: Though not free, its trial version serves as a top-class click remover as well as an excellent alternative audio editor. Its click removal is an effect, just as in Audacity, and there is a “Smoother” effect for broad unwanted noises and an excellent “Noise Reduction” effect for steady noise. The trial version limits you to a hundred or so commands per session, and a total of several thousand commands before it expires, but if you export from Audacity as 32-bit WAV and just do Click Removal in GoldWave, you should be able to declick several hundred records for free.
  • Gnome Wave Cleaner: Only for Linux users. Digital restoration of CD-quality audio files. Dehiss, declick and decrackle in a GUI environment. It can also automatically mark song boundaries if required.

Clicks and pops

  • ClickRepair: An excellent tool for removing clicks and pops is Brian Davies’ ClickRepair. Some new users may find it a bit intimidating at first but, once you have understood the settings you want to use, it is effectively an automated tool. It requires Java and is not free, but many users report that it saves a lot of time and produces good results. Since ClickRepair works with 32-bit files, it is worth exporting a 32-bit float WAV for processing through ClickRepair and then importing the result back into Audacity; that way no dithering will be applied in the process.
You may find the default settings for this application remove a little too much signal. An alternative recommendation:

  • DeClick = 30 (default is 50)
  • Pitch Protection = “on” (default is “off”) though leave this “off” for brass recordings
  • Reverse = “on” (there is no processing penalty for this and it helps on percussive music)
  • Method = Wavelet

Hiss and noise removal

  • DeNoise: Brian Davies also supplies a tool called DeNoise, which is effective at removing hiss and other steady noise. As with ClickRepair, some new users may find it intimidating at first. Users report that the settings normally have to be adjusted for each recording to optimize the noise removal, making it difficult to use in a semi-automated way.
    • DeNoiseLF is supplied as a separate package bundled with DeNoise. It is used for reducing low-frequency noise (such as turntable rumble) and hum.

Compression

Please see Chris’s Dynamic Compressor for a popular alternative compressor which may be downloaded for free. It works by trying to even out abrupt changes of volume by employing “lookahead” (this attempts to anticipate volume changes by starting to apply compression before the volume rises to the threshold level). It has options to soften the softer audio and invert loudness.

Links

  • Tutorial – Copying tapes, LPs or MiniDiscs to CD
  • Tutorial – Recording 78 rpm records


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/1n5pvms8UOc/sample_workflow_for_lp_digitization.html


Microsoft and HackerRank Add a Live Code Editor Into Bing

An anonymous reader writes: Microsoft’s Bing search engine now includes a live code editor, allowing programmers to edit and execute snippets of example code and see the results in real-time. HackerRank announced the new educational tool on their blog, calling it “a streamlined alternative” to Stack Overflow and other programming sites, and sharing a video of the new feature providing results for the search “quick sort Java”. “In addition to learning how a certain algorithm/code is written in a given language, users will also be able to check how the same solution is constructed in a range of other programming languages too,” says Bing’s Group Engineering Manager for UX Features, “providing a Rosetta-stone model for programming languages.”



Read more of this story at Slashdot.


Original URL: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/vBeJBFp9wzc/microsoft-and-hackerrank-add-a-live-code-editor-into-bing


Systems Admins: We Need to Talk

I’m frustrated. Frustrated because I keep seeing articles about businesses, specifically hospitals, being ransom-wared into submission. In the past month, I can recall three specific instances. Some paid the ransom. One is still in limbo. Each organization claims that its data is being held hostage, and that the ransomer is demanding somewhere between $1,600 and $3.7 million – all negotiable, of course. Hospital administrators cry foul, sysadmins look to expensive solutions, and patient care suffers.

None of this has to happen.

Sysadmins, we need to talk. I know the struggle – I’ve been a systems administrator for 15 years. You have too few resources, too small a budget, and no respect. I get it. I do. Your users click links they shouldn’t, download things without forethought, and go to websites that you would firebomb from afar if you had your way. I understand that ransomware is a fast-changing, ever-evolving beast that evades your defenses as quickly as you mitigate its attacks. It’s impossible to stop every attack. I get that. However, I’d like to pose a question to you, and I ask this with as little snark as I can muster: is that really an excuse? Can we really throw up our hands because “it’s hard,” and not even attempt good, basic security measures?

Admins, lend me your ears. With good, basic, and built-in tools, you can defend against ransomware. With just a few hours of configuration (at most!), you can stop this madness. Let’s talk turkey.

Fix Your Email

  • Filtering extensions. Do you block incoming file attachments? Most companies don’t, and can’t – that’s fine. However, you can certainly block the dangerous ones. All modern email systems block executables (.exe) and batch (.bat/.cmd) files from the get-go – most will also block VB scripting (.vbs), screen savers (.scr) and a few others. Let’s get to what’s not being blocked:
    • .doc / .xls files – Yep, MS Office. No, I am not suggesting you disallow your users from sharing Office files – but modern Office extensions are .docx and .xlsx, so ditch the old versions. These legacy files can contain malicious macros that will grab the ransomware payload and pull it onto your machine. While you’re at it, block .rtf.
    • .js files – Nobody emails you raw JavaScript, with a glaring exception: Locky. Locky’s vector is commonly a .js file attached to an email (often inside a zip).
    • .zip files and .rar files – Yes, some businesses use these to transfer files. Say it with me (and if you’re a sysadmin, you’ve been shouting this for years) – email is not a file transfer mechanism. Find an alternative. Utilize network shares or a third party system like OwnCloud. Ransomware often comes in a .zip, and sometimes even password protected (with the password in the email body). Why? Mail scanners can’t look inside zipped files. Block them outright if you can.
  • Filtering countries. Does your company do business with China, Romania, or Ukraine? What’s the business impact of never receiving mail from Russia again? In a great majority of cases, this will not impact you at all – but it will dramatically cut down on both spam and phishing. Many email servers will allow you to block based on region or country. Take heavy advantage of this. If yours does not, you can look up netblocks by country and black/grey-list them manually.
  • Crank up your spam protection. A lot of ransomware coming through is going to be flagged as spam by the same criteria that “13UY V1@GAR4” ads get stopped with. It doesn’t have to be turned to max, but it does have to be turned on.
  • Consider blocking any of the generic gTLD domains out there. Domains such as “.xyz” and “.info” are cheap and used as throwaways by spammers. Stop them from entering your email environment and you’ll reduce the number of phishing attacks and spam emails your users receive.
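
The extension rules above amount to a simple blocklist check. A sketch in Python (the extension set and function name are illustrative; in practice this logic lives in your mail gateway's filtering rules):

```python
import os

# Extensions commonly abused by ransomware mailers, per the list above.
BLOCKED_EXTENSIONS = {
    ".exe", ".bat", ".cmd", ".vbs", ".scr",   # blocked by most servers already
    ".doc", ".xls", ".rtf",                   # legacy Office formats with macro risk
    ".js",                                    # Locky's common vector
    ".zip", ".rar",                           # archives mail scanners can't inspect
}

def is_blocked_attachment(filename):
    """Return True if the attachment's extension is on the blocklist."""
    return os.path.splitext(filename.lower())[1] in BLOCKED_EXTENSIONS

print(is_blocked_attachment("invoice.DOC"))   # True
print(is_blocked_attachment("report.docx"))   # False
```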

Defend Your Servers

  • Software Restriction Policies. Via group policy, you can prevent executables from running out of the %TMP% directory – which is how all the ransomware I have encountered or read about starts. Pushing this down to your users should be a no-brainer. That said, a caveat: this will break things. In my experience, QuickBooks installers, MS Office installers, and Spotify all break when an SRP is in place. These, however, can be whitelisted. This takes testing and should be rolled out slowly, especially in complex environments. Here’s a very thorough tutorial with screenshots on how to implement a Software Restriction Policy.
  • File Server Resource Manager. FSRM is a Windows Server feature for actively monitoring file shares. One of the first things ransomware does is drop a file explaining how to pay the ransom. With FSRM you can easily alert on those files and run a script. The script I wrote is extremely basic – it kills the file-sharing service, sends the admin an email, and writes the event to the event log. Here’s a list of filenames I monitor for.
  • Follow good security practices. Does everyone have read, write, and execute access on every share? They shouldn’t. Follow good practices for accessing data – use the principle of least privilege and role-based access control. This is good practice aside from ransomware, but it will help contain the damage should something slip through your other controls. Put users in groups, assign groups to folder/file permissions, and add or remove users from groups as their access or roles change. This makes management easy.
  • Monitor Handles. Consider setting up a “canary” to alert you of processes generating a high handle count. There are a few that we should expect to do so – system, SqlServer, and lsass come to mind – but a process actively encrypting or modifying thousands of files at once will generate a high number of handles. I wrote this script when the first CryptoLocker hit, and run it as a scheduled task every 15 minutes; feel free to modify it as you wish. Be warned that it is fairly ugly, but it does what it says on the box.

Defend Your Endpoints

  • Antivirus. Some people will tell you that antivirus is dead, and there are certainly arguments for that – but an antivirus can act as a last line of defense if your other controls fail. Make sure your definitions and the antivirus itself are up to date. Microsoft Security Essentials is free and will defend against known ransomware. Teach your users to report virus alerts, not ignore them.
  • Patching. Keep your endpoints patched. You can download and install Windows Server Update Services (WSUS) for free and have it manage your updates and reboot cycles.
  • Phish your users. It can be done for free and teaches them not only to be suspicious of emails they aren’t expecting, but helps train them on indications that an email is not from who they think.
  • Remove local administrator rights from machines. Users may kick and scream that they can’t install Skype, but reducing local machine rights drastically reduces the damage that can be done. Without admin rights, applications can only be installed and run out of very limited folders (My Documents and %TMP%), so it’s easier to mitigate malicious software trying to do you harm.

Defend Your Network

  • DNS. If you’re using your ISP’s DNS servers, I would encourage you to change to the free OpenDNS service. OpenDNS is good about blackholing known-bad IP addresses and command & control channels. It will significantly reduce malware picked up through web browsing and costs you nothing.
  • Block Tor. Tor has many legitimate, and noble, uses. However, many pieces of ransomware use it to establish a connection to a C&C channel to generate the key used to encrypt data. If this step fails, the ransomware stops. Block Tor unless you are actively using it for business – which you likely are not.

Defend Your Data

  • Backups. If all else fails, you need the security of having recent, tested, GOOD backups. Windows Server Backup is not the most elegant solution, but it works – and costs you nothing. A large USB drive is all you need to back up your data. Find out what your company’s tolerance for data loss is, and take the drive off-site that often. If they can tolerate a week of lost data, take it off-site every Friday. If they can tolerate no more than a day, take it off-site every night. A note about ransomware: if the backup drive is plugged in when the system is infected, it will encrypt your backup drive too. It’s important that you eject the USB drive or physically remove it every time you complete a backup. If you can spare a few dollars and some bandwidth, a service like CrashPlan runs about $8 per month, backs up changes in real time, and maintains a version history. It is not an ideal way to recover the data should you lose everything, but it’s a “set it and forget it” approach that requires little maintenance and no drive swapping.

Sysadmins: this is what the phrase defense-in-depth means – multiple overlapping controls, so that an attack which evades one defense is caught by another. An antivirus and a firewall are no longer enough. There is no excuse for a ransomware infection resulting in lost data and days/weeks/months offline. You can accomplish every step outlined above with a zero-dollar budget.

Any other tips, tricks, or $0 mitigations you’d like to share? Please comment below!


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/YG7xygvulD8/


‘Frankenstein’ in some ways predicted Microsoft’s racist chatbot?

My son Tobias Wilson-Bates, currently a Marion L. Brittain Postdoctoral Fellow at Georgia Tech, recently published a short essay about robotics and literature in a school newsletter. Sign me up immediately for the Proud Fathers Club.

The relationship between machines and literature has long fascinated Tobias, which makes Georgia Tech a good place for him at the moment. As he points out, Mary Shelley anticipated almost 200 years ago problems that we are currently encountering with artificial intelligence.

Tobias is currently teaching a course that “challenges students to engage with the topic of robotics as existing in a cultural network of information.” Among the problems he addresses is whether our artificial creations will reflect our social failures. In the article, which I’ve shared below, he alludes to the following passage, where Dr. Frankenstein’s “monster” explains how he began his war against humankind:

Here I paused, not exactly knowing what path to pursue, when I heard the sound of voices, that induced me to conceal myself under the shade of a cypress. I was scarcely hid when a young girl came running towards the spot where I was concealed, laughing, as if she ran from someone in sport. She continued her course along the precipitous sides of the river, when suddenly her foot slipped, and she fell into the rapid stream. I rushed from my hiding-place and with extreme labor, from the force of the current, saved her and dragged her to shore. She was senseless, and I endeavored by every means in my power to restore animation, when I was suddenly interrupted by the approach of a rustic, who was probably the person from whom she had playfully fled. On seeing me, he darted towards me, and tearing the girl from my arms, hastened towards the deeper parts of the wood. I followed speedily, I hardly knew why; but when the man saw me draw near, he aimed a gun, which he carried, at my body and fired. I sank to the ground, and my injurer, with increased swiftness, escaped into the wood.

This was then the reward of my benevolence! I had saved a human being from destruction, and as a recompense I now writhed under the miserable pain of a wound which shattered the flesh and bone. The feelings of kindness and gentleness which I had entertained but a few moments before gave place to hellish rage and gnashing of teeth. Inflamed by pain, I vowed eternal hatred and vengeance to all mankind.

Here’s the article:

FrankenBot: The Inevitable Monstrosity of Artificial Life

By Tobias Wilson-Bates, Postdoctoral Fellow at Georgia Tech

It has been quite a year for artificial intelligence. Google has continued its rapid movement toward integrating neural networks into the heart of its information empire; in March, its A.I., AlphaGo, defeated world champion Lee Se-dol in the complex game of Go. The excitement seems to lend credibility to Ray Kurzweil’s prediction that machines will attain consciousness by 2029.

Given all the recent success, it came as quite a shock this week when Microsoft’s artificially intelligent chatbot, Tay, failed about as remarkably as possible at enacting the part of a “chill millennial.”

Within 24 hours she was posting an embarrassing slew of racist, sexist, incestuous and genocidal messages that were gleefully harvested by both social media users and news media alike.

Journalists have posed a number of arguments about the meaning of Tay’s corruption, but whatever the motives were, they are likely more complex than the theory of Microsoft’s vice president of research Peter Lee, who wrote off the incident as “malicious intent that conflicts with our values and principles.” In fact, the likelihood of an attack was so strong that Microsoft had already produced extensive firewalls (which proved ultimately ineffective) for just such a circumstance.

In other words, everyone involved knew beforehand that new life is prone to attacks. Indeed, the scenario goes back at least as far as the first work of science fiction, Mary Shelley’s Frankenstein (1818). In the novel, the creature is abandoned by its maker and wanders aimlessly until it is attacked for saving a child from drowning. The creature is rejected by human society and does monstrous things in return. Of course, one could argue that its revenge is merely its adaptation to the very forms of violence it experiences from the humans it meets.

Tay, as an artificial being subjected to the group politics of social media, occupies the position of a sort of “human,” existing both to reestablish the humanity of her peers and to bring the category into question. Whether it means to or not, Microsoft is deconstructing what it means to be human and, as such, creating a monster.

As intelligent machines are increasingly woven into the fabric of the modern world, areas of knowledge associated with the humanities are becoming structurally necessary for producing and integrating new technologies. Tay was not a technological failure but a sociological one. It is to be hoped that in turning toward increasingly autonomous social machines, we draw upon the ethics discovered in the careful examination of narrative and social patterns.

Added note: Just as Mary Shelley offers an alternative educational model for how Frankenstein’s monster could have been raised, so Tobias cites his colleague Mark Riedl for his attempts to align robots with our ethical ideals. You can go to “Quixote” to see how Riedl is “programming robots to read stories that may act as a user manual for ethical human behavior in real-life scenarios.”

Image: Scene from “Frankenstein” (1931).

(Reproduced from Better Living through Beowulf.)

The post ‘Frankenstein’ in some ways predicted Microsoft’s racist chatbot? appeared first on TeleRead News: E-books, publishing, tech and beyond.


Original URL: http://www.teleread.com/shelley-predicted-microsofts-ai-problems/

Original article

The open web is not going away

Image: Monument Valley on a Clear Blue Sky Day.

Dries Buytaert and Matt Mullenweg recently posted calls to arms in defense of the “open web.” I, too, am a believer in the open web. It delivers on the promise of the Internet: a world in which everyone is connected, and you can command as much attention as your content deserves. But I agree with them that it is threatened by dominant technology companies who have an economic…


Original URL: http://feedproxy.google.com/~r/Techcrunch/~3/3IUemHzzejA/

Original article

Rendering React without browser JavaScript

Quick Start Tutorial: Universal React, with Server Side Rendering

The React Tutorial (github), modified to include Server-Side Rendering, React Router, and Redux (github).

Image: Universal React renders on the server before handing off to the browser.

Imagine creating a web app in cutting-edge React, only to discover it feels sluggish as users wait for all the JavaScript to download before they can see anything, and what’s more, amateur blogs are better search-engine optimized than your new site. In the inimitable words of DJ Khaled, “Congratulations, you played yourself.”

Fortunately, React provides a way to avoid these issues: rendering on the server.

You can use this method to generate HTML on the server and send the markup down on the initial request for faster page loads and to allow search engines to crawl your pages for SEO purposes.

Perfect. To render React on the server, our example app will handle requests using routes, and respond with markup generated using React components and the app state. When the app loads in the browser, we will use the same routes, components, and state to initialize React, handing off rendering from the server to the browser.

Back to the Comment Box

The official React tutorial demonstrates how to build a “simple but realistic comments box.” In my previous article, I modified the Comment Box code to demonstrate Redux usage: Quick Start Tutorial: React Redux. If you are new to React, I recommend checking out those tutorials first.

This tutorial for Universal React builds on the Comment Box, with Redux already integrated, to demonstrate Server-Side Rendering and React Router. The code for this revised example app is on GitHub.

Setting Up

Since universal React apps need to run JavaScript on the server, the first change I made to the official tutorial was to remove server scripts for other languages (PHP, Ruby, etc.), and to add more modules to the node.js app:

  • react, react-dom: to create and render React components on the server.
  • redux, react-redux: to manage the state of application data.
  • react-router: to demonstrate React Router usage.
  • marked: since the original Comment Box tutorial uses Markdown to format text in the browser, we need the same library to format text on the server.
  • babel-register, babel-preset-react: to compile JSX syntax, which is useful when writing React components, into standard JavaScript.

I added a polyfill (polyfill.js) that ensures browser support for Object.assign, a method used by our Redux reducer.
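As an illustration, a minimal standalone version of such a polyfill might look like the following sketch. The function name assignPolyfill and the details are mine, not necessarily what the tutorial's polyfill.js contains:

```javascript
// Minimal sketch of an Object.assign polyfill: copy own enumerable
// properties from each source object onto the target, left to right.
function assignPolyfill(target) {
  if (target == null) {
    throw new TypeError('Cannot convert undefined or null to object');
  }
  var to = Object(target);
  for (var i = 1; i < arguments.length; i++) {
    var source = arguments[i];
    if (source == null) continue; // skip null/undefined sources
    for (var key in source) {
      if (Object.prototype.hasOwnProperty.call(source, key)) {
        to[key] = source[key];
      }
    }
  }
  return to;
}

// Only install it when the browser lacks the native method.
if (typeof Object.assign !== 'function') {
  Object.assign = assignPolyfill;
}
```

Redux reducers lean on this pattern to return a new state object instead of mutating the old one, which is why older browsers need the polyfill.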

I also wrote a require-shims.js file which intercepts node.js require and module.exports in the app when node modules are loaded in the browser. This is somewhat absurd, and is only a stop-gap to let you check out and run the example app without having to install webpack just yet. More about webpack later in this tutorial.

Routes

The example app uses the Express web framework. Routing in the app is primarily handled by Express, with just two routes passed on to React Router: the default route “/”, and “/another-page”:
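The tutorial's actual route declarations are not reproduced here; as a rough sketch, the same structure can be expressed as a plain-object route config, which React Router also accepts. The component bodies below are empty stand-ins for the tutorial's Index, CommentBox, and AnotherPage:

```javascript
// Plain-object route config sketch. Index, CommentBox, and AnotherPage
// are stand-ins for the tutorial's real components.
function Index() {}       // layout component, wraps the child routes
function CommentBox() {}  // comments UI
function AnotherPage() {} // simple example page

var routes = {
  path: '/',
  component: Index,                        // Index wraps the children
  indexRoute: { component: CommentBox },   // rendered at "/"
  childRoutes: [
    { path: 'another-page', component: AnotherPage }
  ]
};
```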

A nice aspect of React Router is that it’s declarative: routes express the way our components are structured. The Index component, which renders basic layout, wraps around two child components, CommentBox and AnotherPage.

In the webpage, we link to these URLs, “/” and “/another-page”, using the React Router Link component, which lets users transition instantly between pages in the browser:

You can learn more at the React Router docs.

Components

As our routes attest, we are rendering React components called Index, CommentBox, and AnotherPage. Index (index.js) is the index.html from the original tutorial converted into a React component. CommentBox (commentbox.js) and its child components are used for displaying and adding comments. AnotherPage is a simple example component, defined right in routes.js for convenience.

Index and CommentBox, while similar to the original Comment Box app, have been modified to use Redux. Check out my previous tutorial to understand the code changes in CommentBox, and the reducer I extracted to redux-store.js.

State

The state in our example app is a JavaScript object with three properties: data, which is an array of comments, url, which is the form submission URL, and pollInterval, which specifies how long to wait before checking for new comments. On the server we define the object and then pass it to the Redux store:
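As a sketch of what that looks like, here is a minimal stand-in for Redux's createStore together with an illustrative initial state. The real library adds change notifications, middleware support, and error handling; the state values below are examples, not the tutorial's exact ones:

```javascript
// Minimal stand-in for Redux's createStore, showing the contract:
// getState reads the state, dispatch runs it through the reducer.
function createStore(reducer, initialState) {
  var state = initialState;
  var listeners = [];
  return {
    getState: function () { return state; },
    dispatch: function (action) {
      state = reducer(state, action);            // compute the next state
      listeners.forEach(function (l) { l(); });  // notify subscribers
      return action;
    },
    subscribe: function (listener) {
      listeners.push(listener);
      return function unsubscribe() {
        listeners.splice(listeners.indexOf(listener), 1);
      };
    }
  };
}

// The app's state shape as described above (values illustrative):
var initialState = {
  data: [],                 // array of comments
  url: '/api/comments',     // form submission URL
  pollInterval: 2000        // ms to wait before checking for new comments
};

// A configureStore in the spirit of the tutorial's redux-store.js:
function configureStore(reducer, state) {
  return createStore(reducer, state);
}
```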

This configureStore function calls Redux’s createStore with our reducer and initial state.

In the webpage, this state is converted to a JSON string, with script tags escaped for security, and then embedded in a HTML script tag as a variable called window.__INITIAL_STATE__:
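A hedged sketch of that serialization step (the helper name serializeState is mine): JSON.stringify the state, then escape “</” so that user-supplied content cannot close the script tag early:

```javascript
// Serialize state to JSON and escape "</" as "<\/" so a comment
// containing "</script>" cannot break out of the embedding script tag.
// "<\/" is a valid JSON escape, so JSON.parse still round-trips.
function serializeState(state) {
  return JSON.stringify(state).replace(/<\//g, '<\\/');
}

// Example with a hostile comment in the state:
var state = {
  data: [{ author: 'a', text: 'hi </script><script>alert(1)</script>' }]
};
var html = '<script>window.__INITIAL_STATE__ = ' +
  serializeState(state) + ';</script>';
```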

The window.__INITIAL_STATE__ variable then populates the Redux store in the browser. Yeah, it’s a bit circular, but relatively straightforward considering we’re creating a webpage that is propelled into reconstructing itself, like a slinky on a treadmill.

Initial Render: Server and Client

Now that our routes, components and state are integrated, we’re ready to go.

First, on the server we call ReactDOMServer.renderToString and use Express to send the generated markup as a response:

In the browser, we parallel this using ReactDOM.render:

And we’re all set! Call ReactDOMServer.renderToString and ReactDOM.render with the same components and initial state, and React will recognize that the components don’t need to be re-rendered in the browser.

If you call ReactDOM.render() on a node that already has this server-rendered markup, React will preserve it and only attach event handlers, allowing you to have a very performant first-load experience.

With browser JavaScript disabled

The original Comment Box handles submissions using JavaScript. It checks that the input fields are filled out, then makes an Ajax request to send the new comment to the server, and receives JSON with the updated list of comments. What if we take server-side rendering to its logical conclusion, and make the app work even when JavaScript is disabled in the browser? This might be difficult for complex applications, but we can definitely implement it in our example app.

First I adjusted the comment form (in commentbox.js) to submit data without JavaScript by adding action and method attributes. On the server endpoint that receives new comments, I duplicated the client-side validation to ensure the input fields are not empty by wrapping the method to add comments in a conditional block.
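That server-side check might look something like this (the function name isValidComment is hypothetical):

```javascript
// Duplicate of the client-side validation: a comment is accepted only
// when both fields are present and non-blank.
function isValidComment(comment) {
  return Boolean(comment &&
    comment.author && comment.author.trim() &&
    comment.text && comment.text.trim());
}
```

The POST handler would then wrap the "add comment" call in a conditional such as `if (isValidComment(req.body)) { ... }`.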

And in the response, instead of only sending JSON, we use Express’ req.accepts to determine the kind of response to send. Ajax requests receive a JSON response, while regular HTML form submissions get redirected back to the Comments page.
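In simplified form, the decision looks like this. Express's req.accepts performs full Accept-header negotiation; this stand-in only detects an explicit JSON request, as an Ajax client would send:

```javascript
// Simplified content negotiation: did the client explicitly ask for JSON?
function wantsJson(acceptHeader) {
  return typeof acceptHeader === 'string' &&
    acceptHeader.indexOf('application/json') !== -1;
}
```

The handler would then call `res.json(comments)` when this returns true, and `res.redirect('/')` for a plain HTML form submission.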

Using Webpack

As mentioned earlier, require-shims.js is a brittle workaround to enable you to run the example app without webpack. However, as you make your own apps, you will need to use a bundling tool at some point, both to get rid of all the script tags and replace them with a single bundle, and more importantly, to pre-compile JSX code, because compiling it in the browser is slow.

To start using webpack in the example app, first edit index.js. You’ll see a block of scripts to comment out or delete when using webpack, and one script tag to include. Go ahead and remove the script tags in the upper block, and uncomment the script tag for the webpack bundle:

Then install webpack and babel-loader.

The repository already comes with a webpack.config.js that defines some configuration for webpack. All you’ll need to do is run it:

This should create a bundle.js file in the public/scripts directory. Run the app again (node server.js) and you should be all set! When you open the webpage it will use the bundled JavaScript, including pre-compiled JSX code.

More about Webpack at webpack-howto.

Voila

Now you have a sense of how to render React on the server. It can be intricate to put the moving parts together, but as long as you remember to run the initial render on the server, before rendering the same components with the same state again in the browser, you should be on the right track. As DJ Khaled says: “You smart.”

Further Reading

React Comment box tutorial

More Examples

React Router

Webpack

On Server-Side Rendering

Questions or feedback? Let me know in the comments.

I’m currently looking for projects or other opportunities in web development and product management roles. You can get in touch with me on Twitter, LinkedIn or firasd at gmail


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/m90koA2pHCA/quick-start-tutorial-universal-react-with-server-side-rendering-76fe5363d6e

Original article

QML for the Web

README.md

JavaScript powered QML Engine

Join the chat at https://gitter.im/qmlweb/qmlweb

CSS and HTML are boring and lame. And they suck at designing cool, interactive interfaces. Qt came up with a much better answer for its renowned framework: QML, a declarative language perfect for designing UIs (and much more). Here’s a sample of what QML looks like:

import QtQuick 2.0

Rectangle {
   width: 500; height: 200
   color: "lightgray"

   Text {
       id: helloText
       text: "Hello world!"
       anchors.verticalCenter: parent.verticalCenter
       anchors.horizontalCenter: parent.horizontalCenter
       font.pointSize: 24; font.bold: true
   }
}

This project aims at bringing the power of QML to the web browser.

How to use

Add the library to your web page

Using one of the methods below, install the qmlweb JavaScript library:

  • npm:

    npm install qmlweb
    
  • Bower:

    bower install qmlweb
    
  • GitHub releases:

    tar -xzvf v0.0.4.tar.gz
    
  • Manually using gulp (recommended if you cloned from git):

    npm install
    npm run build
    

Next, simply add lib/qt.js to the list of other JavaScript files in your app’s HTML file:

<script type="text/javascript" src="/lib/qt.js"></script>

Auto-load

You may then add a data-qml attribute to the <body> element to specify what QML file to load when the page is opened.

<html>
  <head>
    <title>QML Auto-load Example</title>
  </head>
  <body style="margin: 0;" data-qml="qml/main.qml">
    <script type="text/javascript" src="/lib/qt.js"></script>
  </body>
</html>

How to use with Gulp

See gulp-qmlweb package.

How to extend

When implementing new features, you may need to go beyond QML and create your own QML components from scratch, using the engine’s API directly.

registerQmlType({
  module:   'MyModule',
  name:     'MyTypeName',
  versions: /^1(.[0-3])?$/, // that regexp must match the version number for the import to work
  constructor: function(meta) {
    QMLItem.call(this, meta);

    var self = this;

    // Managing properties
    createSimpleProperty("string", this, "name"); // creates a property 'name' of type string
    createSimpleProperty("var", this, "data"); // creates a property 'data' of undefined type
    this.name = 'default name'; // sets a default value for the property 'name'

    // Signals
    this.somethingHappened = Signal(); // creates a signal somethingHappened

    this.somethingHappened.connect(this, function() {
      console.log('You may also connect to signals in JavaScript');
    });

    // Using the DOM
    function updateText() {
      var text = '';
      for (var i = 0 ; i < self.data.length ; ++i)
        text += '[' + self.data[i] + '] '; // read the 'data' property via self
      self.dom.textContent = text; // Updating the dom
      self.somethingHappened(); // triggers the 'somethingHappened' signal.
    }

    // Run updateText once, ensure it'll be executed whenever the 'data' property changes.
    updateText();
    this.onDataChanged.connect(this, updateText);
  }
});

And here’s how you would use that component in a regular QML file:

import MyModule 1.3

MyTypeName {
  name: 'el nombre'
  data: [ 1, 2, 3 ]

  onSomethingHappened: console.log(data)
}

History

  1. git://anongit.kde.org/qmlweb, see Webapps written in qml not far from reality anymore,
  2. @JoshuaKolden/qmlweb,
  3. @Plaristote/qmlweb,
  4. @labsin/qmlweb,
  5. @arnopaehler/qmlweb.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/nsgRkdA_LCk/qmlweb

Original article
