Curl vs. Wget

Related: FTP vs HTTP, BitTorrent vs HTTP and curl vs libcurl

The main differences as I (Daniel Stenberg) see them. Please consider my bias
towards curl since after all, curl is my baby – but I contribute to Wget as
well.

Please let me know if you have other thoughts or comments on this document.
File issues or pull-requests if you find problems or have improvements.

What both commands do

  • both are command line tools that can download content from FTP, HTTP and
    HTTPS
  • both can send HTTP POST requests (see the example after this list)
  • both support HTTP cookies
  • both are designed to work without user interaction, like from within scripts
  • both are fully open source and free software
  • both projects were started in the 90s
  • both support metalink
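
For instance, the same form POST with each tool (the URL and form field are
placeholders):

  curl --data "name=daniel" https://example.com/form
  wget --post-data "name=daniel" -O response.html https://example.com/form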

How they differ

curl

  • library. curl is powered by libcurl – a cross-platform library with a
    stable API that can be used by anyone. This difference is major since it
    creates a completely different attitude toward how to do things internally.
    It is also slightly harder to make a library than a “mere” command line
    tool.

  • pipes. curl works more like the traditional unix cat command: it sends
    more stuff to stdout and reads more from stdin, in an “everything is a
    pipe” manner. Wget is more like cp, to use the same analogy.

  • Single shot. curl is basically made to do single-shot transfers of
    data. It transfers just the URLs that the user specifies, and does not
    contain any recursive downloading logic nor any sort of HTML parser.

  • More protocols. curl supports FTP, FTPS, Gopher, HTTP, HTTPS, SCP,
    SFTP, TFTP, TELNET, DICT, LDAP, LDAPS, FILE, POP3, IMAP, SMB/CIFS, SMTP, RTMP
    and RTSP. Wget only supports HTTP, HTTPS and FTP.

  • More portable. curl builds and runs on many more platforms than
    wget. For example: OS/400, TPF and other more “exotic” platforms that
    aren’t straightforward unix clones.

  • More SSL libraries and SSL support. curl can be built with one out
    of eleven (11!) different SSL/TLS libraries, and it offers more control and
    wider support for protocol details. curl supports public key pinning.

  • HTTP auth. curl supports more HTTP authentication methods,
    especially over HTTP proxies: Basic, Digest, NTLM and Negotiate

  • SOCKS. curl supports several SOCKS protocol versions for proxy access
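
    For example, with placeholder proxy addresses and credentials:

      curl --proxy-ntlm --proxy-user alice:secret -x http://proxy.example.com:8080 https://example.com/
      curl --socks5 localhost:1080 https://example.com/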

  • Bidirectional. curl offers upload and sending capabilities. Wget
    only offers plain HTTP POST support.
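
    A minimal sketch of the upload side (server addresses are placeholders;
    -T uploads a file, using PUT in the HTTP case):

      curl -T report.pdf ftp://ftp.example.com/incoming/
      curl -T report.pdf https://example.com/receiver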

  • HTTP multipart/form-data sending, which allows users to do HTTP
    “upload” and in general emulate browsers and do HTTP automation to a wider
    extent
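
    For example, a browser-style multipart file upload (URL and field names
    are placeholders):

      curl -F "file=@photo.jpg" -F "caption=holiday" https://example.com/upload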

  • curl supports gzip and inflate Content-Encoding and does automatic decompression

  • curl offers and performs decompression of Transfer-Encoded HTTP, wget doesn’t

  • curl supports HTTP/2 and it does dual-stack connects using Happy Eyeballs
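
    Both the compression and HTTP/2 features above are single flags, assuming
    a curl built with HTTP/2 support:

      curl --compressed https://example.com/   # ask for gzip/deflate, decompress automatically
      curl --http2 https://example.com/        # negotiate HTTP/2 if the server supports it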

  • Much more developer activity. While this can be debated, I consider three
    metrics here: mailing list activity, source code commit frequency and
    release frequency. Anyone following these two projects can see that the
    curl project has a much higher pace in all three areas, and it has been so
    for 10+ years. Compare the two on OpenHub.

Wget

  • Wget is command line only. There’s no library.

  • Recursive! Wget’s major strong side compared to curl is its ability to
    download recursively, or even just download everything that is referred to
    from a remote resource, be it an HTML page or an FTP directory listing.
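
    A typical recursive fetch looks like this (URL and depth are
    placeholders):

      wget --recursive --level=2 --no-parent --convert-links https://example.com/docs/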

  • Older. Wget traces back to 1995, while curl can be tracked back no
    earlier than the end of 1996.

  • GPL. Wget is 100% GPL v3. curl is MIT licensed.

  • GNU. Wget is part of the GNU project and all copyrights are assigned to
    the FSF. The curl project is entirely stand-alone and independent, with no
    parent organization and almost all copyrights owned by Daniel.

  • Wget requires no extra options to simply download a remote URL to a local
    file, while curl requires -o or -O.
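
    In practice (file names are placeholders):

      wget https://example.com/file.tar.gz          # saved as file.tar.gz, no options needed
      curl -O https://example.com/file.tar.gz       # -O keeps the remote file name
      curl -o renamed.tar.gz https://example.com/file.tar.gz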

  • Wget supports the Public Suffix List for handling cookie domains,
    curl does not.

  • Wget can be built with only GnuTLS or OpenSSL for SSL/TLS support

  • Wget supports only Basic auth when talking to an HTTP proxy

  • Wget has no SOCKS support

  • Its ability to recover from a prematurely broken transfer and continue
    downloading
    has no counterpart in curl.

  • Wget can be typed in using only the left hand on a qwerty keyboard!

Additional Stuff

Some have argued that I should compare uploading capabilities with wput, but
that’s a separate tool/project and I don’t include it in this comparison.

Two other capable tools with a similar feature set include aria2 and axel
(dead project?) – try them out!

For a stricter feature-by-feature comparison (that also compares other
similar tools), see the curl comparison table.

Thanks

Feedback and improvements by: Micah Cowan, Olemis Lang

Updated: February 26, 2016 17:20 (Central European, Stockholm Sweden)


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/QMxS_xKuJT8/curl-vs-wget.html


Slack voice calling arrives on desktop

Slack was being cheeky when yesterday it said voice chat would start testing “very, very soon”. Today the new “Calls” feature started rolling out on Slack for desktop and on the Chrome browser.
It lets you start a private Slack Call or launch a conference call in a channel that anyone can join with a click. And in keeping with Slack’s lighthearted style, once…


Original URL: http://feedproxy.google.com/~r/Techcrunch/~3/c3AIZjI3kQg/


Show HN: Restdb.io – a plug and play database service

Schema

Collections (tables), fields and relations are declared without any coding. restdb.io automatically provides data forms, navigation, search and a REST API.

Add HTML/JavaScript pages with your own functionality.

Automagic APIs

The REST API, with Cross-Origin Resource Sharing (CORS) support, reflects the database automatically and plugs right into your app, web page or prototype.

Use webhooks to synchronize and transfer data.
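
A hypothetical query against such an API, assuming restdb.io’s API-key header
and per-collection REST endpoints (the database name, collection and key below
are placeholders; consult the restdb.io documentation for the exact scheme):

  curl -H "x-apikey: YOUR_API_KEY" https://yourdb-abcd.restdb.io/rest/people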

You’re ready

restdb.io databases have simple, mobile-friendly user interfaces for working with data. Perfect for working on the go.

Add as many users as you like for collaborative work.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/3NTBM1wOXM0/


New Online: Rosa Parks, Page Upgrades, Search Functionality

(The following is a guest post by William Kellum, manager in the Library’s Web Services Division.)

This item from the Parks’ collection documents her reflections on her bus arrest, circa 1956-1958.

In February, the Library of Congress added the Rosa Parks Papers to its digitized collections. The collection contains approximately 7,500 manuscripts and 2,500 photographs and is on loan to the Library for 10 years from the Howard G. Buffett Foundation. Included in the collection are personal correspondence, family photographs, letters from presidents, fragmentary drafts of some of her writings from the time of the Montgomery Bus Boycott, her Presidential Medal of Freedom, her Congressional Gold Medal and more.

The online presentation includes a video that contains highlights from the collection and a look behind the scenes at how the Library’s team of experts in cataloging, preservation, digitization, exhibitions and teacher training are making the Parks’ legacy available to the world.

To support teachers and students as they explore this one-of-a-kind collection, the Library is offering a Primary Source Gallery with classroom-ready highlights from the Rosa Parks Papers and teaching ideas for educators.

Along with digitized materials like the Rosa Parks Collection, the Library continues to add new born-digital materials to its website. The Library’s Web Archives have recently been updated with content collected during the 2012 and 2014 United States Elections.

February also brings some changes to our overall presentation – we’ve upgraded all of our item detail pages (the page where you view bibliographic data alongside a digital resource, like an image or video). All pages now feature an improved, simplified layout for all screen sizes, larger thumbnails, simplified download links and easier access to “rights and access” information. We’ve also added an overlay so that you can tell when an item has multiple pages, such as in a folder of manuscripts, or an atlas, like this circa 1700 volume with 14 images to view:

[image: atlas]

Also new on item pages is our beta Cite This Item widget. Users can click to see the bibliographic data for the item formatted in Chicago, APA and MLA styles.


Since searching our website is the way most users interact with our content, we’ve added a new search facet (aka filter) to help users find digital content based on whether it’s fully available online or not. Look down the left-hand side of a search results page (like this search for photos of “Yosemite”), and you’ll see a box labelled “Access Condition” – you can use that filter to limit your results to items fully available, or items that only display a thumbnail and that you need to come to our reading rooms to see in their entirety.


A few other new things worth noting are now online: “Jazz Singers,” a new exhibit on the art of vocal jazz from the 1920s to the present; The Mexican Revolution and the United States in the Collections of the Library of Congress, an upgraded presentation describing the “complex and turbulent relationship between Mexico and the United States during the Mexican Revolution, approximately 1910-1920” drawn from primary source items in the Library’s collections; and Women’s History Month 2016, an update of our collaborative portal with links to featured content from the Library and our partners at the Smithsonian, National Endowment for the Humanities, National Park Service, and the National Archives.


Original URL: http://blogs.loc.gov/loc/2016/03/new-online-rosa-parks-page-upgrades-search-functionality/


The DROWN Attack

DROWN is a serious vulnerability that affects HTTPS and
other services that rely on SSL and TLS, some of the essential
cryptographic protocols for Internet security. These protocols
allow everyone on the Internet to browse the web, use email,
shop online, and send instant messages without third-parties
being able to read the communication.

DROWN allows attackers to break the encryption and read or
steal sensitive communications, including passwords, credit
card numbers, trade secrets, or financial data. Our
measurements indicate 33% of all HTTPS servers are
vulnerable to the attack.

What can the attackers gain?

Any communication between users and the server. This
typically includes, but is not limited to, usernames and
passwords, credit card numbers, emails, instant messages, and
sensitive documents. Under some common scenarios, an attacker
can also impersonate a secure website and intercept or change
the content the user sees.

Who is vulnerable?

Websites, mail servers, and other TLS-dependent services
are at risk for the DROWN attack,
and many popular sites are
affected. We used Internet-wide scanning to measure how many
sites are vulnerable:


                                    Vulnerable at Disclosure   Still Vulnerable
                                    (March 1)                  as of Mar. 1
HTTPS — Top one million domains     25%                        TK%
HTTPS — All browser-trusted sites   22%                        TK%
HTTPS — All sites                   33%                        TK%

Operators of vulnerable servers need to take action. There
is nothing practical that browsers or end-users can do on
their own to protect against this attack.

Is my site vulnerable?

Modern servers and clients use the TLS encryption protocol.
However, due to misconfigurations, many servers also still
support SSLv2, a 1990s-era predecessor to TLS. This support
did not matter in practice, since no up-to-date clients
actually use SSLv2. Therefore, even though SSLv2 is known to
be badly insecure, until now, merely supporting SSLv2 was not
considered a security problem, because clients never used
it.

DROWN shows that merely supporting SSLv2 is a threat to
modern servers and clients. It allows an attacker to decrypt
modern TLS connections between up-to-date clients and servers
by sending probes to a server that supports SSLv2 and uses the
same private key.

A server is vulnerable to DROWN if:

  • It allows SSLv2 connections. This is surprisingly
    common, due to misconfiguration and inappropriate default
    settings. Our measurements show that 17% of HTTPS servers
    still allow SSLv2 connections.

or:

  • Its private key is used on any other server that
    allows SSLv2 connections, even for another protocol. Many
    companies reuse the same certificate and key on their web
    and email servers, for instance. In this case, if the email
    server supports SSLv2 and the web server does not, an
    attacker can take advantage of the email server to break TLS
    connections to the web server. When taking key reuse into
    account, an additional 16% of HTTPS servers are
    vulnerable, putting 33% of HTTPS servers at risk.

This tool uses data collected during February 2016.
It does not immediately update as servers patch.
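
One quick manual spot-check of the first condition above is an SSLv2-only
handshake attempt, assuming an OpenSSL build old enough to still include the
-ssl2 option (modern builds omit it, so a connection failure here is not
conclusive; the hostname is a placeholder):

  openssl s_client -connect example.com:443 -ssl2

If the handshake completes, the server accepts SSLv2 connections.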

How do I protect my server?

To protect against DROWN, server
operators need to ensure that their private keys are not used
anywhere with server software that allows SSLv2 connections.
This includes web servers, SMTP servers, IMAP and POP servers,
and any other software that supports SSL/TLS. You can use the
form above to check whether your server appears to be exposed
to the attack.

Disabling SSLv2 can be complicated and depends on the
specific server software. We provide instructions here for
several common products:

OpenSSL: OpenSSL is a cryptographic library used in
many server products. For users of OpenSSL, the easiest and
recommended solution is to upgrade to a recent OpenSSL
version. OpenSSL 1.0.2 users should upgrade to 1.0.2g.
OpenSSL 1.0.1 users should upgrade to 1.0.1s. Users of older
OpenSSL versions should upgrade to either one of these
versions. More details can be found in this OpenSSL blog post.
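
To check which OpenSSL version a machine is running (the patched releases
named above are 1.0.2g and 1.0.1s):

  openssl version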

Microsoft IIS (Windows Server): IIS versions 7.0 and
above should have SSLv2 disabled by default. (A small number
of users may have enabled SSLv2 manually and will need to take
steps to disable it.) We still recommend checking whether your
private key is exposed elsewhere, using the form above. IIS
versions below 7.0 are no longer supported by Microsoft and
should be upgraded to supported versions.

Network Security Services (NSS): NSS is a common
cryptographic library built into many server products. NSS
versions 3.13 (released back in 2012) and above should have SSLv2 disabled by
default. (A small number of users may have enabled SSLv2
manually and will need to take steps to disable it.) Users of
older versions should upgrade to a more recent version. We
still recommend checking whether your private key is exposed
elsewhere, using the form above.

Other affected software and operating systems:
Instructions for:
Apache,
Postfix,
Nginx
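
In each case the fix amounts to removing SSLv2 (and preferably SSLv3) from
the accepted protocols. As a sketch, the relevant directives typically look
like the following; treat these as illustrations and follow the linked
instructions for authoritative settings:

  # Apache (mod_ssl)
  SSLProtocol all -SSLv2 -SSLv3

  # Nginx
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

  # Postfix (main.cf)
  smtpd_tls_protocols = !SSLv2, !SSLv3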

Browsers and other clients: There is nothing
practical that web browsers or other client software can do to
prevent DROWN. Only server operators are able to take action
to protect against the attack.

What does DROWN stand for?

DROWN stands for Decrypting RSA with Obsolete and Weakened eNcryption.

What are the technical details?

For the complete details, see our full technical paper. We also provide a
brief technical summary below:

In technical terms, DROWN is a new form of cross-protocol
Bleichenbacher padding oracle attack. It allows an attacker to decrypt
intercepted TLS connections by making specially crafted
connections to an SSLv2 server that uses the same private
key.

The attacker begins by observing several hundred connections
between the victim client and server. The attacker will
eventually be able to decrypt one of them. Collecting this
many connections might involve intercepting traffic for a long
time or tricking the user into visiting a website that quickly
makes many connections to another site in the background. The
connections can use any version of the SSL/TLS protocol,
including TLS 1.2, so long as they employ the commonly used
RSA key exchange method. In an RSA key exchange, the client
picks a random session key and sends it to the server,
encrypted using RSA and the server’s public key.

Next, the attacker repeatedly connects to the SSLv2 server
and sends specially crafted handshake messages with
modifications to the RSA ciphertext from the victim’s
connections. (This is possible because unpadded RSA is malleable.) The way
the server responds to
each of these probes depends on whether the modified
ciphertext decrypts to a plaintext message with the right
form. Since the attacker doesn’t know the server’s private
key, he doesn’t know exactly what the plaintext will be, but
the way that the server responds ends up leaking information
to the attacker about the secret keys used for the victim’s
TLS connections.

The way this information is leaked can take two forms:

  • In the most general variant of DROWN, the attack
    exploits a fundamental weakness in the SSLv2 protocol that
    relates to export-grade cryptography that was introduced to
    comply with 1990s-era U.S. government restrictions. The
    attacker’s probes use a cipher that involves only 40 bits of
    RSA encrypted secret key material. The attacker can tell
    whether his modified ciphertext was validly formed by
comparing the server’s response to all 2^40
possibilities—a moderately large computation, but one
that we show can be inexpensively performed with
GPUs. Overall, roughly 40,000 probe connections and
2^50 computation is needed to decrypt one out of 900 TLS
    connections from the victim.
    Running the computations for the full attack
    on Amazon EC2 costs about $440.

  • A majority of servers vulnerable to DROWN are also
    affected by an OpenSSL bug that results in a significantly
    cheaper version of the attack. In this special case, the
    attacker can craft his probe messages so that he immediately
    learns whether they had the right form without any large
    computation. In this case, the attacker needs about 17,000
    probe connections in total to obtain the key for one out of 260 TLS
    connections from the victim,
    and the computation takes under a minute on a
    fast PC.

  • This special case stems from the complexity introduced by
    export-grade cryptography. The OpenSSL bug allows the
    attacker to mix export-grade and non-export-grade crypto
    parameters in order to exploit unexpected paths in the
    code.

    This form of the attack is fast enough to allow an online
    man-in-the-middle (MitM) style of attack, where the attacker
    can impersonate a vulnerable server to the victim client.
    Among other advantages, such an attacker can force the
    client and server to use RSA key exchange (and can then
    decrypt the connection) even if they would normally prefer a
    different cipher. This lets the attacker target and break
    connections between modern browsers and servers that prefer
    perfect-forward-secret key exchange methods, such as DHE and
    ECDH.

    We were able to execute this form of the attack in under
    a minute on a single PC.

How can I contact the DROWN research team?

DROWN was developed by researchers at Tel Aviv University,
Münster University of Applied Sciences, Ruhr University
Bochum, the University of Pennsylvania, the Hashcat project,
the University of Michigan, Two Sigma, Google, and the OpenSSL
project:
Nimrod Aviram,
Sebastian Schinzel,
Juraj Somorovsky,
Nadia Heninger,
Maik Dankel,
Jens Steube,
Luke Valenta,
David Adrian,
J. Alex Halderman,
Viktor Dukhovni,
Emilia Käsper,
Shaanan Cohney,
Susanne Engels,
Christof Paar, and
Yuval Shavitt

The team can be contacted at mail@drownattack.com.

Is there a CVE for DROWN?

Yes. The DROWN attack itself was
assigned CVE-2016-0800.

DROWN is made worse by two additional OpenSSL
implementation vulnerabilities. CVE-2015-3197,
which affected OpenSSL versions prior to 1.0.2f and 1.0.1r,
allows a DROWN attacker to connect to the server with disabled
SSLv2 ciphersuites, provided that support for SSLv2 itself is
enabled. CVE-2016-0703, which affected OpenSSL
versions prior to 1.0.2a, 1.0.1m, 1.0.0r, and 0.9.8zf, greatly
reduces the time and cost of carrying out the DROWN
attack.

How easy is it to carry out the attack? Is it practical?

Yes. We’ve been able to execute the attack against OpenSSL
versions that are vulnerable to CVE-2016-0703 in under a
minute using a single PC. Even for servers that don’t
have these particular bugs, the general variant of the attack,
which works against any SSLv2 server, can be conducted in
under 8 hours at a total cost of $440.

What popular sites are affected?

Here are some examples.

Is the vulnerability currently being exploited by attackers?

We have no reason to believe that DROWN has been exploited
in the wild prior to this disclosure. Since the details of the
vulnerability are now public, attackers may start exploiting
it at any time, and we recommend taking the
countermeasures explained above as
soon as possible.

SSLv2 has been known to be insecure for 20 years. What’s the big deal?

Indeed, SSLv2 has long been known to be weak when clients and
servers use it to communicate, and so nearly every modern
client uses a more recent protocol. DROWN shows
that merely allowing SSLv2, even if no legitimate
clients ever use it, is a threat to modern servers and
clients. It allows an attacker to decrypt modern TLS
connections
between up-to-date clients and servers by
sending probes to any server that supports SSLv2 and uses
the same private key.

Does DROWN allow an attacker to steal the server’s private key?

No. DROWN allows an attacker to decrypt one connection at a
time. The attacker does not learn the server’s private
key.

Can DROWN be also used to perform MitM attacks?

Yes. Some variants of the attack can be used to perform
MitM attacks against TLS or QUIC. More details can be found in
sections 5.3 and 7 of
the technical paper.

Does Perfect Forward Secrecy (PFS) prevent DROWN?

Surprisingly, no. The active MitM form of the attack allows an
attacker to target servers and clients that prefer non-RSA key
exchange methods. See sections 5.3 and 7 of
the technical paper.

Do I need to get a new certificate for my server?

Probably not. As the attacker does not learn the server’s
private key, there’s no need to obtain new certificates. The
only action required is disabling SSLv2 as per the countermeasures explained
above. If you cannot confidently determine that SSLv2 is
disabled on every device or server that uses your server’s
private key, you should generate a fresh key for the server
and obtain a new certificate.

Do I need to update my browser?

No. There is nothing practical that web browsers or other
client software can do to prevent DROWN. Only server operators
are able to take action to protect against the attack.

I have a firewall that allows filtering of SSLv2
traffic. Should I filter that traffic?

Yes, that’s a reasonable precaution, although it will also
prevent our scanners from being able to
help you identify vulnerable servers. You might consider first
running the test suite to identify vulnerable servers and only
then filtering SSLv2 traffic. You should also use
the countermeasures explained above.

Can I detect if someone has exploited this against me?

Possibly. If you run a server and can be certain no one
made a large number of SSLv2 connections to any of your
servers (for example, by examining IDS or server logs), then
you weren’t attacked. Your logs may contain a small number of
SSLv2 connections from the Internet-wide scans that we
conducted over the past few months to measure the prevalence
of the vulnerability.

My HTTPS server is certified PCI compliant, so I already
know I have SSLv2 disabled. Do I still need to take
action?

Yes. Even if you’re certain that you have SSLv2 disabled on
your HTTPS server, you may be reusing your private key on
another server (such as an email server) that does support
SSLv2. We recommend manually inspecting all servers that use
your private key. In addition, you can check whether your
private key is exposed elsewhere on the Internet using the form
above.

I have an old embedded device that doesn’t allow me to
disable SSLv2, and I have to keep it running. What do I
do?

Security against DROWN is not possible for that embedded
device. If you must keep that device running, make sure it
uses a different RSA private key than any other servers and
devices. You can also limit the scope of attack by using a
firewall to filter SSLv2 traffic from outside your
organization. In all circumstances, maintaining support for
SSLv2 should be a last resort.

SSLLabs says I have SSLv2 disabled. That means I’m safe,
right?

Unfortunately, no. Although SSLLabs
provides an invaluable suite of security tests, right now it
only checks whether your HTTPS server directly allows SSLv2.
You’re just as much at risk if your site’s certificate or key
is used anywhere else on a server that does support SSLv2.
Common examples include SMTP, IMAP, and POP mail servers, and
secondary HTTPS servers used for specific web
applications. SSLLabs doesn’t yet check for this kind of
cross-server exposure to DROWN, but our DROWN check tool attempts to.

I just disabled SSLv2, but your tool says I’m still vulnerable!

Our tool is based on correlated scan data collected during February 2016.
Due to the high quantity of data, it does not automatically
update as servers disable SSLv2.

You can also download and run our
scanner utility.
This utility only detects SSLv2
support on a single port. It cannot detect the common scenario, explained
above, where a web server that doesn’t support SSLv2 is vulnerable because it
shares its private key with an email server that does. We strongly recommend
using the online scanner above when at all possible.

Why does your tool say I support SSLv2, but nmap says I don’t?

Due to CVE-2015-3197, OpenSSL may still accept
SSLv2 connections even if all SSLv2 ciphers are disabled.

Are you planning to release the code for your
implementation of the attack?

Not in the immediate future. There are still too many
servers vulnerable to the attack.

What factors contributed to DROWN?

For the third time in a year, a major Internet security
vulnerability has resulted from the way cryptography was weakened by U.S.
government policies that
restricted exporting strong cryptography until the late
1990s. Although these restrictions, evidently designed to make
it easier for NSA to decrypt the communication of people
abroad, were relaxed nearly 20 years ago, the weakened
cryptography remains in the protocol specifications and
continues to be supported by many servers today, adding
complexity—and the potential for catastrophic
failure—to some of the Internet’s most important
security features.

The U.S. government deliberately weakened three kinds of
cryptographic primitives: RSA encryption, Diffie-Hellman key
exchange, and symmetric
ciphers. FREAK
exploited export-grade RSA,
and Logjam exploited
export-grade Diffie-Hellman. Now, DROWN exploits export-grade
symmetric ciphers, demonstrating that all three kinds of
deliberately weakened crypto have come to put the security of
the Internet at risk decades later.

Today, some policy makers are calling for new
restrictions on the design of cryptography
in order to
prevent law enforcement from “going dark.” While
we believe that advocates of such backdoors are acting out of
a good faith desire to protect their countries, history’s
technical lesson is clear: weakening cryptography carries
enormous risk to all of our security.

Where else can I learn about DROWN?


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/nCkukQoFeWA/


System Bus Radio


This program transmits radio on computers without radio transmitting hardware.

Why?

Some computers are intentionally disconnected from the rest of the world. This includes having their internet, wireless, bluetooth, USB, external file storage and audio capabilities removed. This is called “air gapping”. Even in such a situation, this program can transmit radio.

Publicly available documents already discuss exfiltration from secured systems using various electromagnetic radiations. This is documented in the TEMPEST guidelines published by the US National Security Agency and the US Department of Defense. This project simply adds to that discussion.

How to Use It

Compile the program using:

gcc main.c -Wall -O2 -o main

And run it on an Apple MacBook Air (13-inch, Early 2015):

./main

Then use a Sony STR-K670P radio receiver with the included antenna and tune it to 1580 kHz on AM.

You should hear the “Mary Had a Little Lamb” song playing repeatedly. Other equipment and tuning may work as well. On the equipment above, the author has achieved clear transmission over two meters of open air or one meter through drywall. Different results will be achievable with different equipment.

Please see results for other hardware in HARDWARE-INFO.md and add your own, or mail them to sbr@phor.net

Technical Explanation

Instructions in this program cause the computer to emit electromagnetic radiation. The emissions span a broad frequency range. To be accepted by the radio, those frequencies must:

  • Be emitted by the computer processor and other subsystems
  • Escape the computer shielding
  • Pass through the air or other obstructions
  • Be accepted by the antenna
  • Be selected by the receiver

By trial and error, the above frequency was found to be ideal for that equipment. If somebody would like to send me an SDR that is capable of receiving 100 kHz and up then I could test other frequencies.

The actual emissions are caused by the _mm_stream_si128 instruction that writes through to a memory address. Inspiration for using this instruction was provided in:

Guri, M., Kachlon, A., Hasson, O., Kedma, G., Mirsky, Y. and Elovici, Y., 2015. GSMem: data exfiltration from air-gapped computers over GSM frequencies. In 24th USENIX Security Symposium (USENIX Security 15) (pp. 849-864).

Please note that replacing _mm_stream_si128 with a simple x++; will work too. My experience has been that _mm_stream_si128 produces a stronger signal. There may be other ideas that work even better, and it would be nice to improve this to be more portable (not require SSE extensions).
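
As a minimal sketch of that idea in C (this is not the project’s actual
main.c; the iteration count is an illustrative assumption, and real code
would gate these bursts on and off with precise timing to form the square
wave described below):

#include <emmintrin.h>   /* SSE2: _mm_stream_si128, _mm_setzero_si128 */
#include <stdlib.h>      /* aligned_alloc, free */

/* Issue a run of non-temporal stores so the write traffic goes out to the
   memory bus instead of staying in cache. Bursts of this loop, alternated
   with equally timed idle periods at an audio-rate frequency, produce the
   modulation. The target must be 16-byte aligned. */
static void emit_burst(__m128i *target, long iterations)
{
    const __m128i zero = _mm_setzero_si128();
    for (long i = 0; i < iterations; i++)
        _mm_stream_si128(target, zero);
}

int main(void)
{
    __m128i *target = aligned_alloc(16, sizeof(__m128i));
    if (!target)
        return 1;
    emit_burst(target, 1000000L);   /* one illustrative burst */
    free(target);
    return 0;
}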

The program uses square wave modulation, which is depicted below:

[Diagram: an audio-rate square wave gating bursts of the carrier on and off]

Notes on high precision time APIs for Mac:


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/0JwHU6u2XAs/system-bus-radio


How to Deploy Software

Organize with branches

A lot of the organizational problems surrounding deployment stem from
a lack of communication between the person deploying new code and the rest
of the people who work on the app with her. You want everyone to know the
full scope of changes you’re pushing, and you want to avoid stepping on
anyone else’s toes while you do it.

There are a few interesting behaviors that can be used to help with
this, and they all depend on the simplest unit of deployment: the branch.

Code branches

By “branch”, I mean a branch in Git, or Mercurial, or whatever you
happen to be using for version control. Cut a branch early, work on it,
and push it up to your preferred code host (GitLab, Bitbucket, etc).
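
That flow in Git, with an illustrative branch name:

  git checkout -b quick-deploy-fixes     # cut the branch early
  git commit -am "First pass"            # work on it, committing as you go
  git push -u origin quick-deploy-fixes  # push it up to your code host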

You should also be using pull requests, merge requests, or other code
review to keep track of discussion on the code you’re introducing.
Deployments need to be collaborative, and using code review is a big part
of that. We’ll touch on pull requests in a bit more detail later in this
piece.

Code Review

The topic of code review is long, complicated, and pretty specific to
your organization and your risk profile. I think there are a couple of
important areas common to all organizations to consider, though:

  • Your branch is your responsibility. The companies
    I’ve seen who have tended to be more successful have all had this idea
    that the ultimate responsibility of the code that gets deployed falls
    upon the person or people who wrote that code. They don’t throw code
    over the wall to some special person with deploy powers or testing
    powers and then get up and go to lunch. Those people certainly should
    be involved in the process of code review, but the most important part
    of all of this is that you are responsible for your code. If
    it breaks, you fix it… not your poor ops team. So don’t break it.

  • Start reviews early and often. You don’t need to
    finish a branch before you can request comments on it. If you can open
    a code review with imaginary code to gauge interest in the interface,
    for example, the twenty minutes spent doing that and getting told
    “no, let’s not do this” are far preferable to blowing two weeks on
    the full implementation instead.

  • Someone needs to review. How you do this can
    depend on the organization, but certainly getting another pair of eyes
    on code can be really helpful. For more structured companies, you
    might want to explicitly assign people to the review and demand they
    review it before it goes out. For less structured companies, you could
    mention different teams to see who’s most readily available to help you
    out. At either end of the spectrum, you’re setting expectations that
    someone needs to lend you a hand before storming off and deploying
    code solo.

Branch and deploy pacing

There’s an old joke that’s been passed around from time to time about
code review. Whenever you open a code review on a branch with six lines of
code, you’re more likely to get a lot of teammates dropping in and picking
apart those six lines left and right. But when you push a branch that
you’ve been working on for weeks, you’ll usually just get people
commenting with a quick


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/QdLuC9A7EGk/deploying-software


Slack will soon start testing voice and video chat

Slack is gunning for Skype and Google Hangouts with the 2016 product roadmap it revealed today. The biggest change coming: the ability to seamlessly turn a text chat into a voice or video chat will begin testing “very soon”. This builds on Slack’s January 2015 acquisition of Screenhero, when it said these features would eventually be released.

At its customer conference in San Francisco, the company outlined what it plans to do to stay ahead of its workplace chat competitors. It already has 2.3 million daily active users, up from 2 million in December, and wants to give them more options to be productive, collaborative, and transparent.

While that DAU stat might sound low, it’s huge in the enterprise. Considering Slack sees 320 million minutes of active usage per weekday, that breaks down to about 140 minutes of usage per user per weekday.


Slack VP of Product April Underwood tells me that voice chat on desktop will come first, and then the company will focus on making it work on all its devices and apps. Video will have to wait until after that. Underwood noted that you can already make voice calls via Skype’s Slack integration. But with its own feature, she says the use case will be “If I’m DMing someone in Slack and we want to switch to have a quick voice conversation, it addresses that problem.”

Slack plans to make a full what-you-see-is-what-you-get formatting tool for messages so you can make sure text looks just right in case a co-worker wants to copy and paste it out. Slack will be improving search operators to make it easier for non-power users to find files and other content.

Shared Channels will become a bigger part of the Slack experience. They could help you communicate across siloed teams at big companies. Eventually, Slack says it also wants to empower organizations to interface with outside parties like marketing agencies and technology vendors. Slack is also improving its billing system so large teams can quickly get started.


To make it easier for developers to build more powerful experiences atop its platform, Slack is also building out its Search, Learning & Intelligence (SLI) division. Beyond the roadmap, Slack spent the conference highlighting teams like NASA’s Jet Propulsion Lab, Charity:Water, and medical researchers who are using Slack to stay in touch.

Underwood, who Slack promoted to VP of Product in January, opened the day saying she was just going to tell everyone what Slack plans to build rather than being secretive. “We like to be really open and transparent because we want your feedback,” she explained.

Underwood followed the announcements by taking questions from the crowd…though that quickly devolved into attendees asking if their dream features would ever get built.


The imminent release of voice and video chat could make offices noisier, but it will certainly make Slack more of a comprehensive communication solution rather than a tool plugged into a suite of other products. That might convince companies Slack is worth paying for.

Given Slack’s focus on making work searchable, it’s easy to imagine that years down the line, Slack could use voice recognition to create transcripts of your voice or video meetings.

Outside its own product, Slack is fostering a family of third-party apps, integrations, and bots. While others might be able to copy its features, they won’t be able to copy its community, drawn to the network effect of the market leader.


In December, Slack announced it had 2 million daily active users and 570,000 paid seats. It leveraged that momentum to get its A-list investors to compile an $80 million Slack Fund for investing in its developer ecosystem.

If Slack’s strategy pans out, it could pull away from the pack and solidify itself as the next staple of the enterprise. Because everyone needs to communicate, Slack could become ubiquitous enough to serve as a social hub and identity layer for other enterprise apps.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/HWeBEadUvm8/


The collective insanity of the publishing industry

Unless you’re a writer, I imagine you haven’t been paying quite as close attention to the publishing industry and all its weirdness as I have, and that’s a shame, because it’s been really entertaining.

[Image: the author’s book cover – not carried by the Big 5]

Actually, entertaining isn’t the right word.  It’s been insane, but the kind of insane that’s unreasonably fun to watch from a safe remove.  Like watching a man stop traffic to cross against a green light by shouting, “I’ll bite your car!”  As long as it isn’t your car he’s threatening, it’s sort of funny.

You might imagine that as an author with published works for sale, I am not at a safe remove when it comes to the publishing industry.  That’s sort of true, but only sort-of.

Here’s a superb example of the madness of which I speak, and why I’m not concerned that anyone will be biting my car.

In 2014, there was a drawn-out dispute between Amazon and Hachette.  The latter is one of the largest publishers in the world, and Amazon is a company that sells things, such as books.  The essence of the dispute was that Hachette—and all the other publishers we affectionately refer to as ‘the Big 5’—wanted more control over the list price of their e-books on Amazon.

That sounds thoroughly reasonable, and it sort of is, but please let me explain because the crazy is in the details.  What was happening was that Amazon was discounting the price of the ebooks, and it may seem like this is something the Big 5 would want to stop, except the markdown was coming off of Amazon’s end.  In other words, if Hachette wanted to charge $15.99 for an ebook, and Amazon marked it down to $9.99, Hachette was still paid their cut of the full price of the book.

More people will buy a book at $9.99 than at $15.99, so essentially, the Big 5 was coming out ahead in this arrangement in every conceivable way.  They collected royalties at an unreasonably high price point while moving the number of units that corresponded to a lower price point.

So of course that had to be stopped right away.

Hachette fought for, and won from Amazon, the return to something called the Agency Model, whereby they set their price and Amazon wasn’t allowed to reduce that price.  So that $15.99 book stayed at $15.99 until Hachette decided to change it.

Soon after that contract was signed, the other Big 5 contracts came due, and they all asked for the same Agency Model arrangement.  Thus, the finest minds in publishing—or so one might assume—negotiated themselves out of an arrangement whereby they sold more units at a lower cost without suffering the financial impact that comes with a lower unit cost.

On purpose.

This isn’t even the crazy part.

After securing the right to price their ebooks unreasonably high and having those prices stick, the first thing the collective brain-trust of the Big 5 did was raise their ebook prices even more.  Often, the prices were higher than the price of the print edition, which is just fundamentally insane.

(We can go back and forth about how even an ebook has editorial and marketing costs to recoup, but not this week.  Besides, even if true that doesn’t justify a higher cost than print.  Print editions have per-copy costs that set the price floor: paper, binding, shipping, etc.  Ebooks have no such per-copy cost, aside from the tiny expense of electronic transmission.)

It should come as very little surprise to you that after jacking up the prices of their ebooks at the start of 2015, the Big 5 sold fewer ebooks.

Now here’s the fun part, the part that just makes me shake my head and giggle and wonder how I can live in such extraordinary times.  After six months of depressed ebook sales, the Big 5 announced that the ebook market was slowing down.

Not: “we priced ourselves out of the market and stopped selling as many books”. No no no.  The ebook market!  Is slowing down!

This was celebrated!

I mean it.  One article after another, from the New York Times on down came news pieces declaring that print was making a comeback at long last, and the long national nightmare was over.

All it took was the biggest publishing companies in the world deliberately murdering their own share of the market.  And it wasn’t even true.

Here is why I can laugh at this from a safe remove: I don’t have a contract with a traditional publisher.  If I did, I’d be hopping mad, because what I just described above is an entire industry trying to take away a viable (and lucrative) sales channel for their own authors’ work.  And I can laugh because the ebook market isn’t slowing down.  The sales that would go to that $15.99 book are going to lower-priced books from indie authors and self-published authors, like me.  (Note: I’m both an indie-published author and a self-published author—a hybrid—right now.)

If the Big 5 are under the impression that they can strangle the ebook market, they’re mistaken.  All they really can do is strangle their corner of it.

If you’re wondering, driving readers toward print and away from ebooks is actually the idea behind this madness.  Given the overhead costs of one versus the other, it makes almost no business sense, except for one detail: the Big 5 can exert a lot more control over print and distribution of paper copies than they can over electronic copies.  So if you’re looking for logic in this scheme, that’s probably where you’ll find it.  A true resurgence in print could mean a revival of physical bookstores and a resumption of Big 5 control over the publishing industry as a whole.  And maybe a pony, a recipe for no-calorie fudge, and a cure for male-pattern baldness.

Here’s how short-sighted this idea is.  The Big 5 raised their ebook prices, created an artificial resurgence in print sales of their books, and thought they proved print-is-not-dead.  (They actually proved the consumer will buy the cheaper option, but okay.)  One might even think they stuck it to Amazon, somehow, by doing this.

The only problem is this: the largest seller of print books right now happens to be Amazon.  Guess who saw an uptick in print sales in 2015?

Like what you’ve read here?  Join my mailing list!


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/fG8IByn4H68/

