Some Rookie Mistakes in Go

Learning as we Go

Some rookie Go mistakes we made building Teamwork Desk, and what we learned from them

We love Go. Over the last year we have written nearly 200,000 lines of Go code in the numerous services that make up Teamwork Desk. We have built nearly a dozen small HTTP services that make up the product. Why Go? Go is a fast (very fast), statically-typed, compiled language, with a powerful concurrency model, garbage collection, superb standard library, no inheritance, legendary authors, multi-core support, and a great community – not to mention that for us writing web apps it has a goroutine-per-request set-up that avoids event loops and callback hell. It has become hugely popular for building systems and servers, particularly micro-services.

Like working with any new language or technology, we fumbled about with it for a bit while we were experimenting in the early days. Go really does have its own style and idioms, especially if you come from an OO language like Java or a scripting language like Python. So we made some mistakes 🙂 and we’d like to share some of them here and what we learned. If you use Go in production, you will recognize all of these. If you are just starting to use Go, then hopefully you might find something here that helps.

1. Revel was not a good choice for us

Just starting with Go? Building a web server? You need a framework, right? Well, you might think so. There are advantages to using an MVC framework – primarily the convention-over-configuration approach, which gives you a set project structure that can bring consistency and lower the bar of entry across projects. What we found was that we preferred the power of configuration over the advantage of convention, especially as Go makes it so easy to write a web app with minimal fuss, and many of our web apps are small services. The nail in the coffin for us was the fact that Revel is simply not idiomatic. It was written from the point of view of trying to introduce a Play- or Rails-like framework into Go, instead of using the power of Go and the stdlib and building from there. From the author:

Initially, this was just a fun project to see if I could replicate the magical Play! 1.x experience in much-less-magical Go

In fairness, going with an MVC framework in a new language made a lot of sense for us at the time, because it removed the debate about structure and allowed a new team to build on something in a coherent way. Almost every web app I have ever written pre-Go was built with the help of some flavour of MVC framework. C#? ASP.NET MVC. Java? SpringMVC. PHP? Symfony. Python? CherryPy. Ruby? RoR. We realised over time that we did not need a framework in Go. The standard library HTTP package has what you need, and you typically then add a multiplexer (like mux) for routing and a middleware lib (like negroni) for handling things like auth and logging, and that’s all you need. Go’s HTTP package design makes it easy to do this. You also come to realise that some of the power of Go is in the toolchain and the tools around Go, which give you a wide range of powerful commands to run against your code. However, in Revel, because of the project structure it sets out, and the fact that there is no package main and func main() {} entry point (idiomatic, and necessary for some go commands), you cannot use these tools. In fact, Revel comes with its own command package that mirrors some of those commands, like run and build.

Using Revel:

  • Cannot run go build
  • Cannot run go install
  • Cannot use the race detector (-race)
  • Cannot use go-fuzz or any other awesome tools that require buildable Go source
  • Cannot use other middleware or routers
  • Hot reload is neat but slow. Revel uses reflection on the source and added roughly 30% to compile times in our experience on Go 1.4. It also does not use go install, so packages are not cached.
  • Unable to move to Go 1.5 or higher, because compile times with Revel were even slower. We are ripping Revel out instead to move the core to 1.6.
  • Revel places tests under a /test dir, going against the Go idiom of placing _test.go files alongside the files they test, in the same package
  • In order for Revel tests to run, Revel starts your server, making them integration tests

We found that Revel just strayed too far from the idiomatic way of building Go, and we lost the power of some great parts of the go toolset.
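For contrast, here is a minimal sketch of the no-framework set-up described above: the stdlib HTTP server, mux for routing and negroni for middleware. The route and handler are made up for illustration (and at the time of writing, negroni lived at github.com/codegangsta/negroni).

package main

import (
    "fmt"
    "net/http"

    "github.com/codegangsta/negroni"
    "github.com/gorilla/mux"
)

func ticketHandler(w http.ResponseWriter, r *http.Request) {
    // mux.Vars extracts the named path parameters for this route
    fmt.Fprintf(w, "ticket %s\n", mux.Vars(r)["id"])
}

func main() {
    r := mux.NewRouter()
    r.HandleFunc("/tickets/{id}", ticketHandler).Methods("GET")

    // negroni.Classic bundles logging, panic recovery and static-file middleware
    n := negroni.Classic()
    n.UseHandler(r)
    http.ListenAndServe(":3000", n)
}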

2. Use Panics wisely

If you come from Java or C#, error handling in Go can take a little getting used to. Go has multiple return values, so a very typical scenario is a function that returns something and also an error, which will be nil if everything worked okay (nil is the zero value for pointers, interfaces and the other reference-like types in Go).

func something() (thing string, err error) {
    thing = db.GetSomething()
    if thing == "" {
        return "", errors.New("nothing found")
    }
    return thing, nil
}

We ended up using panic, where really we wanted to create an error and let it be handled somewhere higher up the call stack.

s, err := something()
if err != nil {
    panic(err)
}

We just panicked, literally. An error?! OMG, run! But in Go, you come to realise that errors are values; they are a completely natural and idiomatic part of calling functions and dealing with the response. A panic will bring down your app, kill it dead – like a runtime exception dead. Why would you do that just because a function returned an error? Lesson learned. And pre-1.6, the stack dump for a panic included all running goroutines, making it very difficult to find the original problem. You end up with lots of stuff to wade through that you don’t need.
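What we should have done is treat the error as a value: annotate it and hand it back up the call stack to something that can actually deal with it. A minimal sketch (the function names are illustrative):

func loadThing() (string, error) {
    s, err := something()
    if err != nil {
        // no panic: add context and pass the error up the call stack
        return "", fmt.Errorf("loading thing: %v", err)
    }
    return s, nil
}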

Even when you do have a genuine non-recoverable error, or you encounter a run-time panic, you probably still don’t want to crash your entire web server, which could be in the middle of lots of other things (you use transactions for your db, right?). So we learned to handle these panics by adding a filter in Revel, which recovers the panic and captures the stack trace; that gets printed to the log file and sent to Sentry, where we are alerted by email and in Teamwork Chat immediately. The API returns a 500 Internal Server Error to the frontend.

// PanicFilter wraps the action invocation in a protective defer blanket that
// recovers panics, logs everything, and returns 500.
func PanicFilter(rc *revel.Controller, fc []revel.Filter) {  
    defer func() {
        if err := recover(); err != nil {
            handleInvocationPanic(rc, err) // stack trace, logging, alerting
        }
    }()
    fc[0](rc, fc[1:])
}
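If you are not using Revel, the same idea drops neatly into plain net/http as middleware. A minimal sketch of our own (the names here are not from Revel or the stdlib):

package main

import (
    "log"
    "net/http"
    "runtime/debug"
)

// recoverMiddleware recovers any panic from the wrapped handler, logs the
// stack trace, and returns a 500 instead of letting the process die.
func recoverMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        defer func() {
            if err := recover(); err != nil {
                log.Printf("panic: %v\n%s", err, debug.Stack())
                http.Error(w, "Internal Server Error", http.StatusInternalServerError)
            }
        }()
        next.ServeHTTP(w, r)
    })
}

func main() {
    h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        panic("something went horribly wrong") // a simulated runtime panic
    })
    log.Fatal(http.ListenAndServe(":8080", recoverMiddleware(h)))
}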

3. Be careful reading from Request.Body more than once

After reading from http.Request.Body, the Body is drained and subsequent reads will return an empty body. This is because once you read the bytes of an http.Request.Body, the reader is at the end of the bytes and would need to be reset to read again. However, http.Request.Body is an io.ReadCloser, which does not have methods such as Peek or Seek that would help here. A way around this is to copy the Body into memory first, then set the original back after reading. Expensive if you have very large requests. Definitely a gotcha, and one that still catches us out every now and again!

Here’s a short but complete program that shows this:

package main

import (  
    "bytes"
    "fmt"
    "io/ioutil"
    "net/http"
)

func main() {  
    r := http.Request{}
    // Body is an io.ReadCloser, so we wrap our buffer in a NopCloser to satisfy that interface
    r.Body = ioutil.NopCloser(bytes.NewBuffer([]byte("test")))

    s, _ := ioutil.ReadAll(r.Body)
    fmt.Println(string(s)) // prints "test"

    s, _ = ioutil.ReadAll(r.Body)
    fmt.Println(string(s)) // prints empty string! 
}

Here’s the code to copy it off and set it back…if you remember 🙂

content, _ := ioutil.ReadAll(r.Body)  
// Replace the body with a new io.ReadCloser that yields the same bytes
r.Body = ioutil.NopCloser(bytes.NewBuffer(content))  
again, _ := ioutil.ReadAll(r.Body)

You could create a little util function

func ReadNotDrain(r *http.Request) (content []byte, err error) {  
    content, err = ioutil.ReadAll(r.Body)
    r.Body = ioutil.NopCloser(bytes.NewBuffer(content)) 
    return
}

and call that instead of using something like ioutil.ReadAll directly:

content, err := ReadNotDrain(&r)  

Of course, now you have replaced r.Body.Close() with a no-op that does nothing when Close is called on the request Body. This is the way httputil.DumpRequest works.

4. There are some ever-improving libraries to help you write SQL

The core part of Teamwork Desk that serves the web app to customers deals with MySQL a lot. We don’t use stored procedures, so our data layer in Go consisted of some serious SQL…and some of that code would win a Gold Medal in Olympic Gymnastics for the contortions it went through to build up a complex query. We started using Gorm and its chainable API to build up our SQL. You can still use raw SQL with Gorm and have it marshal the result to your struct. (Worth noting that, in fact, we find we are doing this more and more often recently, which may hint that we need to revisit how we are really using Gorm and make sure we are getting the best out of it, or look at alternatives – we’re not afraid to do that either!)

For some, ORM is a dirty word – people say you lose control, understanding and the ability to really optimize queries – all true. We really just use Gorm as a wrapper to build queries where we understand the output it will give us, not as a full-blown ORM. It lets you build up a query using its chainable API, like below, and marshal the result to a struct. It has a huge number of features that can take away some of the pain of hand-crafting SQL in your code. It also supports preloading, limits, grouping, associations, raw SQL, transactions and more. Worth looking at if you are writing SQL by hand in Go right now.

var customer Customer
query := db.
    Joins("inner join tickets on tickets.customersId = customers.id").
    Where("tickets.id = ?", e.Id).
    Where("tickets.state = ?", "active").
    Where("customers.state = ?", "Cork").
    Where("customers.isPaid = ?", false).
    First(&customer)

5. Pointless pointers are pointless

This one was specific to slices, really. Passing a slice to a function? Well, in Go, arrays are values, so if you have a large array you don’t want a copy made of it every time it is passed around or assigned, right? True – it can be expensive in terms of memory to pass an array around. But in Go, 99% of the time you are actually dealing with a slice, not an array. A slice can basically be thought of as describing some section of an underlying array (often all of it), and it consists of a pointer to the starting array element, the length of the slice and the capacity of the slice.

Each of those three fields requires only 8 bytes on a 64-bit system, so a slice header is never more than 24 bytes, no matter what the underlying array holds or how big it is.

[Figure: the anatomy of a slice – a pointer into the underlying array, a length and a capacity]
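You can verify the header size yourself with unsafe.Sizeof; a quick sketch (the 24 bytes assume a 64-bit platform):

package main

import (
    "fmt"
    "unsafe"
)

func main() {
    small := []byte{1, 2, 3}
    big := make([]byte, 20<<20) // a slice over ~20MB of data

    // both headers are the same size: pointer + length + capacity
    fmt.Println(unsafe.Sizeof(small), unsafe.Sizeof(big)) // 24 24
}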

We often passed a pointer to a slice to a function, under the misapprehension that we were saving memory.

t := getTickets() // e.g. returns []Tickets, a slice  
ft := filterTickets(&t)

func filterTickets(t *[]Tickets) []Tickets {}  

If we had a lot of data in t, we thought we had just prevented a large copy of data in memory by passing a pointer to filterTickets. Understanding slices as we do now, we can happily pass that slice by value without memory concerns.

t := getTickets() // []Tickets massive list of tickets, 20MB  
ft := filterTickets(t)

func filterTickets(t []Tickets) []Tickets {} // 24 bytes passed by value  

Of course, not passing a pointer also means you avoid the possibility of mistakenly changing what the pointer points to. Bear in mind, though, that the called function can still modify the contents of the underlying array, since the copied slice header still points at that same array.
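A quick sketch of that distinction:

package main

import "fmt"

func mutate(t []int) {
    t[0] = 99        // visible to the caller: both headers point at the same array
    t = append(t, 4) // not visible: only this local copy of the header changes
}

func main() {
    t := []int{1, 2, 3}
    mutate(t)
    fmt.Println(t) // [99 2 3]
}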

6. Naked returns hurt readability and make your code harder to understand (in larger functions)

“Naked returns” is the name given in Go to returning from a function without explicitly stating what you are returning. Huh? In Go, you can have named return values, e.g. func add(a, b int) (total int) {}. You can then return from that function using just return instead of return total. Naked returns can be useful and neat for small functions.

func findTickets() (tickets []Ticket, countActive int64, err error) {  
    tickets, countActive = db.GetTickets()
    if len(tickets) == 0 {
        err = errors.New("no tickets found!")
    }
    return
}

It’s pretty clear what is happening here. If no tickets are found, then an empty slice, 0 and an error are returned. If tickets are found, then something like a slice of 120 tickets, 80 and nil will be returned, depending on the ticket counts. The key point is that if you have named return values in the signature, you can just use return (a naked return), and it will return each named value in whatever state it is in when return is called.

However…we had (have…) some big functions. Too big. Like, stupid-big. Naked returns in a function long enough that you have to scroll through it are a disaster for readability and a source of subtle bugs, especially if there are multiple return points. Don’t do it. Actually, don’t do either 🙂 – naked returns or big functions. Here is a made-up example:

func findTickets() (tickets []Ticket, countActive int64, err error) {  
    tickets, countActive = db.GetTickets()
    if len(tickets) == 0 {
        err = errors.New("no tickets found!")
    } else {
        tickets = append(tickets, addClosed()...)
        // return, hmmm...okay, I might know what this is
        return 
    }
    .
    .
    .
    // lots more code
    .
    .
    .
    if countActive > 0 {
        countActive -= closedToday()
        // have to scroll back up now just to be sure...
        return
    }
    .
    .
    .
    // Okay, by now I definitely can't remember what I was returning or what values they might have
    return
}

7. Be careful about scope and short-hand declaration

You can introduce subtle bugs due to scoping in Go when you declare variables with the same name using the shorthand := in different blocks; this is known as shadowing.

func findTickets() (tickets []Ticket, countActive int64) {  
    tickets, countActive = db.GetTickets() // 10 tickets returned, 3 active
    if countActive > 0 {
        // oops, tickets redeclared and used just in this block
        tickets, err := removeClosed() // 6 tickets left after removing closed
        if err != nil {
            // Argh! We used the variables here for logging! If we didn't, we would
            // at least have received a compile-time error for unused variables.
            log.Printf("could not remove closed %s, ticket count %d", err.Error(), len(tickets))
        }
    }
    return // this will return 10 tickets o_O
}

The trick here is the := shorthand for declaration and assignment. Normally := only compiles if you are declaring a new variable on the left-hand side. But it also works if just one of the variables on the left-hand side is new. In our case above, err is new, so you might expect tickets simply to be overwritten, as it was already declared in the function’s return params. But that is not what happens, because of block scope – a new tickets variable is declared and assigned, and it goes out of scope once the block finishes. To fix this, declare err outside the block and use = instead of :=. A good editor (e.g. Emacs or Sublime with Go plugins that lint your code) will pick up on this shadowing.

func findTickets() (tickets []Ticket, countActive int64) {  
    var err error
    tickets, countActive = db.GetTickets() // 10 tickets returned, 3 active
    if countActive > 0 {
        tickets, err = removeClosed() // 6 tickets left after removing closed
        if err != nil {
            log.Printf("could not remove closed %s, ticket count %d", err.Error(), len(tickets))
        }
    }
    return // this will return 6 tickets
}

8. Maps and random crashes

Maps are not safe to access concurrently. We had one situation where a map was available for the lifetime of the app as a package-level variable. This map was used for collecting stats for each controller in our app, and of course in Go, each HTTP request runs in its own goroutine. You can see where this is going – eventually, different goroutines would attempt to access the map at the same time, be it for a read or a write. This would cause a panic and our app would crash (we use upstart scripts on Ubuntu to respawn the app when the process stops, at least keeping it “up”, so to speak). It appeared random to us, which is always fun. Finding the cause of panics like this was also a little more cumbersome pre-1.6, as the stack dump included all running goroutines, and that amounted to a lot of logs to sift through.
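A minimal repro of this class of bug – on Go 1.6 the runtime has best-effort detection of concurrent map misuse and kills the program outright; on earlier versions the crashes were less predictable, which is why it looked random to us:

package main

func main() {
    m := make(map[int]int)
    for i := 0; i < 8; i++ {
        go func() {
            for j := 0; ; j++ {
                m[j] = j // unsynchronized writes from many goroutines
            }
        }()
    }
    select {} // block forever; the runtime will kill the program for us
}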

The Go team did consider making maps safe for concurrent access, but decided against it, as it would add unnecessary overhead for the most common cases – a pragmatic approach that keeps things simple. From the golang.org FAQ:

After long discussion it was decided that the typical use of maps did not require safe access from multiple goroutines, and in those cases where it did, the map was probably part of some larger data structure or computation that was already synchronized. Therefore requiring that all map operations grab a mutex would slow down most programs and add safety to few. This was not an easy decision, however, since it means uncontrolled map access can crash the program.

Our code looked something like this:

package stats

var Requests map[*revel.Controller]*RequestLog  
var RequestLogs map[string]*PathLog  

And we changed it to use the sync package from the stdlib to embed a reader/writer mutex lock in a struct that also wrapped up our map. We added some helper Add and Get methods to this struct.

var Requests ConcurrentRequestLogMap

// init runs once for this package at start-up, before main
func init() {  
    Requests = ConcurrentRequestLogMap{items: make(map[interface{}]*RequestLog)}
}

type ConcurrentRequestLogMap struct {  
    sync.RWMutex // We embed the sync primitive, a reader/writer Mutex
    items map[interface{}]*RequestLog
}

func (m *ConcurrentRequestLogMap) Add(k interface{}, v *RequestLog) {  
    m.Lock() // Here we can take a write lock
    m.items[k] = v
    m.Unlock()
}

func (m *ConcurrentRequestLogMap) Get(k interface{}) (*RequestLog, bool) {  
    m.RLock() // And here we can take a read lock
    v, ok := m.items[k]
    m.RUnlock()

    return v, ok
}
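Callers then go through these helpers rather than touching the map directly. A hypothetical usage from inside a request’s goroutine (c here stands for the current *revel.Controller):

// record a log for this request
stats.Requests.Add(c, &stats.RequestLog{})

// ...and later, safely read it back from any goroutine
if rl, ok := stats.Requests.Get(c); ok {
    log.Printf("request log: %+v", rl)
}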

No more crashes.

9. Vendor…by the beard of Zeus, vendor

Okay, this is hard to admit. We’re caught, red-handed. Guilty as charged…whoops. We deployed code to production without vendoring.

So, just to give you an idea of why that is bad, in case you didn’t know: in Go, you fetch your dependencies by running go get ./... from the root of your project. This pulls each of them from HEAD on master. Obviously that is very bad – unless you keep the exact versions of your dependencies on your servers in your $GOPATH and never update them (and never rebuild or launch a new server), breaking changes are inevitable, and you lose control over what code you are running in production. In Go 1.4 we vendored using Godeps and its GOPATH trick. In 1.5, we used the GO15VENDOREXPERIMENT environment variable. In 1.6, thankfully, finally, /vendor at the root of your project is recognised as the place to put your dependencies, no tools necessary. You can still use one of the various vendoring tools to track versions and make it easier to add and update dependencies (removing .git, updating the manifest, etc.).

Plenty learned, more to come

That is a small list of some of the basic mistakes we made early on and what we learned from them. We are just a small team of five developers building Teamwork Desk, and yet we have learned an incredible amount about Go over the last year while shipping a huge amount of great features at a breakneck pace. You will see us attending various Go conferences this year, including GopherCon in Denver, and I will soon be talking about using Go at a local developer meet-up in Cork. We will continue to release useful open-source tools in Go and contribute back to existing libraries. We have a modest offering of some small projects so far (listed below), and we have also had PRs accepted into Stripe, Revel and several other open-source Go projects.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/ON6JfpuBGtg/

Original article

10M Concurrent Websockets

The C10M Problem is about how on a modern server, you should be able to easily handle 10M concurrent connections with solid throughput and low jitter. Handling that level of traffic generally requires a more specialized approach than is offered by a stock Linux kernel.

Using a stock debian-8 image and a Go server you can handle 10M concurrent connections with low throughput and moderate jitter if the connections are mostly idle. The server design for this example is just about the simplest websocket server that is useful for anything. It is similar to a push notification server like the iOS Apple Push Notification Service, but without the ability to store messages if the client is offline.

The server accepts websocket connections on ports 10000-11000 (to avoid exhaustion of ephemeral ports on the clients during testing), and in the URL the client specifies a channel to connect to, such as:

ws://<server>:10000/<channel>

After the websocket connection has been set up, the server never reads any data from the connection; it only writes messages to the client. Publishing to a channel is handled by Redis, using the PUBLISH/PSUBSCRIBE commands. This is unnecessary for a single server machine, but is nice when you have multiple servers and need some sort of central place to handle message routing.
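The subscribe and unsubscribe helpers used below aren’t shown in the article; here is a rough sketch of what subscribe could look like using the redigo client (github.com/gomodule/redigo) – the Redis address and the error handling are assumptions:

// subscribe returns a channel that yields the payload of every message
// published to the given Redis channel.
func subscribe(channel string) chan []byte {
    messages := make(chan []byte)
    go func() {
        defer close(messages)

        conn, err := redis.Dial("tcp", "127.0.0.1:6379")
        if err != nil {
            return
        }
        defer conn.Close()

        psc := redis.PubSubConn{Conn: conn}
        if err := psc.PSubscribe(channel); err != nil {
            return
        }
        for {
            switch v := psc.Receive().(type) {
            case redis.Message:
                messages <- v.Data
            case error:
                return
            }
        }
    }()
    return messages
}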

Whenever a message is published on a channel, the server will send a message to each connected client subscribed to that channel. To make sure clients are still connected, the server will also send a ping message every 5 minutes. The client can use a missing ping message to detect if it has been disconnected.

func handleConnection(ws *websocket.Conn, channel string) {
	sub := subscribe(channel)
	t := time.NewTicker(pingPeriod)

	var message []byte

	for {
		select {
		case <-t.C:
			message = nil
		case message = <-sub:
		}

		ws.SetWriteDeadline(time.Now().Add(30 * time.Second))
		err := ws.WriteMessage(websocket.TextMessage, message)
		if err != nil {
			break
		}
	}

	t.Stop()
	ws.Close()
	unsubscribe(channel, sub)
}

go get goroutines.com/10m-server

You can run the server like this:

apt-get update
apt-get install -y redis-server
echo "bind *" >> /etc/redis/redis.conf
systemctl restart redis-server
sysctl -w fs.file-max=11000000
sysctl -w fs.nr_open=11000000
ulimit -n 11000000
sysctl -w net.ipv4.tcp_mem="100000000 100000000 100000000"
sysctl -w net.core.somaxconn=10000
sysctl -w net.ipv4.tcp_max_syn_backlog=10000
10m-server

The client connects to a server specified on the command line and makes a number of connections also specified on the command line. It starts on port 10000 and increments the port for every 50k connections.

func createConnection(url string) {
	ws, _, err := dialer.Dial(url, nil)
	if err != nil {
		return
	}

	ws.SetReadLimit(maxMessageSize)

	for {
		ws.SetReadDeadline(time.Now().Add(idleTimeout))
		_, message, err := ws.ReadMessage()
		if err != nil {
			break
		}
		if len(message) > 0 {
			fmt.Println("received message", url, string(message))
		}
	}

	ws.Close()
}

go get goroutines.com/10m-client

You can run the client like this:

sysctl -w fs.file-max=11000000
sysctl -w fs.nr_open=11000000
ulimit -n 11000000
sysctl -w net.ipv4.ip_local_port_range="1025 65535"
sysctl -w net.ipv4.tcp_mem="100000000 100000000 100000000"
10m-client <ip address> <number of connections>

This server was run on an n1-highmem-32 instance on GCE. This is a 32-core machine with 208GB of memory. Sending a ping every 5 minutes at 10M connections was roughly the limit of what the server could handle. This ends up being only about 30k pings per second, which is not a terribly high number. It seems to be limited by the kernel or network settings, as using 8 4-core machines (the same number of cores) could handle at least 5x the pings per second that a single 32-core machine could. Since the connections are mostly idle, the channel message traffic is assumed to be insignificant compared to the pings.

At the full 10M connections, the server’s CPUs are only at 10% load and memory is only half used with the default GOGC=100, so it’s likely that the hardware could handle 20M connections and a much higher ping rate without any fancy optimizations to the server code. The garbage collector is surprisingly performant, even with 100GB of memory allocated to the process.

By using smaller instances, such as n1-highmem-4 instances with 1.3M connections each, and putting them behind Google’s excellent layer-3 load balancer, you can more easily scale to whatever the maximum number of connections allowed by the load balancer is, if it’s limited at all.

It’s likely that a larger number of concurrent connections could be handled using a user-space TCP stack such as mTCP, or a direct interface to the network card like DPDK, though it’s unclear how hard those would be to integrate with Go, since they may require pinning threads to specific cores, for instance.

goroutines is a series of articles related somehow to the Go programming language


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/7btfjnwYZ0E/10m

Original article

The Amish Effect

Amish scholar Don Kraybill calls it a riddle, or a paradox.

How can the Amish be such successful entrepreneurs today, when they end their formal education at eighth grade and forswear so much of the paraphernalia of modern life?

That they succeed is indisputable: The failure rate of Amish startups in the first five years is less than 10 percent, versus 65 percent for businesses in North America overall.

Many Amish retailers cater to mainstream customers, and do so with sophistication. Kraybill likes to cite Emma’s Gourmet Popcorn, which pegs promotions to popular holidays and offers online ordering on a modern, well-designed website.

Bowls of the flavored treat were part of a buffet preceding a talk on Amish business that Kraybill gave recently at Elizabethtown College. Kraybill, who retired from teaching at Elizabethtown last year, remains an active scholar at the college’s Young Center for Anabaptist and Pietist Studies.

Over the past few decades, Lancaster County’s Amish have undergone a “mini-Industrial Revolution,” Kraybill said. High land prices plus a population explosion limited farming opportunities for rising generations, fueling a turn to carpentry, small manufacturing and other enterprises.

Today, there are more than 2,000 Amish businesses in the Lancaster area, Kraybill said. Fewer than one-third of local Amish households still rely on farming as the primary source of income.

Alan Dakey is president of the Bank of Bird-in-Hand. Its single branch sits at the corner of North Ronks Road and Route 340, and a majority of its clientele are Plain-sect members.

Many of the bank’s customers farm but also operate nonfarm side businesses, Dakey said.

Remarkably, the bank has yet to record a single 30-day delinquency on a loan since its December 2013 opening — a tribute to its customers’ frugality and money-management capabilities.

Amish aren’t opposed to borrowing per se, but “they want to use it constructively,” Dakey said.

In his talk, Kraybill identified 12 factors he sees contributing to Amish business success.

While some are integral to the culture, many, in principle, could be adopted by anyone.

1. Apprenticeship: Apprenticeship is a training system that mainstream society has largely abandoned, Kraybill said. But in Amish society, teens learn trades by working alongside their parents or other adults. Kraybill described once watching a 13-year-old fix a piece of hydraulic machinery. He had already spent years in his father’s shop and knew what he was doing. “That’s apprenticeship,” Kraybill said.

2. Limited education: Because Amish finish school with eighth grade, they can’t be drawn off into law, medicine or other professions that require extended formal education. The two basic Amish career tracks are farming and small business, so that’s where the best and brightest end up, bringing their ingenuity and drive with them.

3. Work ethic: Amish are brought up in a culture that values hard work. It’s seen as integral to life, and children are brought up from an early age to pitch in to help their family and community.

4. Smallness: “Bigness spoils everything,” Kraybill said an Amishman once told him. With many small companies instead of a few dominant ones, individual Amish have scope to express their entrepreneurial spirit. There’s little social distance between business owners and employees, and owners stay personally invested in their enterprises.

5. Low overhead: Amish businesses don’t have air conditioning or luxurious offices. If the business has an office, Kraybill said he usually finds it empty, because the owner is out working on the shop floor.

6. Social capital: Information propagates rapidly through Amish communities’ social networks. Job seekers and companies with vacancies can put the word out and find each other easily. Transaction costs are low because everyone shares the same values and trust is high.

7. The paradox of technology: The Amish taboos on technology stimulate innovation and “hacking” as entrepreneurs find workarounds, Kraybill said. The culture distinguishes between using and owning technology — that’s why it’s OK for a business like Emma’s Gourmet Popcorn to contract with a website developer, or for Amish carpenters to journey to job sites in “Amish taxis” driven by their neighbors.

8. Infrastructure: New Amish companies operate within a framework created by their fellow businesspeople. They enjoy access to a well-established network of products and services tailored to the culture and its unique needs and restrictions.

9. Regional markets: The tens of millions of people in the mid-Atlantic region comprise a “phenomenal external market” for the Amish, Kraybill said. There are more than 50 Amish markets between Annapolis and New York City, many catering to urban dwellers hungering for a taste of rural life. Ben Riehl, who owns a stand at the Markets at Shrewsbury in southern York County, said half of his Saturday customers drive up from Maryland, and he estimates they account for half his weekly sales.

10. Niche markets: Gourmet popcorn is a niche product. So are dried flower arrangements, carriage restoration, handmade furniture and horse-drawn farm machinery. Many Amish specialize in organic or free-range farming, Dakey said. Kraybill said he knows an Amish farmer who raises camels, having discovered camel milk commands a premium price.

11. Amish “branding”: For many Americans, the term “Amish” has strong positive associations: honesty, simplicity, old-fashioned virtue. Businesses can partake in those associations simply by being Amish. For Riehl, there’s a big difference between overt image-building and the kind of trust that accrues when Amish business owners serve their customers with integrity: The latter “is a reputation that was earned, not a brand that was bought.” 

12. Payroll costs: Amish employees in Amish businesses are exempt from mainstream companies’ Social Security, health insurance and pension mandates. Though that keeps costs down, the impact is often exaggerated, Amish business owners say. They say they still have to pay into Amish Aid, the community’s mutual-aid fund, and they have responsibility for payroll taxes and benefits for non-Amish employees, so the difference isn’t all that great.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/YKEUaGBW-Pg/article_ba60c8e4-e6dc-11e5-9cc7-73775e680585.html

Original article

The Open eBooks initiative: How well is it working?

Digital Book World has just shared an interview with Dan Cohen, the executive director of the Digital Public Library of America (DPLA), about the Open eBooks initiative, already covered in TeleRead. There, Cohen gives some (alas, very anecdotal) evidence about responses to the initiative so far, and some more insight into its founding motivations and goals.

Unfortunately, Cohen doesn’t share many statistics or much solid data on takeup of the Open eBooks initiative. He does say: “We’ve gotten so many great emails and social media responses thanking President Obama for supporting us and publishers and partners for bringing together resources to make this happen. There are entire schools that have gained access to books.” As for publisher buy-in to the program, Cohen remarks that, “along with the wonderful feedback from students and teachers, we’ve also got incredible feedback from publishers. More publishers want to be involved and have their books in the app as well.” That said, at this point at least, there are no further names or details attached to that statement.

One really heartening item from the interview is what Cohen describes as “issues that ended up not being issues. We researched the availability of devices in low-income households and we discovered that that problem is thankfully starting to disappear.” Although, the problem is “not gone totally,” he adds, “we found in recent surveys that 85 percent of households within the poverty line own a device that’s able to host the app. This is a population where ebooks have started to take off as a supplement rather than replacements for physical books.”

My own very slight, and very personal, dose of skepticism about the Open eBooks program (and this is nothing like a TeleRead house view) is that the linkage to the White House’s K-12 ConnectED initiative opens it to other influences besides the Obama administration’s very laudable support for early learning and literacy. The Trans-Pacific Partnership is just one instance of the very many inroads and lobbying efforts that Big Media has made into the Obama administration, and U.S. government in general. I would be really glad to learn that the Open eBooks initiative has consistently put poor households’ needs above the interests of media owners, and that Big Five publishers have bent over backwards to do the same. I’d welcome reassurance on that. For now, though, my enthusiasm for the Open eBooks initiative remains qualified.


Original URL: http://www.teleread.com/open-ebooks-initiative-well-working/

Original article
