Our Open and Autonomous Salary System

A few of us (read: the founders) in the company had the ominous responsibility called the… “Yearly Compensation Revision”.

We dreaded it. Every year.

[Image credit: Paolo Margari]

  • It did not fall into the “creative” tasks bucket. We hardly learned anything from doing the same thing year after year.
  • We had almost no feedback on whether our team trusted us in being fair enough.
  • This sucked a significant amount of time from our other work.

We wondered: What would be the exact opposite of what we were doing?

  • What if we could actually increase transparency and autonomy every time we went through the process?
  • Would it be possible to make the whole process enjoyable? Something that we actually looked forward to?

Those were the questions that intrigued us to the point that we couldn’t shake the curiosity off any longer. So after many years of being afraid to do it, we finally bit the bullet.

We did two things:

  • We made our salaries transparent. All of our salaries are now open to everyone within the company.
  • We made the compensation revision system democratic. Everyone chooses their preferred salary and then tries to get enough support from the rest of our team to justify their expectation.

If you’re curious how the system works, check out our open playbook and our public spreadsheet template.

The focus of this post series is to share our experience doing this for the first time. We made a ton of mistakes but also learned a lot.

At the end of it all, what I think we’ve succeeded in doing is starting a movement of transparency and autonomy at Multunus. From that perspective, this was a bang of a beginning. :)

Creating Ownership

We waited 7 years to launch this system because of the significant social and financial risks involved.

A key motivation, however, to open things up is that we’re building a world-class consultancy. In a workplace like ours, there’s little room for ill-informed people.

The whole team needs to be aware of what’s happening in our business. Every one of us also needs to feel a great sense of ownership in our company. Only then can all of us confidently represent our company in front of our customers.

A key ingredient to create that sense of ownership: Trust.

Building Trust

One of the key challenges we’ve struggled with is creating a high sense of trust between the founders and the rest of the team. This is a bit of a chicken and egg situation. Which group should go out of its way to demonstrate trust first?

No prizes for getting this right. The founders have the key responsibility of taking significant, proactive steps to create a high-trust environment. We believe that kind of gesture, combined with great doses of patience, will result in the propagation of trust across the whole team in all directions.

So we decided to:

  • Increasingly make usually-considered-sensitive-and-confidential information transparent
  • Distribute control across everyone in the company. This means making it the responsibility of everyone in the company to take important business decisions that are usually within the control of the founders or senior executives

The autonomous salary system was the first step in the above direction.

Goals

We had the following goals for the new compensation system we were putting together:

  • Dramatically increase the level of transparency with respect to compensation related matters.
  • Provide total control to the team to choose their own salaries.
  • Train the team to start using our financial data to make well-informed compensation level decisions.
  • Keep the whole process open – so that everyone could react during any step of the process if they had feedback or wanted changes.
  • Execute the process with as little disruption as possible to our day to day operations.

Risks

To identify the risks of launching this kind of system, we:

  • Researched other companies: We found a lot of very good articles (See References at the end) – especially this one. The two companies that were most useful in providing inspiration and confidence were:
    • Buffer: Both for their radical levels of transparency (their salaries are open to the public!) and also for their simple salary formula (that we used as our foundation).
    • Semco: For Ricardo Semler’s vision of egalitarian workplaces.
  • Surveyed the mood of the team: The idea of introducing this kind of radical transparency and autonomy did indeed induce some amount of fear and doubt – but also curiosity and aspiration.

From the research and surveys, we found two broad types of risks:

  1. Social Risks
  2. Financial Risks

Social Risks

Here are the key social risks we considered:

  • Perception of unfairness: This is usually caused by high levels of subjectivity and ambiguity in the salary determination process. We introduced a “salary formula” to make the process more objective.
  • Higher pay for those with higher influence and negotiation skills: This is a possibility in an autonomous system. We introduced a democratic system of checks and balances to lower this risk.
  • Embarrassment for those with lower pay: This was a key concern.
    • The good: In the “less transparent” past we had put in significant efforts to maintain compensation fairness across the team. So making the salaries open wasn’t a big deal from that perspective.
    • The not-so-good: But we were concerned that a certain group of folks (playing a specific role) within our team were being paid significantly less than their counterparts. While this was not kept secret per se, the possibility of bruised egos in a transparent environment was still a significant risk.
    • The solution:
      • We decided not to reveal the existing salaries of our people. Only the newly revised, self-determined salaries would be open to everyone.
      • The root cause of this risk is lack of confidence. The solution for this is to coach people to be comfortable in their own skin and attain self-confidence by creating significant value for our customers and the community. We do that too.
  • Salary is a personal matter for some people: While we do appreciate this fact, our priority is higher trust across the team. We also believe this is a perspective that can change with good coaching.

Financial Risks

Here’s the list of Financial Risks:

  • The operational expenses could spiral out of control – potentially affecting the sustainability of the business.
  • Opening up our books: For us to be truly transparent, we would need to open up our books as well to everyone in the company. This could be a Pandora’s box – bringing in a whole new set of questions, concerns or doubts about the company.

The rest of this post will focus on the Social Risks and how we’ve tried to mitigate them. Part 2 of this series will detail the Financial Risks.

The Salary Formula

The first step we took was to standardize the salary structure using a “Salary Formula” (very similar to that of Buffer). This formula would serve as the basis for salary calculation for everyone across the company. This is also a great way to increase fairness.

The Buffer salary formula has 3 components, with corresponding “attributes” and “attribute numbers”:

  • The Role and the corresponding Base Salary
  • The Skill Level and the corresponding Multiplier Factor
  • The Leadership component and the corresponding Additive Factor

We added a fourth component:

  • Flexibility Factor and the corresponding multiplier factor. We added this to suit the needs of some of our employees – who preferred to work fewer hours.


[More details on each of these components are available in our Open Playbook]
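
To make the formula concrete, here is a minimal sketch in Go of how the four components could combine into a number. The way they compose here – base salary for the role, multiplied by the skill factor, plus the leadership additive, all scaled by the flexibility factor – and every figure below are illustrative assumptions; the authoritative definitions and attribute numbers live in the Open Playbook.

package main

import "fmt"

// salary is a minimal sketch of the formula structure described above.
// How the components combine here is an assumption for illustration;
// the real definitions and attribute numbers are in the Open Playbook.
func salary(base, skillMultiplier, leadershipAdditive, flexibility float64) float64 {
    return (base*skillMultiplier + leadershipAdditive) * flexibility
}

func main() {
    // Hypothetical numbers only: a role base of 50,000, a 1.3 skill
    // multiplier, a 5,000 leadership additive, and 80% working hours.
    fmt.Println(salary(50000, 1.3, 5000, 0.8)) // prints 56000
}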

While the above formula structure was a great start, we still had to determine what attributes and attribute numbers would make sense for us. Specifically:

  • What roles and matching base salaries would make sense for us?
  • What skill levels and matching multiplier factors should we go with?
  • What leadership levels and corresponding numbers should we choose?
  • How would the Flexibility factor work?

3 Circle Org structure

We knew multiple iterations would be needed until we had the final list of attributes and attribute numbers that would work for everyone.

As mentioned earlier, we also had two additional goals:

  • Do these iterations in the open and by keeping everyone in the company involved
  • Ensure that we were making consistent progress and not disrupting our daily operations too much

To achieve both of those goals, we created a 3 Circle Org Structure:

[Image: three concentric circles – Heads, Council, Everyone]

Heads: We already had a Business Unit “Heads” group – that would meet every week to discuss and plan progress of our company.

Council: We also had some team members who had already been demonstrating some level of leadership in the company – so we invited them to create the “Council”.

And finally we had everyone else.

Side Note: We prefer to use circles rather than a pyramid – because this is not really about authority and control – but more about efficiency, pragmatism and responsibility. The folks at the center are held more accountable to ensure success of the process than the folks on the outer circles.

We iterated through the process of plugging in numbers for each of the components until we had something that we thought would work for everyone. The Heads would first come up with some values for each of the components in the formula – and then invite the Council members to provide feedback. And when the Heads and the Council were in sync – we were ready to involve everyone else in the company – and get their feedback.

The Evening of Chaos

We scheduled a 2-hour marathon session late one afternoon to meet with everyone. The goal was to get everyone’s numbers decided, finish the whole process and just move on with our lives.

This however turned out to be much harder than what we’d expected. It was chaos.

Deadlock and key learnings

The goal was to decide on the formula numbers for all roles, skill levels and leadership levels across everyone across the company. We’d assumed (naively) that people would be focused on the whole team and not be tempted to find out how the formula would affect their own numbers.

We had a deadlock between what people felt was good for everyone in the company and what met their own personal salary preferences. After many attempts at changing the numbers over and over again for over 6 hours – everyone just got tired and went home (really late!).

At this point we were at our wit’s end. Things were looking quite sticky.

However, we learned the following 3 key things:

  1. It was important to highlight that compensation is a direct derivative of the value created by the individual, and that the Role, Skill and Leadership components are tools to gauge that value accurately.
  2. It is difficult for people to think about the company and themselves at the same time.
  3. A democratic system requires some level of structure to guide the process and to help people make decisions easily and quickly.

Breaking the deadlock – at the cost of some Transparency and Autonomy

We made the following changes to implement the above learnings:

  1. Delinked the Company and Individual part (see details below) of the process and made them separate steps
  2. Added a few simple tools to make it possible for everyone to efficiently coordinate and make decisions collectively


The heads got together the next morning and came up with the following changes to separate the Company and Individual level numbers – albeit at the expense of significantly reducing transparency and autonomy:

  • Step 1: Private Review of Company Level Numbers: We temporarily removed access to the spreadsheet – so it was visible only to the heads at this point. We then reviewed and revised the Attribute Lists and the Attribute Numbers – keeping the following goals in mind:
    • Use the data we’d collected from the earlier exercise as a key indicator of the salary preferences of the whole team
    • Ensure that fairness would be maintained across everyone in the company – across roles, skill levels and leadership
    • Ensure that the sustainability of our business would be maintained

The good news: We were already mid-way through the whole process and the formula numbers had already somewhat stabilized – giving us enough data to feel confident that we were reading the pulse of the team correctly.

  • Step 2: Choosing the Individual Levels for each of the Formula Components: We kept the Company Level formula numbers hidden – and asked the Council members and everyone else in the company to select their (Individual Level) role, skill, leadership and flexibility components – in that order.
    • This ensured that each individual would have to choose their Roles, Skill Levels and Leadership values purely on the basis of the definitions of those components – and not on the compensation that would finally get calculated.
    • This required a leap of faith on the part of our team, but we met little resistance to the idea from anyone.
  • Step 3: Democracy for the Individual Levels: Once everyone was done choosing their respective individual components – we brought in the first level of democracy:
    • We asked everyone on the team to get upvotes from at least 6 others in the company with a good distribution of roles, skill levels and leadership levels – demonstrating support for their individual decisions (a small sketch of this rule follows this list).
    • If someone could not get support, then they would need to check in with the others to find out why. They could at this point do what is common sense – either convince the others that the numbers made sense or get convinced to change their own numbers.
    • This was a magical moment – because for the first time in the company’s history, the leadership was not involved in these decisions. It was driven by the whole team.
    • Once everyone was done getting the support needed for their numbers – we locked the spreadsheet to avoid future changes.
  • Step 4: Salaries Revealed: We then revealed the Company Level numbers to everyone and then applied the formula numbers across the team. Suddenly, everyone on the team could see their salaries :).
  • Step 5: Democracy for the Salaries: At this point, we asked everyone to either say “ok” or “not ok” against the numbers that they could see against their name.
    • If there was a “not ok”, then they were also required to state in the open spreadsheet why they felt the numbers were not appropriate, how much they expected, and why it was better for them as well as for the company that an override be approved.
    • Once again, if they could get enough support across the team, then the override would be approved.
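
Side note: the support rule from Step 3 can be written down as a tiny check. The sketch below is illustrative only – the real process ran in the shared spreadsheet, and the “good distribution” threshold of three distinct roles is an assumption, not part of the actual rule.

package main

import "fmt"

// Upvote is a hypothetical type for illustration – the real process ran
// in a shared spreadsheet, not in code.
type Upvote struct {
    Voter string
    Role  string
}

// hasEnoughSupport encodes the rule described in Step 3: at least 6 upvotes,
// spread across at least 3 different roles (the distribution threshold here
// is an assumption).
func hasEnoughSupport(votes []Upvote) bool {
    if len(votes) < 6 {
        return false
    }
    roles := map[string]bool{}
    for _, v := range votes {
        roles[v.Role] = true
    }
    return len(roles) >= 3
}

func main() {
    votes := []Upvote{
        {"Anu", "Developer"}, {"Ben", "Developer"}, {"Cara", "Designer"},
        {"Dev", "QA"}, {"Esha", "Developer"}, {"Farhan", "Designer"},
    }
    fmt.Println(hasEnoughSupport(votes)) // true
}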

Conclusion

Figuring this process out was one of the more complicated projects we’ve attempted. But it was also:

  • A satisfying system design project. Our goal was to create something simple and easy to use, and we’ve made good progress on that. In fact, the process detailed in our open playbook is more refined, transparent and autonomous than the version detailed above. And that is the version we intend to use going forward.
  • A significant first step in building a great culture. This project has triggered a series of continuous improvements in making our workplace more autonomous and transparent. Every couple of weeks, we open up one more “previously-considered-taboo” topic for discussion. We’re getting there :).

References

Tags: Business


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/VO8JiEjKEHw/

Original article

OpenSSL Security Advisory

OpenSSL Security Advisory [3rd May 2016]
========================================

Memory corruption in the ASN.1 encoder (CVE-2016-2108)
======================================================

Severity: High

This issue affected versions of OpenSSL prior to April 2015. The bug
causing the vulnerability was fixed on April 18th 2015, and released
as part of the June 11th 2015 security releases. The security impact
of the bug was not known at the time.

In previous versions of OpenSSL, ASN.1 encoding the value zero
represented as a negative integer can cause a buffer underflow
with an out-of-bounds write in i2c_ASN1_INTEGER. The ASN.1 parser does
not normally create “negative zeroes” when parsing ASN.1 input, and
therefore, an attacker cannot trigger this bug.

However, a second, independent bug revealed that the ASN.1 parser
(specifically, d2i_ASN1_TYPE) can misinterpret a large universal tag
as a negative zero value. Large universal tags are not present in any
common ASN.1 structures (such as X509) but are accepted as part of ANY
structures.

Therefore, if an application deserializes untrusted ASN.1 structures
containing an ANY field, and later reserializes them, an attacker may
be able to trigger an out-of-bounds write. This has been shown to
cause memory corruption that is potentially exploitable with some
malloc implementations.

Applications that parse and re-encode X509 certificates are known to
be vulnerable. Applications that verify RSA signatures on X509
certificates may also be vulnerable; however, only certificates with
valid signatures trigger ASN.1 re-encoding and hence the
bug. Specifically, since OpenSSL’s default TLS X509 chain verification
code verifies the certificate chain from root to leaf, TLS handshakes
could only be targeted with valid certificates issued by trusted
Certification Authorities.

OpenSSL 1.0.2 users should upgrade to 1.0.2c
OpenSSL 1.0.1 users should upgrade to 1.0.1o

This vulnerability is a combination of two bugs, neither of which
individually has security impact. The first bug (mishandling of
negative zero integers) was reported to OpenSSL by Huzaifa Sidhpurwala
(Red Hat) and independently by Hanno Böck in April 2015. The second
issue (mishandling of large universal tags) was found using libFuzzer,
and reported on the public issue tracker on March 1st 2016. The fact
that these two issues combined present a security vulnerability was
reported by David Benjamin (Google) on March 31st 2016. The fixes were
developed by Steve Henson of the OpenSSL development team, and David
Benjamin. The OpenSSL team would also like to thank Mark Brand and
Ian Beer from the Google Project Zero team for their careful analysis
of the impact.

The fix for the “negative zero” memory corruption bug can be
identified by commits

3661bb4e7934668bd99ca777ea8b30eedfafa871 (1.0.2)
and
32d3b0f52f77ce86d53f38685336668d47c5bdfe (1.0.1)

Padding oracle in AES-NI CBC MAC check (CVE-2016-2107)
======================================================

Severity: High

A MITM attacker can use a padding oracle attack to decrypt traffic
when the connection uses an AES CBC cipher and the server supports
AES-NI.

This issue was introduced as part of the fix for the Lucky 13 padding
attack (CVE-2013-0169). The padding check was rewritten to be in
constant time by making sure that the same bytes are always read and
compared against either the MAC or padding bytes. But it no longer
checked that there was enough data to have both the MAC and padding
bytes.
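
The following Go sketch is illustrative only: it checks whether a server
will still negotiate an AES-CBC cipher suite, which is one precondition
for this attack. It does not test for the vulnerability itself, and
whether the server's OpenSSL build uses AES-NI cannot be determined
remotely this way.

package main

import (
    "crypto/tls"
    "fmt"
)

func main() {
    // Hypothetical address for illustration; point this at a server you operate.
    const addr = "example.com:443"

    conf := &tls.Config{
        // Pin to TLS 1.2 and below, where these suites apply, and offer
        // only AES-CBC suites. A successful handshake means the server is
        // still willing to negotiate CBC.
        MaxVersion: tls.VersionTLS12,
        CipherSuites: []uint16{
            tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
            tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
            tls.TLS_RSA_WITH_AES_128_CBC_SHA,
            tls.TLS_RSA_WITH_AES_256_CBC_SHA,
        },
    }
    conn, err := tls.Dial("tcp", addr, conf)
    if err != nil {
        fmt.Println("no AES-CBC suite negotiated (or connection failed):", err)
        return
    }
    defer conn.Close()
    fmt.Printf("negotiated AES-CBC suite 0x%04x\n", conn.ConnectionState().CipherSuite)
}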

OpenSSL 1.0.2 users should upgrade to 1.0.2h
OpenSSL 1.0.1 users should upgrade to 1.0.1t

This issue was reported to OpenSSL on 13th of April 2016 by Juraj
Somorovsky using TLS-Attacker. The fix was developed by Kurt Roeckx
of the OpenSSL development team.

EVP_EncodeUpdate overflow (CVE-2016-2105)
=========================================

Severity: Low

An overflow can occur in the EVP_EncodeUpdate() function which is used for
Base64 encoding of binary data. If an attacker is able to supply very large
amounts of input data then a length check can overflow resulting in a heap
corruption.
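
The following Go snippet is a generic illustration of this class of bug
(it is not the OpenSSL code): a signed length computation that wraps
around on very large inputs, so a later "does it fit" check passes even
though the data does not fit. The same class applies to CVE-2016-2106
below.

package main

import "fmt"

func main() {
    // Illustration only, not OpenSSL code: a signed length computation
    // that wraps around when the inputs are very large, so the bounds
    // check below incorrectly passes.
    const bufSize int32 = 1 << 20
    var alreadyBuffered int32 = 2000000000
    var incoming int32 = 2000000000

    needed := alreadyBuffered + incoming // wraps to a negative value
    if needed <= bufSize {
        fmt.Println("length check passed, but the data does not fit:", needed)
    }
}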

Internally to OpenSSL the EVP_EncodeUpdate() function is primarily used by the
PEM_write_bio* family of functions. These are mainly used within the OpenSSL
command line applications. These internal uses are not considered vulnerable
because all calls are bounded with length checks so no overflow is possible.
User applications that call these APIs directly with large amounts of untrusted
data may be vulnerable. (Note: Initial analysis suggested that the
PEM_write_bio* functions were vulnerable, and this is reflected in the patch commit
message. This is no longer believed to be the case).

OpenSSL 1.0.2 users should upgrade to 1.0.2h
OpenSSL 1.0.1 users should upgrade to 1.0.1t

This issue was reported to OpenSSL on 3rd March 2016 by Guido Vranken. The
fix was developed by Matt Caswell of the OpenSSL development team.

EVP_EncryptUpdate overflow (CVE-2016-2106)
==========================================

Severity: Low

An overflow can occur in the EVP_EncryptUpdate() function. If an attacker is
able to supply very large amounts of input data after a previous call to
EVP_EncryptUpdate() with a partial block then a length check can overflow
resulting in a heap corruption. Following an analysis of all OpenSSL internal
usage of the EVP_EncryptUpdate() function, all usage is one of two forms.
The first form is where the EVP_EncryptUpdate() call is known to be the first
called function after an EVP_EncryptInit(), and therefore that specific call
must be safe. The second form is where the length passed to EVP_EncryptUpdate()
can be seen from the code to be some small value and therefore there is no
possibility of an overflow. Since all instances are one of these two forms, it
is believed that there can be no overflows in internal code due to this problem.
It should be noted that EVP_DecryptUpdate() can call EVP_EncryptUpdate() in
certain code paths. Also EVP_CipherUpdate() is a synonym for
EVP_EncryptUpdate(). All instances of these calls have also been analysed,
and it is believed there are no instances in internal usage where an overflow
could occur.

This could still represent a security issue for end user code that calls this
function directly.

OpenSSL 1.0.2 users should upgrade to 1.0.2h
OpenSSL 1.0.1 users should upgrade to 1.0.1t

This issue was reported to OpenSSL on 3rd March 2016 by Guido Vranken. The
fix was developed by Matt Caswell of the OpenSSL development team.

ASN.1 BIO excessive memory allocation (CVE-2016-2109)
=====================================================

Severity: Low

When ASN.1 data is read from a BIO using functions such as d2i_CMS_bio()
a short invalid encoding can cause allocation of large amounts of memory
potentially consuming excessive resources or exhausting memory.

Any application parsing untrusted data through d2i BIO functions is affected.
The memory based functions such as d2i_X509() are *not* affected. Since the
memory based functions are used by the TLS library, TLS applications are not
affected.
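
The following Go snippet is a generic illustration of this class of
issue (it is not the OpenSSL code): a parser trusts a length declared by
a short, attacker-controlled header and allocates that much memory
before checking how much data is actually present.

package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    // Illustration only, not OpenSSL code: 4 attacker-controlled bytes
    // declare a ~4 GB payload length.
    hdr := []byte{0xFF, 0xFF, 0xFF, 0x00}
    declared := binary.BigEndian.Uint32(hdr)

    fmt.Printf("input is 4 bytes, but a naive parser would allocate %d bytes\n", declared)
    // buf := make([]byte, declared) // the problematic up-front allocation
}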

OpenSSL 1.0.2 users should upgrade to 1.0.2h
OpenSSL 1.0.1 users should upgrade to 1.0.1t

This issue was reported to OpenSSL on 4th April 2016 by Brian Carpenter.
The fix was developed by Stephen Henson of the OpenSSL development team.

EBCDIC overread (CVE-2016-2176)
===============================

Severity: Low

ASN1 Strings that are over 1024 bytes can cause an overread in applications
using the X509_NAME_oneline() function on EBCDIC systems. This could result in
arbitrary stack data being returned in the buffer.

OpenSSL 1.0.2 users should upgrade to 1.0.2h
OpenSSL 1.0.1 users should upgrade to 1.0.1t

This issue was reported to OpenSSL on 5th March 2016 by Guido Vranken. The
fix was developed by Matt Caswell of the OpenSSL development team.

Note
====

As per our previous announcements and our Release Strategy
(https://www.openssl.org/policies/releasestrat.html), support for OpenSSL
version 1.0.1 will cease on 31st December 2016. No security updates for that
version will be provided after that date. Users of 1.0.1 are advised to
upgrade.

Support for versions 0.9.8 and 1.0.0 ended on 31st December 2015. Those
versions are no longer receiving security updates.

References
==========

URL for this Security Advisory:
https://www.openssl.org/news/secadv/20160503.txt

Note: the online version of the advisory may be updated with additional details
over time.

For details of OpenSSL severity classifications please see:
https://www.openssl.org/policies/secpolicy.html


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/4ytJgPFnaIY/20160503.txt

Original article

Go 1.7 freeze announced

The Go 1.7 freeze has begun.

Important pending CLs can still be reviewed but really need to be completed and merged by the end of the week, or else postponed.

The remaining bug fix work should be focused on regressions since Go 1.6, especially the kind of crash/unavoidable problem that we would issue a point release for.

Open Go 1.7 bugs for problems that are not new since Go 1.6 should in general be postponed (moved to Unplanned milestone): if Go 1.6 behaved that way, it’s probably OK for Go 1.7 to continue to behave that way. That’s even more true if Go 1.5 or earlier also behaved that way.

As noted in past emails to golang-dev and on golang.org/wiki/Go-Release-Cycle, the constraints above are stricter than in past cycles. An explicit goal is to ship the first beta on time, by May 31, instead of many weeks late as has been our past practice. (If the past pattern held, this release’s first beta would be seven weeks late, or one week before the scheduled release date.)

Thanks.

Russ


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/CSqNuqyhAjI/

Original article

Age of Learning, a quiet giant in education apps, raised $150M at a $1B valuation from Iconiq

Some startups raise a lot of money with much fanfare before they’ve ever shipped a product, but some grow under the radar, building something that clicks, and then slowly amassing users and revenues before most even realise they’ve arrived. Now, one of the latter — an education startup called Age of Learning — has moved into the billion dollar valuation club on… Read More


Original URL: http://feedproxy.google.com/~r/Techcrunch/~3/98uIgcJpNEI/

Original article

When innovating stops making sense

If you listened carefully to business news last week you could hear the sound of a giant tree falling in a quiet forest. It happened when Rovi bought TiVo for $1.1 billion. It was a merger of convenience, a way for TiVo to get out of its slump and die gracefully without much shareholder pain. The buyer, Rovi, is a meta-data provider to set-top boxes while TiVo was the original set-top box,… Read More


Original URL: http://feedproxy.google.com/~r/Techcrunch/~3/B0BoFIVesRk/

Original article

Giphy launches a keyboard for iOS called Giphy Keys

With all of Giphy’s integrations with chat platforms, it was only a matter of time before the betaworks-backed GIF platform launched its own keyboard. World, welcome Giphy Keys into the mix. The Giphy Keys third-party keyboard launches on iOS today, and lets users send GIFs from any app they want. Even Snapchat. Here’s how it works: Once you’ve downloaded the Giphy Keys app… Read More


Original URL: http://feedproxy.google.com/~r/Techcrunch/~3/UhAuKzKOuZ8/

Original article

Superdesk – An End-To-End Platform for News

Contact us

superdesk@sourcefabric.org

Prague
Sourcefabric z.ú. Salvátorská 10, 110 00 Praha 1
phone: +420 222 362 540

Berlin
Sourcefabric GmbH, Prinzessinnenstraße 20
Aufgang A, 10969 Berlin
phone: +49 30 6162 9281

Toronto
Sourcefabric North America, Centre for Social Innovation
720 Bathurst St. Suite 203
Toronto, Ontario M5S 2R4


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/cmTEbuoQTDU/

Original article

Show HN: Go Actor Model, ultra fast distributed actors for Golang

README.md

GAM is an MVP Actor Model framework for Go.

Design principles:

Minimalistic API
In the spirit of Go, the API should be small and easy to use.
Avoid enterprisey, JVM-like containers and configurations.

Build on existing technologies – there is already a lot of great tech for e.g. networking and clustering; build on those.
e.g. gRPC streams for networking, Consul.IO for clustering.

Pass data, not objects – Serialization is an explicit concern, don’t try to hide it.
Protobuf all the way.

Be fast – Do not trade performance for magic API trickery.

Ultra fast remoting: GAM currently manages to pass 800k+ messages per second between nodes using only two actors, while still preserving message order!

:> node1.exe
2016/04/30 20:33:48 Host is 127.0.0.1:55567
2016/04/30 20:33:48 Started EndpointManager
2016/04/30 20:33:48 Starting GAM server on 127.0.0.1:55567.
2016/04/30 20:33:48 Started EndpointWriter for host 127.0.0.1:8080
2016/04/30 20:33:48 Connecting to host 127.0.0.1:8080
2016/04/30 20:33:48 Connected to host 127.0.0.1:8080
2016/04/30 20:33:48 Getting stream from host 127.0.0.1:8080
2016/04/30 20:33:48 Got stream from host 127.0.0.1:8080
2016/04/30 20:33:48 Starting to send
2016/04/30 20:33:48 50000
2016/04/30 20:33:48 100000
...snip...
2016/04/30 20:33:50 950000
2016/04/30 20:33:50 1000000
2016/04/30 20:33:50 Elapsed 2.4237125s

2016/04/30 20:33:50 Msg per sec 825180 <---

Why Actors


  • Decoupled Concurrency
  • Distributed by default
  • Fault tolerance

For a more in-depth description of the differences, see this thread: Actors vs. CSP

Hello world

type Hello struct{ Who string }
type HelloActor struct{}

func (state *HelloActor) Receive(context actor.Context) {
    switch msg := context.Message().(type) {
    case Hello:
        fmt.Printf("Hello %v\n", msg.Who)
    }
}

func main() {
    props := actor.FromInstance(&HelloActor{})
    pid := actor.Spawn(props)
    pid.Tell(Hello{Who: "Roger"})
    console.ReadLine()
}

State machines / Become and Unbecome

type Become struct{}
type Hello struct{ Who string }
type BecomeActor struct{}

func (state *BecomeActor) Receive(context actor.Context) {
    switch msg := context.Message().(type) {
    case Hello:
        fmt.Printf("Hello %v\n", msg.Who)
        context.Become(state.Other)
    }
}

func (state *BecomeActor) Other(context actor.Context) {
    switch msg := context.Message().(type) {
    case Hello:
        fmt.Printf("%v, ey we are now handling messages in another behavior", msg.Who)
    }
}

func NewBecomeActor() actor.Actor {
    return &BecomeActor{}
}

func main() {
    props := actor.FromProducer(NewBecomeActor)
    pid := actor.Spawn(props)
    pid.Tell(Hello{Who: "Roger"})
    pid.Tell(Hello{Who: "Roger"})
    console.ReadLine()
}

Lifecycle events

Unlike Akka, GAM uses messages for lifecycle events instead of OOP method overrides

type Hello struct{ Who string }
type HelloActor struct{}

func (state *HelloActor) Receive(context actor.Context) {
    switch msg := context.Message().(type) {
    case actor.Started:
        fmt.Println("Started, initialize actor here")
    case actor.Stopping:
        fmt.Println("Stopping, actor is about to shut down")
    case actor.Stopped:
        fmt.Println("Stopped, actor and its children are stopped")
    case actor.Restarting:
        fmt.Println("Restarting, actor is about to restart")
    case Hello:
        fmt.Printf("Hello %v\n", msg.Who)
    }
}

func main() {
    props := actor.FromInstance(&HelloActor{})
    pid := actor.Spawn(props)
    pid.Tell(Hello{Who: "Roger"})

    //why wait?
    //Stop is a system message and is not processed through the user message mailbox
    //thus, it will be handled _before_ any user message
    //we only do this to show the correct order of events in the console
    time.Sleep(1 * time.Second)
    pid.Stop()

    console.ReadLine()
}

Supervision

Root actors are supervised by the actor.DefaultSupervisionStrategy(), which always issues an actor.RestartDirective for failing actors.
Child actors are supervised by their parents.
Parents can customize their child supervisor strategy using gam.Props

Example

type Hello struct{ Who string }
type ParentActor struct{}

func (state *ParentActor) Receive(context actor.Context) {
    switch msg := context.Message().(type) {
    case Hello:
        props := actor.FromProducer(NewChildActor)
        child := context.Spawn(props)
        child.Tell(msg)
    }
}

func NewParentActor() actor.Actor {
    return &ParentActor{}
}

type ChildActor struct{}

func (state *ChildActor) Receive(context actor.Context) {
    switch msg := context.Message().(type) {
    case actor.Started:
        fmt.Println("Starting, initialize actor here")
    case actor.Stopping:
        fmt.Println("Stopping, actor is about to shut down")
    case actor.Stopped:
        fmt.Println("Stopped, actor and its children are stopped")
    case actor.Restarting:
        fmt.Println("Restarting, actor is about to restart")
    case Hello:
        fmt.Printf("Hello %v\n", msg.Who)
        panic("Ouch")
    }
}

func NewChildActor() actor.Actor {
    return &ChildActor{}
}

func main() {
    decider := func(child *actor.PID, reason interface{}) actor.Directive {
        fmt.Println("handling failure for child")
        return actor.StopDirective
    }
    supervisor := actor.NewOneForOneStrategy(10, 1000, decider)
    props := actor.
        FromProducer(NewParentActor).
        WithSupervisor(supervisor)

    pid := actor.Spawn(props)
    pid.Tell(Hello{Who: "Roger"})

    console.ReadLine()
}

Networking / Remoting

GAM’s networking layer is built as a thin wrapper on top of gRPC, and message serialization is built on Protocol Buffers.

Example

Node 1

type MyActor struct{
    count int
}

func (state *MyActor) Receive(context actor.Context) {
    switch msg := context.Message().(type) {
    case *messages.Response:
        state.count++
        fmt.Println(state.count)
    }
}

func main() {
    remoting.StartServer("localhost:8090")

    pid := actor.SpawnTemplate(&MyActor{})
    message := &messages.Echo{Message: "hej", Sender: pid}

    //this is the remote actor we want to communicate with
    remote := actor.NewPID("localhost:8091", "myactor")
    for i := 0; i < 10; i++ {
        remote.Tell(message)
    }

    console.ReadLine()
}

Node 2

type MyActor struct{}

func (*MyActor) Receive(context actor.Context) {
    switch msg := context.Message().(type) {
    case *messages.Echo:
        msg.Sender.Tell(&messages.Response{
            SomeValue: "result",
        })
    }
}

func main() {
    remoting.StartServer("localhost:8091")
    pid := actor.SpawnTemplate(&MyActor{})

    //register a name for our local actor so that it can be discovered remotely
    actor.ProcessRegistry.Register("myactor", pid)
    console.ReadLine()
}

Message Contracts

syntax = "proto3";
package messages;
import "actor.proto"; //we need to import actor.proto, so our messages can include PID's

//this is the message the actor on node 1 will send to the remote actor on node 2
message Echo {
  actor.PID Sender = 1; //this is the PID the remote actor should reply to
  string Message = 2;
}

//this is the message the remote actor should reply with
message Response {
  string SomeValue = 1;
}

For more examples, see the example folder in this repository.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/IOnrB2pKRxQ/gam

Original article
