Five Demon Bag

It's surprisingly hard to get a high quality screencap from this movie. All credit to 20th Century Fox

Wow, it's been a whole year since the last round of annual reviews.

It seems like only yesterday that I was trawling through hundreds of disparate pieces of information trying to draw a cohesive picture together, then struggling to find the right words to communicate that picture.

Thanks to the inevitable march of time, here we are again.

I was so sure last time that I'd do things throughout the year that would make next time easier.

Then someone went and moved the goalposts.

Can See Things No One Else Can See

It's not unusual for an organisation to re-evaluate its performance evaluation framework, and then change it to maximise desirable outcomes or minimise undesirable ones.

And that's exactly what Atlassian did.

The whole thing is still called OneCycle, but it's a bit different now.

In our old process, the first layer consisted of three dimensions:

  • Values - How well you follow the Atlassian values
  • Role - How well you perform at your role
  • Team - How well you support and elevate your team

These dimensions have been replaced with a set of four pillars. The pillars differ between Individual Contributors (ICs, aka Engineers) and People Managers, but I'm only going to focus on the IC pillars here because they are the ones I care about the most.

The first pillar is Project Impact, which is intended to evaluate whether or not the person has made meaningful contributions to the projects and other initiatives that drive and deliver change within the organisation.

The second pillar is Craft Excellence, which is intended to evaluate whether or not the person is skilled and experienced in their craft of choice. For the people I'm responsible for, this directly evaluates their skill at engineering, and includes things like actual coding, design, reviewing others' code, documentation and so on.

The third pillar is Direction, which is intended to evaluate whether or not the person is contributing to the direction of their immediate team. This covers things like improving processes within the local group, driving innovation, etc.

The last pillar is Organisational Impact, which is intended to evaluate whether or not the person is contributing to the direction of the organisation as a whole (i.e. cross-team impact). Things like supporting and driving Foundation (aka charity) events, building reusable components and rolling them out, etc.

Underpinning every single pillar are the Atlassian Values. What this means is that it doesn't matter if you are amazing at creating Project Impact if you alienate your teammates and burn yourself out while you do so.

The pillars describe the what, the values describe the how.

Not every pillar has an equal weighting for each IC role, so there is a bit of nuance to the framework, but that's okay. Nuance is good. It creates flexibility.

The new pillars attempt to capture the things that are most important to the organisation and ensure that everyone takes them into account during the year if they want to do well.

Can Do Things No One Else Can Do

Layered on top of the pillars are the ratings themselves.

In our old process, each dimension had ratings of its own, but they all boiled down to a single overarching rating for the entire year, which was one of the following:

  • Exceptional - You nailed it! Smashed it out of the park
  • Great - You did what was expected
  • Off - You had some growth areas that held you back

Now there are five ratings, which are only applied at the overarching level. Each pillar doesn't technically have a rating of its own, but you could, if you were sufficiently motivated, give each person a rating for each pillar to help calculate the overarching rating.
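If you did want to do that per-pillar exercise, it might look something like the sketch below. To be clear, this is purely illustrative: Atlassian doesn't publish a formula like this, and the pillar weights and rating cut-offs here are invented for the example (the post only says that weights differ between IC roles).

```python
# Illustrative only: the weights and thresholds below are made up,
# not Atlassian's actual process.

PILLARS = ["project_impact", "craft_excellence", "direction", "org_impact"]

# Hypothetical weights for one IC role; per the post, these vary by role.
WEIGHTS = {
    "project_impact": 0.40,
    "craft_excellence": 0.35,
    "direction": 0.15,
    "org_impact": 0.10,
}

# Hypothetical cut-offs mapping a weighted 1-5 score to the five ratings.
BANDS = [(4.5, "GE"), (3.8, "EE"), (2.8, "ME"), (2.0, "MM"), (0.0, "DN")]

def overarching_rating(pillar_scores: dict[str, float]) -> str:
    """Collapse per-pillar scores (1-5) into a single overarching rating."""
    weighted = sum(WEIGHTS[p] * pillar_scores[p] for p in PILLARS)
    for cutoff, rating in BANDS:
        if weighted >= cutoff:
            return rating
    return "DN"

# A strong year on impact and craft, average elsewhere:
print(overarching_rating({
    "project_impact": 4.5,
    "craft_excellence": 4.0,
    "direction": 3.0,
    "org_impact": 3.0,
}))  # -> EE (weighted score of 3.95)
```

In practice the real judgment is far fuzzier than a weighted average, which is probably why the pillars officially don't get ratings of their own.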

The highest rating is Greatly Exceeding Expectations (GE). This is sort of equivalent to the old Exceptional rating, except even more intense. In order to get this rating, you really need to have smashed it out of the park, adding huge amounts of value and doing all sorts of things that were not expected of you.

The next rating is Exceeding Expectations (EE), which represents you doing a pretty fantastic job, but not quite hitting the mark on everything, or maybe not doing all of the extra, unexpected stuff.

Right in the middle we have Meets Expectations (ME), which is your classic rating indicating that you're doing what's expected of you, maybe a little bit more, maybe a little bit less. There are gaps, but there are great things as well, and they probably even out.

Below that is Meets Most Expectations (MM), which means that you're falling behind a little bit. You're still delivering value, but you've got more gaps than you have greatness when everything is put into balance. You've almost certainly got meaningful growth areas that need to be resolved in order to improve your rating.

Last is Does Not Meet Expectations (DN), which is exactly what it says on the can. You're not delivering enough value based on the expectations of your role and you have some serious growth areas that need to be addressed.

It's expected that most of the organisation is probably going to be in either ME or EE, with a small percentage in GE, MM and DN. After all, the goal at Atlassian is to hire high performing people, so our rating distribution should represent that.

The ratings are subsequently used in salary, bonus, and equity calculations, with more of each of those things being applied to those who do better vs those who are struggling to deliver value.

What More Can A Guy Ask For?

I'm actually not sure how I feel about the new OneCycle, with its pillars and its ratings.

On the upside, the change came with a decent explanation as to why it was being done. The idea was to focus our reward and recognition structure on the things that are most important to the company and to provide managers with more flexibility during the yearly evaluation.

The underlying goal is to raise the overall performance of the company, which is also why we've recently done some reallocation of resources and gone through an assortment of company restructures.

Atlassian is positioned as a growth company, and the current economic climate is making that more difficult, so changing things with the intent to maintain our previously high level of growth is critically important.

On the downside, we rolled the change out in the middle of the year, which felt like a bit of a dick move because it shifted the goal posts for anyone who had already aligned their performance to the previous model. In my opinion it would have been much better to wait until the end of the review period before making a sweeping change like this, or perhaps to announce it but not apply it until the new review period.

The other downside is that I only just learned how to use the last model, and now I have to learn how to use a new one. I know, I know, change is the nature of the game, but it still adds stress to an already stressful time of year. We did get a bit of a taste during a dry run earlier in the year, so it's not like it came out of nowhere, but I still think we could have done better.

Upsides and downsides aside, the other reason I'm not sure how I feel about the change is that the old model just resonated with me better.

I don't know if it's because I'm more familiar with it, or because the provided guidance was better or what, but I liked talking about things in terms of Values, Role and Team. It felt right.

I'm somewhat indifferent about the old ratings vs the new ratings. The language used in the old ones felt softer and more nurturing, but I do enjoy the clarity of the new ratings and the fact that I have more flexibility.

Wind, Fire, All That Kind Of Thing!

The new OneCycle process is probably as sound as the previous one, once I get over the learning curve and let go of my fondness for the old one, that is.

Atlassian is the first job I've had where I've needed to perform structured and comprehensive performance reviews for people where the results have transparent and meaningful impact on their remuneration.

That's great! But the stress involved in performing the evaluations and the pressure that it puts on the people being evaluated does leave me questioning the overall value.

Don't get me wrong, helping people to grow by identifying areas for improvement or emphasising strengths is critically important. It's literally one of the main reasons that management exists as a discipline.

But while tying growth into compensation makes sense from a meritocracy point of view, I worry that it shifts the perspective of the people involved.

Instead of trying to grow purely to become more awesome, the goal becomes to grow for reward or recognition.

Instead of trying to help people become more awesome, the goal becomes to evaluate them and put them in an appropriate bucket.

It's a different mindset and it encourages different behaviours.

And I just don't know if it encourages the right mindset.