
Spark of Madness

I miss Robin Williams :(

ShipIt, Atlassian's company-wide innovation event, has come and gone yet again.

That's a good thing, because innovation is important and ShipIt is the devil on your shoulder, noisily questioning why you aren't making more time for innovation, its demented voice laced with judgement.

Well, that's how it works for me anyway.

Good Morning, Data Scientists!

I'm lucky enough to work for a company where data is a first-class citizen. To that end, we have what I assume is a data lake: a big old collection of data from across the organisation with a bunch of analysis and visualisation tooling sitting on top.

I have absolutely no idea how it all works. It might as well be magic.

What I do know is that when I want to query some of the data, I can write SQL queries and then easily create visualisations from the resulting data sets. I do this sort of thing semi-frequently, especially when I need to build a dashboard to clearly show project success criteria.

In writing the SQL, though, I often run into issues because it's not just standard old ANSI SQL.

It's Spark SQL. Well, technically it's Spark in general, and I don't know how to use the rest of its capabilities, but the root point still stands.
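To give a taste of what I mean, here's a minimal sketch (the table and column names are entirely made up) of the kind of query I end up writing, leaning on constructs like LATERAL VIEW explode() and date_sub() that Spark SQL supports but plain ANSI SQL doesn't:

-- Hypothetical table and columns, purely to illustrate the dialect gap.
-- LATERAL VIEW explode() flattens an array column into one row per element,
-- which has no direct equivalent in standard ANSI SQL.
SELECT i.issue_key, l.label
FROM issues i
LATERAL VIEW explode(i.labels) l AS label
WHERE i.created_at >= date_sub(current_date(), 30);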

Look, there really is a lot that I don't know about our data lake.

Anyway, for this ShipIt, someone in the data team came up with the bright idea to create a guide to Spark SQL. Something that would sit alongside the official documentation and provide tips and tricks for effectively working with Spark in an Atlassian context.

A repository of deliciously arcane secrets.

Naturally I wanted in.

Artificial Intelligence

It's surprisingly important to me to have data that backs up my position. I don't like making claims that I can't justify with hard numbers.

I blame my ill-fated foray into OKRs for this tendency. Doing that project really reinforced in my head that if you can't measure something when you have a goal, you probably don't understand it well enough to know whether you've ever accomplished it.

As a result, I really want to make the most of the massive wealth of information that is available inside the Atlassian data lake. I'm pretty sure that everything that I could ever want is in there, if only I could find it and bend it to my will.

But there is a gap between desire and capability, because I know enough to be dangerous when it comes to data analysis and visualisation, but not enough to be confident or effective.

Imagine my pleasure when I saw a project being proposed for ShipIt that would help me fill that gap.

Not only do I get to participate in an innovation ritual, but I also get to learn something that will be immediately valuable!

But it's not just that.

I've said it a tonne of times before, but ShipIt is an opportunity for me to meet people completely disconnected from my day-to-day. To see different points of view, different patterns and practices and just absorb more about Atlassian in general.

In this particular case, the project team was made up entirely of data scientists, which is a group I don't have a huge amount to do with but which I respect greatly.

It never hurts to have friends in data places.

Night At The Data Lake

At the end of ShipIt, I didn't feel like I'd made a meaningful contribution.

Some of the past projects that I've participated in have been very collaborative, with lots of discussion about what we want to accomplish and how we're going to get there. Typically, this allows me to pick up a piece of work and run with it, which is exactly what I want from my innovation time. I don't want to have to be the manager or organiser; I get enough of that in my day job.

This project didn't really follow that model. There was a modicum of discussion, but not much in the way of identifying pieces of work that fit into a greater narrative. I was the odd man out (i.e. the only member who wasn't a data scientist or data engineer), so I didn't have any secrets of my own to contribute and, as a result, felt a bit lost.

I thought that maybe I would be able to provide some sort of testing or validation service: to take the resulting tips and tricks, try them out and then mutate the documentation around them to be more usable.

And I did do some of that, but it just didn't feel like enough. I didn't feel effective.

But it wasn't like the time spent was a complete loss. I learnt more about Spark SQL and the team did manage to create a useful guide by the end. In fact, it's useful enough that I've already returned to it at least once since.

More importantly, I learnt more about what makes me happy as far as my ShipIt participation is concerned, which will help me set myself up for success next time. For example, I need to spend more time understanding candidate projects, team makeup and process before I commit.

It's hard to find time to do that sort of thing in the lead-up to ShipIt, but this experience has definitely reinforced that I need to try harder or suffer the consequences.

What Dreams May Come

ShipIt projects don't always have to break the world to be a valuable use of time, and I think this one is a good example of that.

It might not have been the most fulfilling experience I've ever had, but I don't consider it a failure. Far from it.

I walked away wiser than when I entered, got to meet some new people, and made a thing that would not have existed otherwise.

The innovation devil on my shoulder is sated, chittering quietly in its slumber, its chaotic whispering barely audible.

Now if only I could do something for the rest of the voices.