Thursday, February 23, 2012

Lightweight, Agile-Esque Project Management for Visual Effects and Animation

Rodney Mullen, the patron saint of Agility.

Introduction

I'm helping coach a small development team through an internal product release cycle, using a project management methodology that is structured and lightweight. I'd like to describe how we work, because it's a methodology that I've seen have success on more than one project with different teams, team sizes, and product types. From the outset, I'll emphasize that what I'm describing differs from Scrum or any other specific 'branded' Agile methodology - we're inspired by these schools, but not identical to them.

I intend to focus primarily on the 'what and how', rather than the 'why'.

Formal Agile Reading

The Agile Manifesto is the founding document of the Agile Development idea, and worth a read.

Mike Cohn's Agile Estimating and Planning is an excellent guide to the full-blown Agile development methodology. This was where I started, with guidance from Kevin Tureski.

It's important also to note that there's a lot of intelligent and experience-based criticism of Agile (you'll find no lack of it in a Google search). I thought this was one of the most informative and well-written articles about the pitfalls of Agile: Game Development in a Post-Agile World. This article excellently emphasizes the point that a particular development methodology won't magically turn a poor team into a strong one, and wisely cautions against getting caught up in buzzwords and obsession with process over insight.

Axioms

We assert a few things to be true as justification for the method. They are obviously debatable, but I think they're mostly uncontroversial.
  • Artists/Developers/Engineers are poor at providing precise completion-time estimates for any particular task. The inaccuracy is largely independent of experience.
  • Artists/Developers/Engineers are good at providing relative difficulty estimates (is Task A twice as hard as Task B?). This estimation ability does increase with experience.
  • Users are better at requesting small enhancements and feature additions to existing software than they are at accurately imagining software which does not yet exist.

Overview

The team consists of Stakeholders (users), Developers, a Project Manager, and a Coach. The deliverable has a Release Cycle which is a fixed period of time (3 months, in our current case). The development takes place in Iterations which are also a fixed period of time, between 2 and 4 weeks (we use 2 weeks). The deliverable is collaboratively defined, by the Stakeholders with help from the whole team, in terms of User Stories, which take a specific form. The development cycle adheres to milestones like clockwork - iterations are not allowed to run long, and the release cycle is also firmly fixed. The heartbeat of the release cycle looks like this:
  • Release Kick-Off. (Beginning of First Month)
  • Release User Story Writing. (Beginning of First Month)
    • Iteration 0 Kick-Off. (Monday Morning, Week 0)
      • Iteration 0 Story Tasking (Monday Afternoon, Week 0)
      • Developer Standup. (Tuesday Morning, Week 0)
      • Developer Standup. (Wednesday Morning, Week 0)
      • ...
      • Developer Standup. (Thursday Morning, Week 1)
      • Iteration 0 Demo. (Friday Afternoon, Week 1)
    • Iteration 1 Kick-Off. (Monday Morning, Week 2)
      • Iteration 1 Story Tasking (Monday Afternoon, Week 2)
      • Developer Standup. (Tuesday Morning, Week 2)
      • Developer Standup. (Wednesday Morning, Week 2)
      • ...
      • Developer Standup. (Thursday Morning, Week 3)
      • Iteration 1 Demo. (Friday Afternoon, Week 3)
    • ...
    • Iteration N Kick-Off. (Monday Morning, Week (N*2))
      • Iteration N Story Tasking. (Monday Afternoon, Week (N*2))
      • Developer Standup. (Tuesday Morning, Week (N*2))
      • Developer Standup. (Wednesday Morning, Week (N*2))
      • ...
      • Developer Standup. (Thursday Morning, Week (N*2+1))
      • Iteration N Demo. (Friday Afternoon, Week (N*2+1))
  • End of Release Demo (End of Third Month).
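To make the fixed rhythm above concrete, here's a small sketch that generates the heartbeat for a release. Everything here is illustrative - the function name, the two-week iteration length, and the exact standup days are my assumptions, not part of any formal tool:

```python
# Illustrative sketch: generate the milestone calendar for a release cycle,
# assuming two-week iterations with daily standups (weekends excluded).

def release_heartbeat(num_iterations=6):
    """Return milestone events as (name, when) tuples, per the cadence above."""
    events = []
    for n in range(num_iterations):
        w = n * 2  # each iteration spans weeks w and w+1
        events.append((f"Iteration {n} Kick-Off", f"Monday Morning, Week {w}"))
        events.append((f"Iteration {n} Story Tasking", f"Monday Afternoon, Week {w}"))
        # Daily standups for the rest of the iteration.
        for offset, days in ((0, ("Tuesday", "Wednesday", "Thursday", "Friday")),
                             (1, ("Monday", "Tuesday", "Wednesday", "Thursday"))):
            for day in days:
                events.append(("Developer Standup", f"{day} Morning, Week {w + offset}"))
        events.append((f"Iteration {n} Demo", f"Friday Afternoon, Week {w + 1}"))
    return events
```

The point of writing it out this way is how little room there is for negotiation: the dates fall out of the iteration number mechanically, which is exactly the "like clockwork" property described above.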

Detailed Description of each Stage

Release Kick-Off

The Release Kick-Off is attended by everybody - Stakeholders, Coach, Project Management, and Developers. In this meeting, which is probably the longest meeting in the entire process, a coarse, broad vision for the deliverable is defined and agreed upon. In my experience, this broad sketch of what's being built (or added to an existing deliverable) can take multiple meetings and involves a lot of negotiation. The important thing, at this stage, is to keep the descriptions coarse, like a sketch. Suppose we imagine that the deliverable is a web-based dailies tool with media browsing features. The broad vision might then consist of a list of features such as, "works in web-browser", "uses inlined web video for immediate lower-quality playback", "links to rendered frames for high-quality playback", "launches RV when high-quality playback is needed", "looks the same in different browsers on different operating systems", "has robust search tools", "media search can be refined"... you get the idea.

The deliverable is being defined at the level you'd expect to find on a brochure, but not in any more detail - there are no implementation statements such as "written in HTML5 and javascript", no specific descriptions of user interface components, no button-and-slider layouts, no software flow charts - just coarse features. Once this coarse definition of what the release will contain has been agreed upon, the Project Manager, Coach, and Developers help the Stakeholders turn the features into formal User Stories, which will provide the structure for feature prioritization and iteration targets.

Release User Story Writing

There are entire books written about how to construct meaningful User Stories. User Stories are also the most easily mocked aspect of the various Agile development methodologies, and I'll honestly say that constructing them can, at times, feel tedious. Nonetheless, they're really important, and without them the other parts of the process become less productive and less focused. The Release User Story writing is ideally attended by everybody, but as Stakeholders are often very busy, the translation of coarse features into formal user stories is often done off-line by the Project Management, Coach and Developers, and then presented to the Stakeholders for refinement and approval. (We pretty much always do this, in fact). This is a departure from the formal Agile methodologies, I think - but a necessary reality in the production world.

A good User Story describes a small nugget of functionality, from the perspective of the person using that functionality to accomplish a task, and can be trivially adapted into an Acceptance Test or even an automated Unit Test, in the case of non-interactive features. We use a formal structure for our user stories:
As a Type Of User, I need to Do Some Action, so that I may Accomplish Some Goal.
Some example user stories, in our hypothetical web-based dailies media browser:
As a Production Assistant, I need to create playlists based on overnight renders & comps, so that I may prepare what will be shown in dailies.

As a Compositor, I need to be able to play my shot in the current cut, so that I can make sure I'm preserving continuity.

As a Production Coordinator, I need to enter notes associated with movies that are being viewed by a Supervisor, so that I can keep track of what Artists are being asked to do.

We try to make user stories as small and atomic as possible - so, for example, we might make a separate user story for being able to save playlists to a text file, separate from the user story about being able to create playlists. Loading playlists from a saved text file would be a third user story, and so on. Each of the stories we've given here can readily be turned into an Acceptance Test. The Production Assistant sits down with the tool, tries to create a playlist from media, and if they succeed, the story has been fulfilled. Then, if they go to save their created playlist to a text file, and that succeeds, the second "saving a playlist" story has been fulfilled, and so on. These are objective, verifiable actions. The Project Management, Coach and Developers have to work with the Stakeholders to make sure these stories are concise and not vague. A bad story would be something like,
As a Supervisor, I need the web interface to the playback tool to be easy to use, so that I can get through dailies without getting bogged down.
While it's conceivable that you could sit down with a particular Supervisor and ask them to give the tool a try, and then ask them, "so was it easy to use?", this is clearly too vague and too large a feature specification. To nail it down, we might try to reframe a specific action in objective terms, like, ".. I can find several versions of a particular shot's comp with two or fewer actions", and ".. I can have my search results automatically filtered based on my job title and my current assigned tasks, so that I don't have to sift through a large number of choices and I can find things quickly". We could distill this vague "easy to use" statement into probably a dozen specific stories, and those could then be prioritized by the stakeholders in a meaningful way, and regression tested by the developers without subjective assessments.
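Because our stories all follow the same formal structure, they can even be carried around as data and checked off mechanically at the demo. Here's a minimal sketch of that idea - the class and field names are mine, purely for illustration, not any tool we actually use:

```python
# Minimal sketch: a user story in the formal structure described above,
# with an acceptance flag that only flips when the action verifiably succeeds.
# All names here are illustrative assumptions.

class UserStory:
    def __init__(self, role, action, goal):
        self.role = role      # "Type Of User"
        self.action = action  # "Do Some Action"
        self.goal = goal      # "Accomplish Some Goal"
        self.accepted = False

    def __str__(self):
        return f"As a {self.role}, I need to {self.action}, so that I may {self.goal}."

    def accept(self, test_passed):
        # At the demo, the Stakeholder attempts the action; the story is
        # only checked off when that attempt objectively succeeds.
        self.accepted = bool(test_passed)
        return self.accepted

story = UserStory(
    "Production Assistant",
    "create playlists based on overnight renders & comps",
    "prepare what will be shown in dailies",
)
```

Notice that a vague story like "easy to use" can't be expressed this way without the `action` field turning to mush - which is a decent smell test for whether a story is concrete enough.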

At the beginning of the Release Cycle, some of the stories will necessarily be less well defined than others, because the product (or new features) as a whole doesn't exist yet. The Agile philosophy allows for a team to change its course along the way based on changing production needs and in reaction to the product as it takes shape. The way that we incorporate this wisdom is to focus our energy on the highest priority features, and the ones that are going to be worked on first - understanding that we'll refine and tighten them as we go along. Production realities mean that we probably won't have access to the Stakeholders for more than a few hours at most, so we have to prioritize our story emphasis.

Iteration Kick-Off

Each Iteration begins with a kick-off meeting that everyone from the team attends. The goal of the meeting is to (re)prioritize the User Stories to define what will be worked on by the developers in the iteration. We begin each of these meetings by reading out loud the broad goals of the project as defined in the Release kick-off, just to remind everybody what we're doing and to keep our long-term focus. Once a few iterations are under the team's belt, they'll have a pretty decent idea how many stories (or story points, described later) they can get done in an iteration, which helps limit the scope of the meeting. This meeting usually takes us about an hour - for our small project, we have four Stakeholders from different departments, and the conversation is usually about which stories represent features that are absolute showstoppers, and which ones we could theoretically live without, if we had to. We come out of this meeting with usually 4-6 stories that our team will attempt in the Iteration, though obviously this would change based on the size of the team and the technical difficulty of the tasks.

Towards the beginning of the Release Cycle, this meeting can be difficult and abstract because the Users might say, "the most important story (feature) is the one about being able to play movies in the current sequence edit", while the developers realize that they haven't yet implemented movie playback at all. We address this in two ways - to a limited degree, we allow for "Developer Stories", which are user stories told from the point of view of the Developer, and talk about necessary architectural needs. We try to minimize the number of Developer Stories because they are not part of the direct User Experience, but sometimes they're inevitable. The second (and more interesting) way that we address the problem of fulfilling high-level feature requests in the absence of low-level frameworks is by creating Stubs - placeholders for future implementation. So, if the story states that the user can select movies and create a playlist sequence from them, we might implement an interface where Movies are represented by simple text-rectangle placeholders, and the representation of them playing in a sequence is just a repeating print-out of the moviefile names. While this may seem backwards, I've honestly found it to be the most valuable part of the process - good, top-down API design happens almost for free when you work from the User Stories downwards, and it naturally lends itself to a frequent-refactoring development model that produces clean, well-segmented code.
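A stub for the playlist example might look something like this sketch - the class names and the text-placeholder "playback" are hypothetical, but they show how the user-facing story can be fulfilled at the interface level before any real media engine exists:

```python
# Stub sketch: movies are plain text placeholders, and "playing" a sequence
# is just a report of the file names. The real decoder/player can replace
# these classes later without changing the interface the story exercises.

class MediaPlaceholder:
    """Stands in for a real movie clip; swapped for a real media class later."""
    def __init__(self, filename):
        self.filename = filename

    def play(self):
        # Real implementation will decode and display frames; for now,
        # "playing" just reports the file name.
        return f"[playing] {self.filename}"

class PlaylistStub:
    """Fulfils the 'create and play a playlist' stories against placeholders."""
    def __init__(self):
        self.items = []

    def add(self, media):
        self.items.append(media)

    def play_sequence(self):
        return [m.play() for m in self.items]

playlist = PlaylistStub()
playlist.add(MediaPlaceholder("shot010_comp_v003.mov"))
playlist.add(MediaPlaceholder("shot020_comp_v001.mov"))
```

The useful property is that the Acceptance Test for the playlist story can be run against this stub today, and against the real player in a later iteration, unchanged.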

These Iteration prioritization sessions are the soul of how the Stakeholders are able to stay tightly informed about the project and steer the project to make sure their needs are met. Sometimes, because of the psychotic pace of film production, the needs may change during the release. More commonly, as the Stakeholders see demonstrations of features from previous iterations, their understanding of their needs gets more informed and they may shift the priority of certain stories around, and add new stories as needed. Our current project experiences a fair amount of this - we have gotten changes in each iteration so far.

Iteration Tasking

The tasking meeting is attended by the Project Management, Coach, and Developers only. In this meeting, the user stories for the iteration are assigned a relative difficulty rating, and then the developers break up each of the user stories into discrete technical tasks. How we handle Iteration Tasking is one of the areas where I feel like we depart from the more formal Agile methodologies. The assignment of relative difficulty to stories is done with "Story Points". This is similar to a technique called "Planning Poker", but we've streamlined and simplified it a bit.

Story Points

Each story is assigned, by the developers, a number of points that represent its difficulty. In our industry, developers tend to have very highly trained and specific skill sets, so we let the stories be rated by the developer whose expertise naturally applies. The shorthand for this process is to rate things "Easy" (1 pt), "Medium" (5 pts), or "Hard" (13 pts), but we allow for more granularity. We use the numbers [1, 2, 3, 5, 8, 13]. These points do NOT correspond to time estimates; they are gut assessments of relative difficulty. One way that we help calibrate this clearly subjective process is by defining what a minimally "easy" (1 pt) story might be - for example, a trivial story in our media browser case might be, "displays the file name of the currently playing movie". Similarly, we try to get a sense of what the hardest possible story that's theoretically doable in an iteration would be, perhaps, "skips frames during movie playback to maintain synchronized playhead with audio" (13 pts). Though we're not making explicit time estimates, we can blur the lines a bit when we say something is just too large for an iteration.
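Because the scale is a fixed set of values, a story-tracking script can trivially enforce it. We do this informally by hand, but as a sketch of the rule (function name is mine):

```python
# The allowed story point values, as described above. Anything bigger than
# 13 is, by definition, too large for one iteration and must be broken up.
STORY_POINTS = (1, 2, 3, 5, 8, 13)

def rate_story(points):
    """Validate a developer's gut-feel difficulty rating against the scale."""
    if points not in STORY_POINTS:
        raise ValueError(
            f"{points} is not on the scale {STORY_POINTS}; "
            "a story that feels bigger than 13 points should be broken up."
        )
    return points
```

Rejecting off-scale values is the whole point: a rating of 20 isn't "more precise", it's a signal that the story needs to be split.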

Whenever we have a story that's 13 points, we talk a fair bit about whether it can be broken into smaller pieces. It often can be, and we also often find that of those sub-stories, the Users only care about (prioritize) some of them, and not others. Any story which seems bigger than 13 points automatically gets drilled into, to figure out how to break it up and rephrase it. There's no specific method for this, and this is one of those areas where an experienced, knowledgeable development team and project management use their intuition to make good decisions. In practice, impossibly large, unbreakable stories are pretty rare - I've never encountered one, but I've definitely been in conversations about how to break up big stories.

We sometimes re-write stories to make them slightly smaller, slightly bigger, or more specific as the development gets more refined. In doing so, we will re-rate their point values. In my current project, this seems to happen in about 10% of the stories or so.

Task Index Cards

Each story is written out on a piece of letter-sized paper (8.5"x11") and put onto a large pin-board in our development room, with the stories arranged by iteration in order of priority, from left to right. Beneath each story, the developers with relevant expertise write out the discrete development steps they'll take to approach implementation of the story, as short sentences on index cards. These cards are intended to be light-weight - a shorthand reminder for the developer to do something. They should NEVER be entered into any sort of formal tracking system, or made concrete in any way. Developers should feel like they can change their task definitions daily, if need be. Our mental model for these tasks is that they should represent somewhere between a few hours and slightly more than a day's worth of work. Example tasks might be:
"Implement Media Placeholder Base Class"
"Make Playback Method Pure Virtual"
"Add Doxygen Notes to BlahBlahBlah.h"
As the coach on the project, one of the things that I, and the Project Management, will do is ask developers to create index cards for their tasks, even if the developer considers them fleeting or trivial. We also try, within an iteration, to create index cards for tasks that were completed the previous day and had not been anticipated.

More formal Agile approaches will assign time estimates to these tasks, or otherwise attempt to count them up in some way. I have found that this is a waste of time, and produces meaningless and unnecessary data. The bulk of our development methodology is fairly regimented, and I think it's important for these tasks to be fluid and graceful. Trying to enumerate them is tedious and misleading. Not enumerating these tasks means that it's difficult to compute "burn-down" within a single iteration - in other words, it's often difficult to objectively determine that the team will or will not hit certain stories within a particular iteration. This is, in my opinion, exactly why the iterations are kept short. I know Kevin will disagree with me on this one!

Tracking Stories and Tasks

We use project management software to track our stories, but we do not create a secondary manifestation of our index-card tasks - again, we want to make sure that we let developers feel agile and quick with their tasks.

Here's a picture of our Story & Task board for our current iteration - the green pages are stories, the white index cards are tasks, and from left to right, the stories run from highest to lowest priority. The tasks have been artfully blurred to protect privacy!


Daily Standups

Every morning of the iteration, the Project Management, Coach, and Developers have a standup meeting in which each Developer answers the following questions:
  • What Did I Do Yesterday?
  • What Will I Do Today?
  • Is Anything In My Way?
For each task completed, the index card is crossed off with a red marker. This activity quickly gains a Pavlovian feeling of catharsis. Tasks which had not been anticipated, but are already complete, get an index card added and crossed off. Tasks which are no longer correct or meaningful are removed. Any new tasks get new index cards. If there are indeed obstacles for any developer, it is the job of the Project Management to make sure they are addressed quickly.
The meeting is called a standup because, ideally, everybody remains standing as a way of keeping the discussions short. At my previous job, we used an actual egg timer to limit each developer's time to three minutes. There is an almost pathological tendency for developers to start talking about algorithmic difficulties, and for other developers to begin brainstorming. It is the job of the Coach and the Project Management to mute this and suggest the conversation continue after the standup. We have to do this fairly frequently. It is often humorous, because sometimes the entire room gets sidetracked if the problem is interesting. A timer with a bell helps manage this.

Iteration Demo

At the end of the two-week iteration, the entire team meets and the project is demonstrated to the Stakeholders. The Project Management will read out the stories that were addressed in the iteration, and indicate which stories the team feels that they've addressed. Depending on the maturity of the feature, either the Stakeholders or the Developers will go through each story and attempt the action described, and determine whether the story has been fulfilled. We often show stories that are "almost" done as well, though we don't get to check them off the list until they're completely fulfilled. In my experience, the Stakeholders will agree and approve that a Story has been fulfilled about 75% of the time. Sometimes there are slight miscommunications, for which we are very glad to have only gone off track for two weeks (and they're usually not complete misses, just minor changes), and sometimes a feature we thought was complete just doesn't quite perform under the sizzling pressure of the demo. But most of the time, the presented stories are accepted and a cheerful amount of beer is then victoriously consumed.

Velocity

At the end of each iteration, the team is awarded the story points for each story they've gotten Stakeholder acceptance for. The points [1, 2, 3, 5, 8, or 13] for each accepted story are added to a total for the iteration, which is normalized by the number of man-weeks that the iteration consumed. We don't usually work overtime, so we just divide by the number of developers. We do not give extra weight to developer man-weeks based on experience. This normalized "points per iteration per man-week" measurement is called "velocity", and it represents one of the great agile awesomenesses: This number tends to stabilize after the first few iterations and remains fairly constant for a release cycle. Once we've gotten closer to the end of the release cycle (perhaps halfway through), the team is able to count up the total number of story points represented by all the desired features for the release, and can estimate, based on their velocity, whether they'll likely accomplish everything that's requested. If not, it gives the project's investors an early opportunity to decide if the project can reduce its feature list, acquire more resources, or (hopefully not) be suspended.
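The arithmetic behind velocity and the end-of-release forecast is simple enough to sketch. This assumes no overtime, so man-weeks are just developers times weeks; the function names are mine:

```python
import math

def velocity(accepted_points, num_developers, iteration_weeks=2):
    """Points per man-week for one iteration, from the accepted stories."""
    man_weeks = num_developers * iteration_weeks
    return sum(accepted_points) / man_weeks

def release_forecast(remaining_points, avg_velocity, num_developers,
                     iteration_weeks=2):
    """Roughly how many more iterations the remaining stories will need."""
    points_per_iteration = avg_velocity * num_developers * iteration_weeks
    return math.ceil(remaining_points / points_per_iteration)
```

For example, three developers accepting stories worth 5, 3, and 8 points in a two-week iteration gives 16 points over 6 man-weeks, roughly 2.7 points per man-week; at that pace, 40 remaining points is about three more iterations. The useful part isn't the arithmetic, it's that the input stabilizes enough after a few iterations to make the forecast worth believing.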

There is definitely a grain of truth to the observation that the velocity can become a sort of self-fulfilling prophecy for teams, and the stabilization is an artefact of that. Developers will base their story point estimations for new stories on their old stories, and they'll tend to work a bit harder if they feel like they're in danger of getting too few points at the demo. Conversely, there's probably a bit of relaxation that happens when a team has already completed its expected velocity before the end as well. The goal of the Project Management and the Coach is to help keep the story point estimates honest, and in my experience, none of this has been a significant problem.

Release Demo

At the end of the last iteration in the release, the demo represents a formal release, which we associate with a code publish, a code branch, and a formal install. At this point, if the project has been following the steps above with care, there are not usually any surprises, and the last few iterations can focus more on performance, reliability, testing, and documentation. Champagne is essential.

Final Thoughts

I love working this way. I think the most important thing the process does is to help a strong team keep focus. I love that it creates an easy sense of team belonging and unity, and I love that it gets rid of the "Us vs. Them" tension that often arises between Production and R&D at VFX facilities. The Stakeholders, despite writing no code, feel like the product is really theirs, and will intuitively act as cheerleaders for the end result. The regularity of the milestones and daily standups produces a sense of constant movement, which I think relieves a lot of stress on Developers. The process also highlights situations where the feature requests are too ambitious, and extra resources are needed - it prevents a big deadline miss that nobody sees coming.

One difficulty with the method is its inability to directly address large-scale architectural development needs, especially those which might take many weeks to implement. In my experience, it's always possible to tether these developments to incremental user-visible features, especially with good stub design. I find that the systems naturally develop strong top-down APIs and modular code, out of the need to always be working against a user story. Another serious problem with the method is that it can create a false sense of progress and accomplishment when going down the wrong road. On some level, any project will require a visionary with good knowledge of the problem domain to act as an overriding compass. These iterations and standups can easily produce the sensation of progress without indicating that a larger strategic or directional decision is entirely incorrect. I've only personally seen this happen once, but it's important to note.

I hope you find this useful!

2 comments:

  1. Nice writeup! I wasn't aware of the velocity metric until now.

  2. With regards to assigning and tracking time estimates at the task level, my view is not disagreement, but rather "it depends ... do whatever works best for your situation". First, with a stable, experienced team with expert-level skills that has "found its groove", uses short iterations and is not taking on anything particularly risky, tracking at that level probably is a waste of time. No disagreement there at all.

    Where tracking at a more granular level IS useful is when the team is just starting to work together or is working in unfamiliar or risky territory, and it is important to understand if they are progressing at the rate they predicted / guessed they would. A burndown chart is just one of several indicators that can help you understand whether you are likely on track, likely ahead of plan or likely in trouble. The sooner you realize you're in trouble, the more options you have for getting out of trouble. To quote Andrew Stanton: "Be wrong as fast as you can."

    Great write-up Chris. Thanks for sharing.

