Many sources of stress on projects come from forgetting what our roles are. Scrum championed a simple set of roles: the Development Team, the Scrum Master, and the Product Owner. The first problem is that many of the people affected by agile projects, many of them important, fall into none of these categories. The second problem comes from forgetting that the only roles with authority, the Scrum Master and the Product Owner, are the least important people on the whole project.
When creating something of value, the first people we care about are those who will get value from the product we create. I call these Consumers. These are the users and those who are affected by the work of the users. For a call center application, that would be the customer service representative as well as the caller who has to wait for the service rep to look up information in a slow system.
Without caring about the Consumers, the product has no value.
The second category is those whose work goes into creating the product. It may be people who create layouts and graphics, people who develop the application, people who examine the application to make sure it performs as needed, and people who train the consumers in using the application. I call these people the Creators.
Without the Creators, there will be no product.
But in order to create the product, someone usually has to put money on the line. I call this the Sponsor(s). The sponsor is the person who can really decide that, “yes, we will let five people work on this for a year”. If the Creators work for free, they are also the Sponsors. Otherwise, the sponsor is the person who signs their paycheck.
Without the Sponsor, the Creators will starve.
It’s worth noting that many Product Owners, Scrum Masters, Architects and Project Managers fall into none of these roles. The product owner is seldom an actual Consumer of the product, and in very few cases does he pay the salary of the Creators. Instead, he talks to the Consumers and helps the Creators understand what to create. In the same way, a good Scrum Master can ask good questions of the Creators that will help them avoid impediments and work better.
I call everyone who doesn’t Consume the product, Create the product or Pay for the product a Helper. When you facilitate a meeting, write a report or take the requirements from the Consumers to the Creator, you are helping. If you’re doing your job right.
The funny thing is this: Most people with authority in most organizations have Helper roles. But nothing is worse than a “Helper” you don’t need, but who insists that you do what they say.
I am a Helper, and this makes me nervous. If everybody is a Helper, nothing gets done. At best, I can make others better able to do their job. At worst, I distract from real progress.
As I’m working with smaller and more agile projects, I’m increasingly seeing the classic way that Scrum is executed as more of an impediment to agility than a helper.
This is especially the case when it comes to the classic Sprint Planning as described in the Scrum Guide:
“For example, two-week Sprints have four-hour Sprint Planning Meetings”
In the Sprint Planning Meeting part 1: “Development Team works to forecast the functionality that will be developed during the Sprint.”
In the Sprint Planning Meeting part 1: “Product Owner presents ordered Product Backlog items”
“Work planned for the first days of the Sprint by the Development Team is decomposed to units of one day or less by the end of this meeting”
I’ve seen many sprint planning meetings struggle for the same reasons again and again:
The user stories described by the product owner don’t fit the team’s way of working
The team dives into too many details on each user story to be able to break it down to the level required
The team blames the product owner for not providing enough details to the user stories
Most of the design discussions are considered to be over once the sprint starts
The forecasting/commitment to future velocity becomes a heated negotiation
If your project has experienced these sorts of Sprint planning meetings, I would expect that the project’s reaction was to add meetings (“backlog grooming”), documentation and checkpoints prior to starting a new sprint. These activities probably resulted in the product owner (or product owner team) spending less time with the development team.
Scrum’s Sprint planning assumes a situation where the product backlog is detailed for a considerable amount of time ahead, and where the ideal is for the product owner to spend their time continually adding more details to the product backlog.
The resulting projects have huge, rigid backlogs describing the details for several months into the future. The communication between the users and the developers is limited to the acceptance criteria that the product owner writes down before each sprint planning. The teams spend a considerable part of the sprint planning the rest of the sprint. Deviations from the sprint backlog are considered problematic.
I think this is misguided. I think this is why we left waterfall in the first place.
In order for Scrum to work better, we have to abandon the idea that the product owner comes to the planning with a perfect set of stories; we have to abandon the sprint backlog that details the work and design for several weeks; and we should probably be very careful about what estimates we ask for.
Instead I would suggest the following approach to planning a sprint:
The product owner and the team come into the room informed by their current understanding of the value the system can deliver
The product owner describes the current most important gaps in the value available to stakeholders
The team already knows their current trajectory, and together with the product owner they can describe “what’s the next meaningful thing we could demonstrate to close these gaps” as a script for the next demonstration
The team isn’t asked to estimate their work, but the product owner, project managers and others are free to make qualified guesses based on the team’s past performance
Keep it short and frequent!
Scrum was developed at a time when it had to match the perception of projects that did huge batches of planning and design. In response, it does smaller batches of planning and design. But “give a man an inch and he’ll take a yard”. The smaller batches lead to frustration over lack of details, the sprints become more and more plan-driven, and the connection between the users and the developers becomes more and more document-driven.
After having worked with Scrum for a number of years, I still witness sprint reviews where the team’s demonstration of the product is confusing and the value produced in the sprint is unclear. The demo may consist of just a bunch of different functions and screens without any meaning. Or maybe the team is just talking about what happens behind the curtains in the database. Or maybe the demo just doesn’t display the value that the team was supposed to give to the stakeholders. Most teams have okay demos most of the time, but every now and then, it’s a complete train wreck.
If you’ve experienced a sprint like this, you probably noticed some problems from the very beginning. The sprint planning may have been chaotic, and the work during the sprint may have felt purposeless. Chances are that the team spent most of its time discussing technical terms that didn’t make much sense to the product owner.
If your team learns to be clear about the sprint goal, you can avoid anemic demos, unstructured planning and undirected work. “But wait”, you may say, “we had a sprint goal and our demo still sucked”. You may think you have a clear sprint goal, but very few teams know what a sprint goal looks like. You may have one, but you might have gotten there by accident.
Here is what a sprint goal looks like: At the end of this period of time, we will demonstrate something to our stakeholders. What we will demonstrate will tell a compelling story that demonstrates real value. We can create the first draft of what the demo will look like as early as the sprint planning, and we can use this description both to verify our understanding and to plan our work.
Let’s say that the product owner says “the goal for the next sprint is to verify the payment solution.” What would a plan for this sprint look like?
At a sprint planning meeting, after the product owner has described the goal, the team plans its work. Then they come back to the product owner to verify that their plan matches the goal. This is what a good plan may sound like: “We will set it up so that all users on the web site have a random set of items in their shopping cart. Then we will go to the checkout page. Here, we will see that the shopping cart is displayed in a reasonable way. When we click the payment button, the user will be redirected to the test site for the payment provider. We’ll input credit card details and pay. The user will be redirected back to the web site, and the web site will display the success or failure of the payment. We’ll also show the order along with the payment status in an early mockup of the order list page.”
The product owner may agree to this sprint plan. If the team knows their technologies well, it is now easy to break this down into tasks, such as “create a shopping cart model”, “display shopping cart page”, “retrieve the payment status from the payment provider” and “store the payment status in the database”.
This demo script will guide the team both during construction and during the actual sprint review. During construction, a team member is now in a position to meet the sprint goal in the simplest way possible. By focusing on how the team will demonstrate value instead of what technical tasks may or may not be required (e.g. “construct a new order facade service” – whatever that may mean!), we can dramatically cut down on wasteful and convoluted design.
Agile methods emphasize “adapting to change over following a plan”. The same holds true for a demo script. The purpose of the script is not to create a perfect plan (which is of limited value), but to get a clear picture of what we need to create and how we will demonstrate that we have indeed delivered real value.
When a software architect gets a good idea or learns something new, he has a problem. The main job of the architect is to ensure that the right information is present inside the heads of the people who should build the application. Every new piece of information in the architect’s head represents a widening gap between his brain and those of the rest of the team.
The classical way of addressing this gap is for the architect to write huge documents or sets of wiki pages. When they realize that there’s not sufficient time set aside in the project schedule for the developers to read all this information, the architect may present the material to developers, who sit and nod their heads. But what did they really understand?
Instead of the read-listen-and-nod approach, I prefer an approach that I sometimes call “dragging the information through the heads of the team and looking at what comes out the other end.” I provide as little processed information as possible, and instead give the team a structured workshop to uncover and structure the information by asking me or business stakeholders. The outcome of the workshop should be some tangible result presented by the team. This result is always different from what I had in mind. Sometimes the difference shows a critical misunderstanding, which allows me to go more in depth in this area. Sometimes the difference represents a trivial misunderstanding or difference of opinion, and I have the difficult task of accepting a small disagreement without distracting the team. Sometimes the team has discovered something much smarter than my original idea.
I find it most useful to do workshops in small groups of three people per group. Each group should produce something that they can show to the whole team afterwards. Here are some examples of workshops that I run:
Divide into groups of three with the users/business and the developers represented in each group. Each group should discuss and fill in a template for the vision of the product being created: “For <some user> who <performs some business function>, the <name of system> is a <type of system> which <gives a capability related to the task>. Unlike <most interesting alternative>, our solution <has an important advantage>.” The groups get 10 minutes before a debrief with the whole team.
Each group then brainstorms a list of users, consumers and others affected by the system and writes these on sticky notes. This should yield about 20-30 roles. The whole team decides on a few interesting users, and the groups then write down for some of these: What characterizes the user, what tasks do they perform, and what do they value?
Based on the list of tasks that stakeholders perform, we create a sketch of a usage flow. I like to refine the documented usage flow with a small task group, which takes a few hours to prepare a description of the flow of interaction between the system and external actors.
Groups of three go through the usage flow to come up with Actors (users and systems), Domain concepts (classes) or Containers (deployment diagram) mentioned or implied in the usage flow and write these on sticky notes. After showing the Actors, Concepts or Containers to the whole group, each workgroup then organizes these on flipcharts to create a Context Model, a Domain Model and a Deployment Model.
Many of these workshops can also be run with distributed groups over video conference and screen sharing.
I like to collect all of these artifacts (vision, users, usage flow, context model, domain model and deployment model) in a PowerPoint presentation so it can easily be shown by the team to external stakeholders. Sometimes someone on the team feels that photographed flipcharts with sticky notes are too informal and decides to draw something in Visio or another fancy tool. This is just a plus.
By asking the team to produce something and present it, rather than explaining the architecture to the team, I ensure that the information is really in their heads, and that I’m not just fooling myself with my own understanding.
Do you ever feel it’s hard to make real progress in a sprint towards the business goal? Do you feel the feedback from an iteration picks on all the details you didn’t mean to cover this sprint? Do you feel like sprint planning meetings drag out? Then a Rainbow Sprint Plan may be for you.
Here is an example of a Rainbow Sprint plan:
A customer wants cheap vacations
The customer signs up for daily or weekly notifications of special flight offers
Periodically the System checks which customers should get notifications
The System checks for offers that match the customer’s travel preferences by looking up flights with the travel provider system
The System notifies customer of any matching offers via SMS
Variation: The System notifies customer of any matching offers via email
The customer accepts the offer via SMS
The System books the tickets on behalf of the customer
The System confirms the booking by sending an SMS to the customer
The customer can at any point see their active offers and accepted offers on the system website
The customer enjoys a cheap vacation!
What you can see from this plan:
Use case overview: The plan gives a high-level picture of the next release. We can see how the work we are doing fits together and how it ends up satisfying a customer need. This is a requirement technique that is basically Use Cases as per Alistair Cockburn’s “Writing Effective Use Cases“. I’ve been writing use cases at this level for the last three years and found it to be a good way to understand requirements. The trick of good use cases is to stay at the right level. In this example, each step is some interaction between the system and a user, or the system and another system. How this communication is handled is something I find best to leave for an individual sprint.
Iterative completion: Each step has a color code:
Black: The team hasn’t started looking into this
Red: We have made something, but it’s a dummy version just to show something
Orange: We have made something, but we expect lots of work remaining
Yellow: We’re almost done, we’re ready to receive feedback
Green: Development is complete, we have done reasonable verification and documentation
So the plan accepts that we revisit a feature. As we get closer to the next release, things will move further and further into the rainbow. But we can choose whether we want to get everything to orange first, or whether we will leave some things at red (or even black) while bringing other steps all the way to green.
Demonstration script: When we get to the end of the sprint and demonstrate what we’ve created, this plan gives a pretty good idea of what the demo will look like: We will sign up the customer on a dummy signup page (red), we will register some flights in another dummy page (red), trigger the actual scheduling code (orange), then we will see that an SMS is received on an actual phone (yellow). Then we will simulate an SMS response (orange), see that the system sent some communication to a dummy system (red), and send “ok” back as an SMS to the customer (orange). This will focus the team around a shared vision of what to do in this sprint.
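The color progression above can be modeled as a tiny data structure the team keeps next to the plan. A minimal sketch, in Java; the class and method names are mine, while the statuses come from the list above:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RainbowPlan {
    // Completion levels, from untouched to done
    enum Status { BLACK, RED, ORANGE, YELLOW, GREEN }

    private final Map<String, Status> steps = new LinkedHashMap<>();

    void setStatus(String step, Status status) {
        steps.put(step, status);
    }

    // A step can appear in the demo once there is at least a dummy version to show
    boolean isDemoable(String step) {
        Status s = steps.getOrDefault(step, Status.BLACK);
        return s.ordinal() >= Status.RED.ordinal();
    }

    // How many steps have reached at least the given level
    long countAtLeast(Status threshold) {
        return steps.values().stream()
                .filter(s -> s.ordinal() >= threshold.ordinal())
                .count();
    }

    public static void main(String[] args) {
        RainbowPlan plan = new RainbowPlan();
        plan.setStatus("Customer signs up for notifications", Status.RED);
        plan.setStatus("System checks for matching offers", Status.ORANGE);
        plan.setStatus("System notifies customer via SMS", Status.YELLOW);
        plan.setStatus("System books the tickets", Status.BLACK);
        System.out.println("Steps we can show in the demo: " + plan.countAtLeast(Status.RED));
    }
}
```

The point of the structure is that a step is revisited rather than "done or not done": the same step moves through the rainbow across several sprints.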
I have been thinking in terms of a Rainbow Plan in my last projects, but I’ve never used the term before. I think the plan addresses three of the most common problems that I see in Scrum implementations:
The team doesn’t see where it’s going, because user stories are too fine grained to get the big picture. User story mapping and use cases address this, and rainbow plans put it into a sprint-context
The team dives into technical details during sprint planning. With rainbow plans, the sprint plan becomes the demo plan which coincides with the requirements.
The project has a purely incremental approach, where each feature should be completed in a single sprint. This means that it’s hard to keep the big picture and the product owner is forced to look for even small bugs in everything that’s done in a sprint. With rainbow plans, the team agrees on the completeness of each feature.
May you always become more goal oriented and productive in your sprints.
I’m working on this idea, and I don’t know if it appeals to you guys. I’d like your input on whether this is something to explore further.
Here’s the deal: I’ve encountered teams who, when working with SOA technologies, have been dragged into the mud by the sheer complexity of their tools. I’ve only seen this in Java, but I’ve heard from some C# developers that they recognize the phenomenon there as well. I’d like to explore an alternative approach.
This approach requires more hard work than adding a WSDL (Web Services Description Language; hocus pocus) file to your project and automatically generating stuff. But it comes with added understanding and increased testability. In the end, I’ve found that it has made me able to complete my tasks quicker, despite the extra manual labor.
The purpose of this blog post (and, if you like it, its expansions) is to explore a more bare-bones approach to SOA in general and to web services specifically. I’m illustrating these principles by using a concrete example: Let users be notified when their currency drops below a threshold relative to the US dollar. In order to make the service technologically interesting, I will be using the IP address of the subscriber to determine their currency.
Step 1: Create your active services by mocking external interactions
Mocking the activity of your own services can help you construct the interfaces that define your interaction with external services.
Spoiler: I’ve recently started using random test data generation for my tests with great effect.
The Publisher has a number of Services that it uses. Let us focus on one service for now: The GeoLocationService.
Step 2: Create a test and a stub for each service – starting with GeoLocationService
The top level test shows what we need from each external service. Informed by this and reading (yeah!) the WSDL for a service, we can test drive a stub for a service. In this example, we actually run the test using HTTP by starting Jetty embedded inside the test.
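A minimal sketch of such a stub and its test, using the JDK’s built-in HttpServer in place of Jetty so the example is self-contained; the endpoint path, the element names and the canned country are made up for illustration:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class GeoLocationStubTest {

    // Canned response; in the real stub this would be shaped by the service's WSDL/XSD
    static final String RESPONSE =
        "<geoLocationResponse><countryName>Norway</countryName></geoLocationResponse>";

    // Starts an HTTP server on an ephemeral port that answers with the canned XML
    static HttpServer startStub() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/GeoLocationService", exchange -> {
            byte[] body = RESPONSE.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "text/xml");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        return server;
    }

    // A bare-bones client call: open the URL and read the whole response body
    static String callService(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        try (Scanner scanner = new Scanner(conn.getInputStream(), "UTF-8")) {
            return scanner.useDelimiter("\\A").next();
        }
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = startStub();
        try {
            String url = "http://localhost:" + server.getAddress().getPort()
                    + "/GeoLocationService";
            System.out.println(callService(url));
        } finally {
            server.stop(0);
        }
    }
}
```

Because the test owns both ends of the HTTP conversation, it documents exactly what the business logic needs from the external service and nothing more.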
Validate and create the XML payload
This is the first “bare-knuckled” bit. Here, I create the XML payload without using a framework. (The groovy “$”-syntax is courtesy of the JOOX library, a thin wrapper on top of the built-in JAXP classes.)
I add the XSD (more hocus pocus) for the actual service to the project and code to validate the message. Then I start building the XML payload by following the validation errors.
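A sketch of building such a payload with just the built-in JAXP classes rather than JOOX; the namespace and element names here are invented for illustration, not the real service contract. In the real code, the next step would be to run this document through a javax.xml.validation.Validator built from the service’s XSD and let the validation errors drive the construction:

```java
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class GeoLocationRequestBuilder {

    // Hypothetical namespace; the real one comes from the service's XSD
    static final String NS = "http://example.com/geolocation";

    // Builds the request document and serializes it to a string
    static String buildRequest(String ipAddress) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element root = doc.createElementNS(NS, "geo:getGeoLocationRequest");
        doc.appendChild(root);

        Element ip = doc.createElementNS(NS, "geo:ipAddress");
        ip.setTextContent(ipAddress);
        root.appendChild(ip);

        StringWriter writer = new StringWriter();
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(writer));
        return writer.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(buildRequest("192.0.2.1"));
    }
}
```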
In this example, I get a little help (and a little pain) from the JOOX library for XML manipulation in Java. As XML libraries for Java are insane, I’m giving up on checked exceptions, too.
Spoiler: I’m generally very unhappy with the handling of namespaces, validation, XPath and checked exceptions in all XML libraries that I’ve found so far. So I’m thinking about creating my own.
Of course, you can use the same approach with classes that are automatically generated from the XSD, but I’m not convinced that it really would help much.
Stream the XML over HTTP
Java’s built-in HttpURLConnection is a clunky but serviceable way to get the XML to the server (as long as you’re not doing advanced HTTP authentication).
Spoiler: This code should be expanded with logging and error handling and the validation should be moved into a decorator. By taking control of the HTTP handling, we can solve most of what people buy an ESB to solve.
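A sketch of that HTTP handling. To keep the example runnable without a real endpoint, the main method posts against a local stand-in server; the postXml method is the part that would talk to the actual service:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class WsClient {

    // POSTs an XML payload and returns the response body
    static String postXml(String endpoint, String payload) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        if (conn.getResponseCode() != 200) {
            throw new RuntimeException("Unexpected HTTP status " + conn.getResponseCode());
        }
        try (Scanner scanner = new Scanner(conn.getInputStream(), "UTF-8")) {
            return scanner.useDelimiter("\\A").next();
        }
    }

    public static void main(String[] args) throws Exception {
        // Local stand-in for the real service, just to make the sketch self-contained
        HttpServer stub = HttpServer.create(new InetSocketAddress(0), 0);
        stub.createContext("/", exchange -> {
            byte[] body = "<ok/>".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        stub.start();
        try {
            String response = postXml(
                    "http://localhost:" + stub.getAddress().getPort() + "/",
                    "<request/>");
            System.out.println(response); // prints <ok/>
        } finally {
            stub.stop(0);
        }
    }
}
```

Logging, error handling and validation would wrap around postXml, for example as decorators, which is most of what people buy an ESB to get.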
Create the stub and parse the XML
The stub uses XPath to find the location in the request. It generates the response in much the same way as the ws client generated the request (not shown).
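A sketch of that stub-side parsing with the built-in javax.xml.xpath classes; the element name is hypothetical, and local-name() is used to sidestep setting up a NamespaceContext:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class GeoLocationStubParser {

    // Extracts the IP address from an incoming request document
    static String findIpAddress(String requestXml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(requestXml)));
        XPath xpath = XPathFactory.newInstance().newXPath();
        // Matches the element regardless of its namespace prefix
        return xpath.evaluate("//*[local-name()='ipAddress']", doc);
    }

    public static void main(String[] args) throws Exception {
        String request = "<getGeoLocationRequest>"
                + "<ipAddress>192.0.2.1</ipAddress>"
                + "</getGeoLocationRequest>";
        System.out.println(findIpAddress(request)); // prints 192.0.2.1
    }
}
```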
Spoiler: The stubs can be expanded to have a web page that lets me test my system without real integration to any external service.
Validate and parse the response
The ws client can now validate that the response from the stub complies with the XSD and parse the response. Again, this is done using XPath. I’m not showing the code, as it’s just more of the same.
The real thing!
The code now verifies that the XML payload conforms to the XSD. This means that the ws client should now be usable with the real thing, which we can check with a separate test.
Yay! It works! Actually, it failed the first time I tried it, as I didn’t have the correct country name for the IP address that I tested with.
This sort of point-to-point integration test is slower and less robust than my other unit tests. However, I don’t make too big a deal out of that fact. I filter the test from my Infinitest config and don’t worry much beyond that.
Fleshing out all the services
The SubscriptionRepository, CurrencyService and EmailService need to be fleshed out in the same way as the GeolocationService. However, since we know that we only need very specific interaction with each of these services, we don’t need to worry about everything that could possibly be sent or received as part of the SOAP services. As long as we can do the job that the business logic (CurrencyPublisher) needs, we’re good to go!
Demonstration and value chain testing
If we create a web UI for the stubs, we can now demonstrate the whole value chain of this service to our customers. In my SOA projects, some of the services we depend on will only come online late in the project. In this case, we can use our stubs to show that our service works.
Spoiler: As I get tired of verifying that the manual value chain test works, I may end up creating a test that uses WebDriver to set up the stubs and verify that the test ran okay, just like I would in the manual test.
Taking the gloves off when fighting in an SOA arena
In this article, I’ve shown and hinted at more than half a dozen techniques for working with tests, HTTP, XML and validation that don’t involve frameworks, ESBs or code generation. The approach gives the programmer 100% control over their place in the SOA ecosystem. Each of these areas has a lot more depth to explore. Let me know if you’d like to see it explored.
Oh, and I’d also like ideas for better web services to use, as the Geolocated currency email is pretty hokey.
An “Agile” project is one that actively seeks to incorporate changes as the project progresses, rather than assuming that the plans from the beginning of the project will work for the whole project duration. Not all organizations want to adopt “agile” as their project metaphor. And some organizations that do adopt methods such as Scrum do it without becoming as “agile” as Scrum promises. Instead of accusing these organizations of “agile heresy”, I would like to offer some useful experience from Scrum, even if the word “agile” doesn’t appeal to you.
Track your progress with small, well-defined milestones: The Product backlog of Scrum is essentially an ordered list of work items. Good backlog elements are either completed or not completed. Partially completed milestones are not counted. A more Agile project will let the product backlog change throughout the project, while a more rigid project may set down the whole backlog at the beginning of the project.
Using a product backlog that is complete ‘up front’ makes your project less agile, but using a product backlog lets you track progress better than most traditional project plans.
Demonstrate progress to stakeholders: The earlier a project gets feedback on the work it has completed, the better it is able to anticipate and deal with misunderstandings. The expectations of stakeholders are often misunderstood until everyone can actually see what is being constructed. Scrum requires Sprint reviews at regular intervals of a couple of weeks to demonstrate progress. A project with less communication with the stakeholders may have fewer and less regular reviews, but every review you do have will reduce your risk.
Communicate daily within the team: Just as there will be misunderstanding between the project and the stakeholders about the proper outcomes, there will be misunderstandings within the team about the proper strategies to complete the project. Scrum requires a daily standup meeting to enforce communication within the team. Other teams may be geographically distributed, dislike the ritual of the standing meeting or work on non-overlapping tasks and decide that they need less communication.
Regardless of the form or frequency of communication within the team, the project should evaluate whether they are making repeated mistakes because of lack of shared knowledge and awareness. And whether their rituals waste or preserve the time of the team.
Make decision making explicit: In Scrum, the decisions about what the team should work on rest with a single individual: the Product Owner. Many organizations cannot invest the authority that this role implies in a single individual, or cannot find an individual with both the business understanding and the technical knowledge to make confident evaluations.
Regardless of whether the authority rests with a single individual, a project needs to make decisions about what it should create and in what order. Identifying who needs to be involved in these decisions will make the project run more smoothly.
You don’t need to “drink the Agile Kool-Aid” to benefit from the experience of Scrum over the last 20 years. And many projects that profess to be Agile may just be using the rituals from Scrum within an old mindset. You will not get the same benefits from Scrum as a truly agile team, but that doesn’t mean it’s not right for you.
I hate giving promises for things I can’t control. I can promise that I will attend a party or that I will set aside time to help you with your problem. I cannot promise that the party will be fun or that your problem will be solved. Giving promises on effort is honest, giving promises on outcomes is dishonest.
A team that commits to an estimate is promising something they cannot control. A team that is blamed for giving an estimate that is too low can easily avoid that particular mistake next time around. And given the law that work expands to fill available time, the estimate will never be too high. The result is cost spiraling up.
What if we assumed that estimates will never be particularly reliable? And that forcing a team to commit to an estimate is unreasonable behavior? How would we act in a world where that’s true?
Let’s imagine we live in a world where the product owner cannot ask for an estimate. What can the product owner do instead?
The product owner can say “stop working on this story if you’ve spent more than 40 hours. Or if you think you will end up spending more than 40 hours.”
Such a limit can be viewed as a budget.
The product owner can make a bet on how many user stories the team will complete by looking at what they have done before. If the consequences of betting wrong are severe, the product owner can be cautious about the number of user stories. (Bonus question: Is the team in a better or worse position to know the consequences of betting wrong?)
Such a bet can be viewed as a forecast.
The team, on the other hand can commit to working according to the agreed-upon rules. They can commit to do things in the order set by the product owner. They can commit to doing the best job they can within the time budget the product owner has allocated.
As a user, I can add social security number, so patient logs have social security numbers
As a developer, how would you react if you were given this user story? Would you throw it back in the face of the product owner, or would you try and understand it?
How about the following dialogue?
Developer: “What are we hoping to achieve with this story?”
Customer: “We hope that the patient logs will have social security numbers. Duh.”
Developer: “Sorry, I was unclear: What problem do you experience now that we don’t have the social security number?”
Customer: “Oh! We need the social security number when we bill the customer.”
Developer: “I see. So what happens now?”
Customer: “Well, since the patient has left the hospital, the billing department has to phone or send postal mail to get the information. That’s a lot of work.”
Developer: “What places in the application can we add this information?”
Customer: “We’ll add it to the patient journal.”
Developer: “Who updates the patient journal, and when?”
Customer: “Whoops. The doctor updates the journal after the patient has left. We’d better add it when the secretary checks the patient out. So that would be in the appointment system.”
Developer: “What other places could we enter this information?”
Customer: “Well, we could have a field for the social security number when the patient first requests the appointment”
Developer: “So, we have considered two options: The appointment system at checkout or the appointment system when the appointment is booked. What other ways could we get the same information?”
Customer: “We could collect it from the governmental web service, I suppose.”
Developer: “What would you like us to try first?”
Customer: “Since the patient normally fills in the appointment request themselves, let’s try that out first.”
Developer: “Let’s see if I get you correctly….”
In order to have the required information to send an invoice, as the billing department, I want the patient to enter their social security number when they request an appointment
What you’ve just witnessed is a series of questions that can be organized like this:
Goal: What are we hoping to achieve? How will the user story change the world?
Reality: How are things working now?
Options: What else could we do instead? In addition?
Will: What will we do? What is our plan of action?
The GROW mnemonic is a tool from professional coaching. That is: Talking to people about the problems and ambitions in their professional and private lives. It helps guide a novice coach through a coaching conversation and focus on the problems of the person being coached. Or, in this case: On the business value we want to achieve.
At the end of the day, as a developer you will be judged not on whether you followed the orders you were given, but on whether you understood and delivered what was really needed. By asking your users, your product owner or your domain expert what they really, really want (the Goal), it will be easier for you to achieve it. And if you don’t have access to the relevant people, these questions can still help guide your thinking about the requirements.
You can read more about the GROW model at What-is-coaching.com and many other websites devoted to helping people get to the best of their ability.
A big thank you goes to Antti Kirjavainen for suggesting the patient journal example and for coaching me in the process of writing this article.