Scrum smells, pt. 6: Unknowns and estimateshttps://www.mobileit.cz/Blog/Pages/scrum-smells-6.aspxScrum smells, pt. 6: Unknowns and estimates<p>Today, I'd like to share some of the ideas and estimation approaches that helped us in past projects. The tricky part in long and short-term planning is how to predict the unknowns that will influence us in the future. As I wrote earlier, there are several things that usually come up and may not be visible in the product backlog when you are planning something.</p><h2>The unknowns</h2><p>In projects related to mobile app development, we usually encounter the following unplanned activities:</p><ul><li>Defect fixing</li><li>Backlog refinement activities</li><li>Collaboration on UI/UX design</li><li>Refactoring</li><li>New user stories</li></ul><p>Defect fixing is quite obvious and we have spoken about it already. You can't usually foresee what bugs will appear.</p><p>Backlog refinement activities include understanding the backlog items, analyzing the underlying technical and usability aspects, and making the backlog items meet the definition of ready. </p><p>The UI/UX design process is not just a simple decision about colors and shapes. The controls used and the screen layouts and flows usually have a large impact on how the application needs to be built, and we witness over and over again that a seemingly small aspect of the design idea can have a vast impact on the complexity of the actual implementation. So in order to keep the cost/benefit ratio reasonable, we have learned that it is necessary that the developers collaborate closely with the designers in order to prevent any unpleasant surprises. You can read more about this topic in <a href="/Blog/Pages/design-system-1.aspx">this blog series</a>. </p><p>Refactoring existing code and infrastructure setup is a must if we want to develop a product that will be sustainable for longer than a few weeks. It also has the potential of making the dev team more effective.</p><p>New user stories are interesting. You invest a lot of time into the backlog refinement and it just looks perfect, everything is thought through and sorted. Fast forward two months into the future and you discover (with new knowledge from those past two months) that you need to simplify some stories while others have become obsolete, but more importantly, you realize that you need to introduce completely new features that are vital for the app's meaningfulness. You couldn’t see this before you had the actual chance to play around with the features from the past couple of months and gather feedback from users, analyze the usage stats or see the economic results.</p><h2>Estimates</h2><p>Having most of the stuff in the backlog estimated for its complexity (size) is vital for any planning. But as we have all probably learned the hard way, estimates are almost always anything but precise. We, therefore, did not find any value in trying to produce exact estimate values (like 13.5 man-days of work), but rather use the approach of relative estimation while using pseudo-Fibonacci numbers: 0, 1, 2, 3, 5, 8, 13, 20, 40, 100.</p><p>It is important to understand that these are dimensionless numbers. They are not hours, man-days, or anything similar. It is an abstract number used solely to set a benchmark and compare other items against each other.</p><p>So what does that mean? At the beginning of the project we pick an item in the backlog that seems to be of a common size and appears neither small nor big, a number in the 5-8 range. 
That will be our benchmark and all other stories are then compared to it. How much more difficult (or easy) is this or that item compared to our benchmark?</p><p>Over time, we usually found out that the initial benchmarks and estimates were completely off. But that is OK, it's a learning process. It is important to review the estimates after the actual development and learn from them. Was that user story really an 8? Were these two items as similar as we initially thought? If not, how would we estimate them now and why? That also means that from time to time it's necessary to revisit all the already estimated items in the product backlog. </p><p>It usually is not necessary to go into deep details with stuff that is several sprints ahead. As the team gains experience with the product domain, the developers' gut feelings get more relevant and precise. That means useful estimates can be done quite swiftly after the team grasps the particular feature's idea. Sure, some stuff in the backlog will be somewhat underestimated, some overestimated. But with long-term planning and predictions it usually suffices because statistically, the average gets quite reliable.</p><p>The outcome of all this is a backlog where every item is labelled with its size. It becomes clear which items are meaningfully defined (the development team has an idea about the technical solution, meaning that the size is reasonable) and which items are completely vague or for which the team members lack key business or technical information. Those are usually the items with estimate labels of “40”, “100”, or even “??”.</p><p>If such inestimable stories are buried in the lower parts of the backlog and the product owner does not even plan to bring them to the market for a long time from now, that's fine. But do any of these items have a high value for the product and do we want to bring them to the market soon? If that's the case, it sends a clear message to the product owner: back to the drawing board, let's completely re-think and simplify such user stories and expect that some team capacity may be needed for technical research. </p><p>So after all this hassle, the upper parts of the backlog will have numbers that you can do math with.</p><h2>Quantifying unexpected work</h2><p>The last piece of the puzzle requiring predictions and plans is to quantify how much of the unexpected stuff usually happens. Now, this might seem like a catch-22 situation - how can we predict the amount of something that we can't predict by definition? At the beginning of the development, this is indeed impossible to solve. But as always, agile development is empirically oriented - over time we can find ways to get an idea about what is ahead based on past experience. As always, I am not preaching any universal truth. I am just sharing an experience that my colleagues and I have gathered over time and we find useful. So how do we do it? </p><p>It's vital to visualize any team's work in the product and sprint backlog as transparently as possible. So it's also good to include all the stuff that is not user stories, but that the team knowingly needs to put some effort into (like the known regressions, research, refactorings, etc.), in the backlog too. If it's possible to estimate the size upfront, let's do it. If it's not, either cap the maximum capacity to be invested or re-visit and size the item after it's been done. This is necessary in order to gather statistics. </p><p>Just to be clear - let's not mistake such unexpected work for scope creep. 
I assume that we don't suffer from excessive scope creep; the unexpected work is genuinely valuable and necessary work that was just not discovered upfront.</p><p>So now we have a reasonably transparent backlog, containing the originally planned stories and also the on-the-go incoming items. We have most of it labelled with sizes. In the next part of this series, we'll try to make some statistics and conclusions on top of all this (a small illustrative sketch follows below). </p>#scrum;#agile;#project-management;#release-management
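<p>As a purely illustrative sketch (all numbers below are invented, not real project data), this is the kind of calculation that can be used to quantify the share of unplanned work from past sprints and reserve capacity for it when planning ahead:</p>
<pre><code># Hypothetical sprint records: story points of planned vs. unplanned work.
sprints = [
    {"planned": 34, "unplanned": 8},   # bug fixes, research, refactoring, ...
    {"planned": 30, "unplanned": 12},
    {"planned": 36, "unplanned": 9},
]

total_planned = sum(s["planned"] for s in sprints)
total_unplanned = sum(s["unplanned"] for s in sprints)

# Average share of capacity consumed by work that was not in the original plan.
unplanned_ratio = total_unplanned / (total_planned + total_unplanned)

# When predicting a future sprint, reserve that share of the capacity.
sprint_capacity = 40
reserved = round(sprint_capacity * unplanned_ratio)
print(f"Reserve about {reserved} of {sprint_capacity} points for unplanned work "
      f"({unplanned_ratio:.0%} on average).")
</code></pre>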
Scrum smells, pt. 3: Panic-driven bug managementhttps://www.mobileit.cz/Blog/Pages/scrum-smells-3.aspxScrum smells, pt. 3: Panic-driven bug management<p>Bugs create a special atmosphere. They often cause a lot of unrest or outright panic. But does it have to be that way?</p><p>Nearly every developer out there has come across the following scenario: The development team is working on the sprint backlog when suddenly the users report an incident. The marketing manager comes in and puts pressure on the development team or their product owner to urgently fix the bug. The team feels guilty so some of the developers stop working on whatever they've been doing and focus on fixing the bug. They eventually succeed, and now the testers shift their focus as well to verify the fix as soon as possible, so the developers can release a hotfix. The hotfix is deployed, sprint passes by, and the originally planned sprint backlog is only half-done. Everyone is stressed out.</p><p>A similar situation is often created by a product owner: He finds a defect in functionality, created two sprints ago, but demands an immediate repair.</p><p>Is this all really necessary? Sure, some issues have a great impact on the product or service, and then this approach might be justifiable, but rather often this kind of urgent defect whacking is a process that is more emotional than rational. So how to treat bugs systematically?</p><h2>What are bugs and bug fixes?</h2><p>A defect, incident, or simply a “bug” is effectively any deviation of the existing product from its backlog. Any behavior that is different from the one agreed upon between the dev team and a product owner can be called a bug. Bugs aren’t only defects in the conventional meaning (e.g., crashes or computational errors); a technically correct behavior in conflict with a boundary set by a user story can also be considered a defect.</p><p>Some bugs are related to the product increment being implemented in the current sprint. Other bugs are found retrospectively: They are related to the user stories developed in past sprints. These fall into two categories:</p><ol><li>Regressions: When a subsequent development broke a formerly functional part of the code. </li><li>Overlooked bugs: They were always there, but no one had noticed.</li></ol><p>Conversely, a bug fix is something that adds value to the current product by lowering the above-mentioned deviation. It requires a certain amount of effort and it raises the value of the present product. At the end of the day, a bug is just another unit of work, and we can evaluate its cost/benefit ratio. It is the same as any other backlog item.</p><h2>A bit of psychology</h2><p>Scrum teams and stakeholders tend to approach both defect categories differently. They also treat them differently than the “regular” backlog items.</p><p>In my experience, there are two important psychological factors influencing the irrational treatment of defects.</p><p>First of all, there's often a feeling of guilt when a developer is confronted with a bug. The natural response of most people is to try to fix the error as soon as possible so that they feel they are doing a good job. Developers naturally want to get rid of such debts.</p><p>Another factor is how people perceive gains and losses. People are evolutionarily averse to losses because the ability to obtain and preserve resources has always been key to survival. 
There have been studies concluding that on average, people perceive a loss four times as intensely compared to a gain of the same objective value: If you lose 5 dollars, it is four times as painful compared to the gratification of finding 5 dollars lying on the ground. You need to find 20 dollars to have a comparable intensity of feeling as when you lose the mentioned 5. The bug/defect/incident is perceived as a loss for the team's product, especially if it's a regression. A small bug can therefore be perceived as much more important than a newly delivered valuable feature.</p><p>Don't get me wrong—I am not saying that bugs are not worth fixing or that they don't require any attention. That is obviously not true. One of the key principles of scrum is to deliver a functional, <em>potentially releasable</em> product increment in every sprint. That means that a high development quality is fundamental and teams should always aim at developing a debt-free product. Nonetheless, bugs will always have to be dealt with.</p><h2>Bugs caused by newly added code</h2><p>When working on a sprint backlog, the team needs to set up a system to validate the increment they’ve just developed. The goal is to make sure that at the end of the sprint, a feature is free of debt, and can be potentially released. Our experience shows that during sprint backlog development, the team should focus on removing any bugs related to the newly developed features as quickly as possible in order to keep the feedback/verification loop as short as possible. This approach maximizes the probability that a newly developed user story is done by the end of the sprint and that it is potentially releasable.</p><p>Sometimes there are just too many bugs and it becomes clear that not everything planned in the sprint backlog can be realistically achieved. The daily scrum is the opportunity to point this out. The development team and the product owner together can then concentrate their efforts on a smaller number of in-progress user stories (and related bugs). It is always better to make one user story done by the end of the sprint than to have ten stories halfway finished. Of course all bugs should be recorded transparently in the backlog.</p><p>Remember, a user story is an explanation of the user's need that the product tackles, together with a general boundary within which the developed solution must lie. A common pitfall is that the product owner decides on the exact way of developing a user story (e.g., defines the exact UI or technical workflow) and insists on it, even though it is just her personal preference. This approach not only reduces the development team's options to come up with the most effective solution but also inevitably increases the probability of a deviation, thus increasing the number of bugs as well.</p><h2>Regressions and bugs related to past development</h2><p>I think it's important to treat bugs (or rather their fixes) introduced before the current sprint as regular backlog items and prioritize them accordingly. Whenever an incident or regression is discovered, it must go into the backlog and decisions need to be made: What will be the benefit of that particular bug fix compared to other backlog items we can work on? Has the bug been introduced just now or have the users already lived with it for some time and we just did not know it? Do we know the root cause and are we able to estimate the cost needed to fix it? 
If not, how much effort is worth putting into that particular bug fix, so that the cost/benefit ratio is still on par with other items on the top of the backlog?</p><p>By following this approach, other backlog items will often be prioritized over the bug fix, which is perfectly fine. Or the impact of the bug might be so negligible that it's not worth keeping it in the backlog at all. One of the main scrum principles is to always invest the team's capacity in stuff that has the best return on invested time/costs. When the complexity of a fix is unknown, we have good experience with putting a limit on the invested capacity. For instance, we said that at the present moment, this particular bug fix was worth investing 5 story points in for us. If the developers managed to fix the issue, great. If not, it was abandoned and re-prioritized with this new knowledge. By doing this, we mitigated situations where developers dwell on a single bug for weeks without being able to fix it.</p><p>I think keeping a separate bug log greatly hinders transparency, and it’s a sign that a product owner gives up on making decisions that really matter and refuses to admit the reality.</p><h2>Final words</h2><p>I believe all backlog items should be approached equally. A bug fix brings value in a similar way as new functionality does. By keeping bug fixes and new features in one common backlog and constantly questioning their cost/benefit ratio, we can keep the team going forward, and ensure that critical bugs don't fall through.</p>#scrum;#agile;#project-management;#release-management
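<p>As a small, hypothetical sketch of that cost/benefit thinking (item names and numbers are invented), features and bug fixes can sit in one common backlog and be ranked by the value they return per unit of effort, with a capacity cap noted where the root cause is unknown:</p>
<pre><code># Hypothetical common backlog: features and bug fixes side by side.
# "value" and "cost" are relative, dimensionless numbers agreed on by the team.
backlog = [
    {"name": "New onboarding flow", "value": 20, "cost": 13},
    {"name": "Fix: crash on login (regression)", "value": 13, "cost": 3},
    {"name": "Fix: typo on settings screen", "value": 1, "cost": 1},
    {"name": "Export data to CSV", "value": 8, "cost": 8},
    # Root cause unknown, so the team caps the effort it is willing to invest.
    {"name": "Fix: rare sync failure", "value": 5, "cost": 5, "capped": True},
]

# Order everything by the value returned per unit of invested effort.
ranked = sorted(backlog, key=lambda item: item["value"] / item["cost"], reverse=True)

for item in ranked:
    note = " (effort capped at the estimate)" if item.get("capped") else ""
    ratio = item["value"] / item["cost"]
    print(f'{ratio:.1f}  {item["name"]}{note}')
</code></pre>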
Scrum smells, pt. 4: Dreadful planninghttps://www.mobileit.cz/Blog/Pages/scrum-smells-4.aspxScrum smells, pt. 4: Dreadful planning<p>In a few of our past projects, I encountered a situation that might sound familiar to you: Developers are getting towards the end of a sprint. The product owner seems to have sorted the product backlog a bit for the sprint planning meeting - he changed the backlog order somewhat and pulled some items towards the top because he currently believes they should be added to the product rather soon. He added some new things as well because the stakeholders demand them. In the meantime, the team works on the development of the sprint backlog. The sprint ends, the team does the end-of-sprint ceremonies and planning we go.</p><p>At the planning meeting, the team sits down to what seems to be a groomed backlog. They go through the top backlog items with the product owner, who explains what he has prioritized. The team members try to grasp the idea and technical implication of the backlog items and try their best to plan them for development. But they find out that one particular story is very complex and can't be fitted within a sprint, so they negotiate with the product owner about how to meaningfully break it down into several smaller pieces. Another item has a technical dependency on something that has not been done yet. The third item has a functional dependency - meaning it won't work meaningfully unless a different story gets developed. The fourth item requires a technology that the developers haven’t had enough experience with. Therefore, they are unable to even remotely tell how complex it is. And so on it goes - the team members dig through the “prepared” backlog, try to wrap their heads around it, and finally find out that they can't work on every other story for some reason.</p><p>One possible outcome is that such items are skipped, and only the items that the team feels comfortable with are planned into the sprint backlog. Another outcome is that they will want to please the product owner and “try” to do the stuff somehow. In any case, the planning meeting will take hours and will be a very painful experience.</p><p>In both cases, the reason is poor planning. If there ever was a planned approach by the product owner towards the backlog prior to the planning meeting, it was naive, and now it either gets changed vastly, or it gets worked on with many unknowns - making the outcome of the sprint a gamble.</p><h2>What went wrong?</h2><p>One might think all the planning occurs exclusively at the planning meeting. Why else would it be called a planning meeting? Well, that is only half true. The planning meeting serves the purpose for the team to agree on a realistic sprint goal, and discuss with the product owner what can or cannot be achieved within the upcoming sprint, and create a plan of attack. Team members pull the items from the top of the backlog into the sprint backlog in a way that gets to that goal in the best possible way. It is a ceremony that actually starts the sprint, so the team sets off developing the stuff right away.</p><p>In order to create a realistic sprint plan that delivers a potentially releasable product increment with a reasonable amount of certainty, there has to be enough knowledge and/or experience with what you are planning. The opposite approach is called gambling.</p><h2>Definition of ready</h2><p>It is clear that the backlog items need to fulfill some criteria before the planning meeting occurs. 
These criteria are commonly referred to as a “definition of ready” (DoR). Basically, it is a set of requirements set by the development team, which each backlog item needs to meet if the product owner expects it to be developed in upcoming sprints. In other words, the goal of the DoR is to make sure a backlog item is immediately actionable, that the developers can start developing it, and that it can realistically be finished within a sprint.</p><p>We had a good experience with creating DoR with our teams. However, we also found that this looks much easier at first glance than it is in practice. But I believe it is definitely worth the effort, as it will make predictions and overall workflow so much smoother.</p><p>DoR is a simple set of rules which must be met before anyone from the scrum team can say “we put this one into the sprint backlog”. They may be dependent on the particular product or project, and they can be both technical and business-sided in nature, but I believe there are several universal aspects to them as well. Here are some of our typical criteria for determining if a backlog item satisfies the DoR:</p><ul><li>Item has no technical or business dependencies.</li><li>Everyone from the team understands the item's meaning and purpose completely.</li><li>We have some idea about its complexity.</li><li>It has a very good cost/benefit ratio.</li><li>It is doable within one sprint.</li></ul><p>There are usually more factors (such as a well-written story definition, etc.), but I picked the ones that made us sweat the most to get them right.</p><h2>Putting backlog refinement into practice</h2><p>This is a continuous and never-ending activity, which in my opinion has the sole goal of getting the DoR fulfilled. As usual, the goal is simple to explain, but in practice not easy to achieve. Immature teams usually see refinement activities as a waste of time and a distraction from the “real work”. Nonetheless, our experience has proven many times that if we don't invest sufficient time into the refinement upfront, it will cost us dearly in time not much later in the development.</p><p>So, during a sprint, preparing the ground for future sprints is a must. The development team must take this into account when planning the sprint backlog. Refinement activities will usually occupy a non-negligible portion of the team's capacity.</p><p>The product owner and the team should aim at having at least a sprint or two's worth of stuff in the backlog that meets the DoR. That means there needs to be a continuous discussion about the top of the backlog. The rest of the scrum team should challenge the product owner to make sure nothing gets left there just “because”. Why is it there? What is its purpose and value in the long term?</p><p>Once everyone sees the value, it is necessary to evaluate the cost/benefit ratio. The devs need to think about roughly how complex it will be to develop such a user story. In order to do that, they will need to work out a general approach for the actual technical implementation and identify its prerequisites. If they are able to figure out what the size roughly is, even better.</p><p>However, from time to time, the devs won't be able to estimate the complexity, because the nature of the problem will be new to them. In such cases, our devs usually assigned someone to do research on the topic and roughly map the uncharted area. The knowledge gained was then used to size the item (and also later on, in the actual development). 
This research work is also tracked as a backlog item with its intended complexity, to roughly cap the amount of effort worth investing in it.</p><p>Now with the approximate complexity established, the team can determine whether the item is too large for a sprint. If it is, then back to the drawing board. How can we reduce or split it into more items? In our experience, in most cases, a user story could be further simplified and made more atomic to solve the root of the user's problem. Maybe in a less comfortable way for him, but it is still a valuable solution - remember the Pareto principle. The product owner needs the support of the devs to know how “small” a story needs to be, but he must be willing to reduce it, and not resist the splitting process. All of the pieces of the “broken down” stories are then treated as separate items with their own value and cost. But remember, there always needs to be a user value, so do vertical slicing only!</p><p>Then follows the question: “Can't we do something with a better ratio between value and cost instead?” In a similar fashion, the team then checks the rest of the DoR. How are we going to test it? Do we need to figure something out in advance? Is there anything about the UI that we need to think about before we get to planning? Have we forgotten anything?</p><p>Have we taken all dependencies into account? <strong>Are we able to start developing it and get it done right away?</strong></p><h2>Let the planning begin!</h2><p>Once all the questions are answered, and both the devs and the product owner feel comfortable and familiar with the top of the backlog, the team can consider itself ready for the planning meeting.</p><p>It is not necessary (and in our case was also not common) for all devs to participate in the refinement process during a sprint. They usually agreed on who was going to help with the refinement to give the product owner enough support, but also to keep enough devs working on the sprint backlog. At the planning meeting, the devs just reassure themselves that they have understood all the top stories in the same way, recap the approach to the development, distribute the workload and outline a time plan for the sprint.</p><p>The sprint retrospective is also a good time to review the DoR from time to time, in case the team encounters problematic patterns in the refinement process itself.</p><p>Proper and timely backlog refinement will prevent most last-minute backlog changes from happening. In the long run, it will save money and nerves. It is also one of the major contributors to the team's morale by making backlog stuff easier to plan and achieve.</p>#scrum;#agile;#project-management;#release-management
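<p>Just to illustrate how mechanical the check can be (the criteria below simply mirror the bullet list in this post and should be adapted per team), a definition of ready is essentially a checklist that every candidate item has to pass before planning:</p>
<pre><code># A definition of ready expressed as a plain checklist (illustrative criteria).
DEFINITION_OF_READY = [
    "no open technical or business dependencies",
    "meaning and purpose understood by the whole team",
    "complexity roughly estimated",
    "good cost/benefit ratio",
    "doable within one sprint",
]

def is_ready(item_checks):
    """An item is ready for planning only when every criterion is ticked off."""
    return all(item_checks.get(criterion, False) for criterion in DEFINITION_OF_READY)

# Example: one criterion is still open, so the item stays in refinement.
story_checks = {criterion: True for criterion in DEFINITION_OF_READY}
story_checks["complexity roughly estimated"] = False
print(is_ready(story_checks))  # False
</code></pre>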
The Last Scrum Guide Updatehttps://www.mobileit.cz/Blog/Pages/2020-scrum-guide-update.aspxThe Last Scrum Guide Update<p>The updated scrum guide has been out for more than a year, so many have already written about it. But I would like to take this opportunity to share a few thoughts about the scrum guide as such and also comment on some of the features in the latest update. </p><h2>Is the scrum guide underrated?</h2><p>The scrum guide is in my experience often overlooked by teams or organizations. I dare to believe I might know some of the reasons. When I was first learning about scrum and I was trying to live by it with my colleagues, we needed a lot of guidance. We needed a hands-on approach by our more agile-seasoned colleagues to help us set up the basics and actually start working in the framework. We needed to discuss the nitty-gritty details of the daily work and all the various aspects of backlog management. We sought out organizational how-tos and best practices to wrap our heads around this interesting, yet somehow elusive concept of scrum. </p><p>Scrum is easy to understand but difficult to master. It is mainly a cultural shift compared to traditional project management and development approaches, so why is it so hard to get right? Sometimes the actual cultural shift is the culprit, but it also demands organizational and planning discipline. </p><p>The actual nuts and bolts are often what teams struggle with and that's where they seek a lot of help in the earlier phases of their maturing. The scrum guide however does not give many answers in this regard. And that's why a lot of people don't find much practical use in reading that document. It seems to me it is rarely a go-to place for teams or scrum masters when searching for scrum answers.</p><p>But the scrum guide was never meant to be a detailed guide. And that is a good thing. Its purpose is to set the boundaries just where they are absolutely necessary, outline the philosophy, and leave the space for interpretation where the scrum team is supposed to bring in its brains and creativity. It tries to be as precise as possible without being exhaustive. And for this very reason, it needs to be minimalistic. (If only our legislators approached their work this way…)</p><p>In my opinion, the more the team matures, the less it should try to seek exact how-tos about living in scrum, but the more it should return to the bare basics - to the scrum guide.</p><p>That helps the team gain some distance from the daily routines, look at what it is doing from above, get rid of unnecessary habits, and focus on the actual philosophical value rather than the process. Mature teams have a pretty good idea about the philosophy of scrum and want to live by it. Going back to the guide can be an enlightening experience for them. Approach the scrum guide as something that has been triple distilled before it got its current shape.</p><h2>Recent updates</h2><p>At the end of 2020, the scrum guide received yet another update. Although some of the changes seem minor or sound just like a wording change, to me it seems like a good step towards helping to clear up several common misconceptions. You can find the changelog easily on the internet, so it is not my aim to cover the update extensively. I want to talk about a few particular changes, which I find the most interesting.</p><h3>1. 
Ditched the 3 questions for dailies</h3><p>Everyone knows the 3 questions that are the core of the daily - what did I do, what will I do, and do I have any impediments, right? So the guide got rid of them. Why is it a good thing? Because the ultimate purpose of the daily is for the team to evaluate the sprint backlog's achievability and to make decisions based on this evaluation. Decisions that help the team create the most value within the sprint (and still meet the sprint goal). Maybe by dropping some sprint backlog items. Maybe by re-assigning backlog items between the developers. Maybe by helping each other out. </p><p>These questions were sometimes obscuring the actual goal of the daily. As if it was forbidden to talk about anything else other than that. Does it mean that these 3 questions should not be used from now on? No, it doesn't. It certainly is necessary that the devs share their progress and sync on it. If the team finds them useful, it will use them. The guide just suggests that these questions are not the pivotal point of the daily and gives more freedom to the team to tailor the daily to their needs.</p><h3>2. Stressing the core scrum values and principles</h3><p>The guide now tries to be clearer about the core scrum values and principles of empiricism. It reminds us that being transparent, inspecting, and adapting based on the findings is a vital part of the process. This is as opposed to extensive planning and attempting to achieve perfection on the first try.</p><p>Commitment, openness, and respect are the values that shape the actual outcome. The often-used practices like estimating, tracking velocities, and plotting burn-downs are icing on the cake. But the team should be able to bring high value even without them.</p><h3>3. Team unification</h3><p>Formerly, the scrum guide defined a development team and a scrum team. Scrum team = development team + scrum master + product owner. From now on, there is just a scrum team. </p><p>To me this is not just a cosmetic change. Sometimes the teams gained a dynamic of the product owner being an outsider. A customer for the team. Subliminally perceived as a competitor or a hindrance to the development team. In some teams, even the scrum master was seen as an outsider.</p><p>Team unification is an attempt to get rid of the us-and-them mentality. When perceived right, all the team members have a common goal to solve problems to create high business value. And the devs can (and should) be in close touch with the product owner to work out how to keep achieving that continuously. It needs to be clear that to the developers, the product owner is not an enemy, but someone they can get answers, opinions, and business insights from in order to be efficient. They can negotiate with him about the possible approaches to solving particular problems. When there is a good idea to achieve something in a more efficient or simpler way than originally intended, devs should understand that both they and the product owner will benefit from it. And the product owner should understand that too. They are teammates.</p><h3>4. Commitments for scrum artifacts</h3><p>The hierarchy of artifacts and their commitments is now clearer. A product has its goal. That determines the product backlog's priority. A sprint has a goal, which defines the sprint backlog. And finally, a product increment must meet a definition of done. 
This is nothing really revolutionary or entirely new, but more clearly formulated, something that most people intuitively already sensed and used.</p><p>In general I see this update as an evolution rather than a revolution. Or better said - another cycle of distillation. Cheers!</p> <br>#scrum;#agile;#project-management
Relative Estimateshttps://www.mobileit.cz/Blog/Pages/relative-estimates.aspxRelative Estimates<p> In my past articles related to <a href="/Blog/Pages/scrum-smells-6.aspx">project</a> and <a href="/Blog/Pages/scrum-smells-4.aspx">sprint planning</a>, we touched on the concept of relative estimates. Those articles were focused more on the planning aspect and the usage of the estimates and less on the actual process of estimation. So let's talk about estimation techniques my colleagues and I found useful. </p><h2>Exact estimate</h2><p> I already touched on this <a href="/Blog/Pages/scrum-smells-5.aspx">before</a>: there is a huge misunderstanding about what makes a feature development estimate exact. People intuitively think that an exact estimate is a precise number with no tolerance. Something like 23.5 man-days of work. Not a tad more or less. </p><p> How much can we trust that number? I think we all feel that it's not much, unless we know more about how the estimate was created. What precise information did the estimator base his estimate on? What assumptions did he make about future progress? What risks did he consider? What experience does he have with similar tasks? </p><p> We use this knowledge to make our own assessment of how likely it is that the job's duration will vary from the estimate. What we do is make our own estimation of a probable range where we feel the real task's duration is going to be. </p><p> It is quite a paradoxical situation, isn't it? We force someone to come up with precise numbers so that we can do our own probability model around it. Wouldn't it be much more useful for the estimate to consider this probability in the first place? </p><p> That also means that (in my world) a task estimate is never an exact number, but rather a qualified prediction of the range of probability in which a certain job’s duration is going to land. The more experience with similar tasks the estimator has, the narrower the range is going to be. A routine task that one has already done hundreds of times can be estimated with a very narrow range. </p><p> But even with a narrow range, there are always variables. You might be distracted by someone calling you. You mistype something and have to spend time figuring it out. Even though those variables are quite small and will not likely alter the job's duration by an order of magnitude, they still make an absolutely precise estimate impossible. </p><h2>Linear and non-linear estimates</h2><p> On top of all that, people are generally very bad at estimating linear numbers due to a variety of cognitive biases. I mentioned some of them here [link: Wishful plans - Planning fallacies]. So, not just from our own experience, we have found that it is generally better to do relative estimates. </p><p> What is it? Basically, you are comparing future tasks against the ones that you already have experience with. You are trying to figure out if a given task (or user story or job or anything else for that matter) is going to be more, less, or similarly challenging compared to a set benchmark. The more the complexity increases, the more unknowns and risks there generally are. That is the reason why relative estimates use non-linear scales. </p><p> One of the well-known scales is the pseudo-Fibonacci numerical series, which usually goes like 0, 1, 2, 3, 5, 8, 13, 20, 40, 100. An alternative would be T-Shirt sizes (e.g. XS, S, M, L, XL, XXL). The point is that the more you move up the scale, the bigger the increase in difference from the size below. 
That takes out a lot of the painful (and mostly wildly inaccurate) decision-making from the process. You're not arguing about whether an item should be sized 21 or 22. You just choose a value from the list. </p><h2>Planning poker</h2><p> We had a good experience with playing planning poker. Planning poker is a process in which the development team discusses aspects of a backlog item and then each developer makes up his mind as to how “big” that item is on the given scale (e.g. the pseudo-Fibonacci numbers). When everyone is finished, all developers present their estimates simultaneously to minimize any mutual influence. </p><p> A common practice is that everyone has a deck of cards with size values. When ready, a developer will put his card of choice on the table, face down. Once everyone has chosen his card, all of the cards are presented. </p><p> Now each developer comments on his choice. Why did he or she choose that value? We found it helpful that everyone answers at least the following questions: </p><ul><li>What are similarly complex backlog items that the team has already done in the past?</li><li>What makes the complexity similar to such items?</li><li>What makes the estimated item more complex than already done items, which were labeled with a complexity smaller by one size degree?</li><li>What makes the estimated item less complex than already done items, which were labeled with a complexity higher by one size degree?</li></ul><p> A few typical situations can arise. </p><h3>1) Similar estimates</h3><p> For a mature team and well-prepared backlog items, this is a swift process, where all the individual estimates are fairly similar, not varying much. The team can then discuss and decide together what value it will agree on. </p><h3>2) An outlying individual estimate</h3><p> Another situation is that all individual estimates are similar, but there are one or two that are completely different. This might have several causes. Either that outlying individual has a good idea that no one else has figured out, or he misunderstands the backlog item itself. Or he has not realized all the technical implications of the development of that particular item. Or he sees a potential problem that the others overlook. </p><p> In such situations we usually took the following approach. People with lower estimates explain the work they expect to be done. Then the developers with higher estimates state the additional work they think needs to be done in comparison to the colleagues with lower estimates. By doing this, the difference in their assumptions can be identified and now it is up to the team to decide if that difference is actually necessary work. </p><p> After the discussion is finished, the round of planning poker is repeated. Usually, the results are now closer to the first case. </p><h3>3) All estimates vary greatly</h3><p> It can also happen that there is no obviously prevailing complexity value. All the estimates are scattered across the scale. This usually happens when there is a misunderstanding about the backlog item's actual purpose and its business approach. In essence, one developer imagines a simple user function and another sees that a sophisticated mechanism is required. </p><p> This is often a symptom of a poorly groomed backlog that lacks mutual understanding among the devs. In this case, it is usually necessary to review the actual backlog item's description and goal and discuss it with the product owner from scratch. The estimation process also needs to be repeated. 
</p><p> Alternatively, this can also happen to new teams with little technical or business experience of their product in the early stages of development. </p><h2>It's a learning process</h2><p> Each product is unique, each project is unique, each development environment is different. That means the development team creates its perception of complexity references anew when it starts a project. It is also a constant process of re-calibration. A few backlog items that used to serve as a benchmark reference size at the beginning of a project usually need to be exchanged for something else later on. The perception of scale shifts over time. </p><p> The team evolves and gains experience. That means the team members need to revisit past backlog items and ask themselves if they would have estimated such an item differently with the experience they have now. It is also useful, at the end of a sprint, to review items that in the end were obviously far easier or far more difficult than the team initially expected. </p><p> What caused the difference? Is there any pattern we can observe and be cautious about in the future? For instance, our experience from many projects shows that stuff that involves integrations with external systems usually turns out to be far more difficult than the team anticipates. So whenever the devs see such a backlog item, the team knows it needs to think really carefully about what could go wrong. </p><h2>Don't forget the purpose</h2><p> In individual cases, the team will sometimes slightly overestimate and sometimes slightly underestimate. And sometimes estimates are going to be completely off. But by self-calibrating using retrospective practices and the averaging effect over many backlog items, the numbers can usually be relied on in the long run. </p><p> Always bear in mind that the objective of estimating backlog items is to produce a reasonably accurate prediction of the future with a reasonable amount of effort invested. This needs to be done as honestly as possible given the current circumstances. We won't know the future better unless we actually do the work we're estimating. </p><br><br>#scrum;#agile;#project-management;#release-management
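<p>As a hedged illustration of the three situations above (the scale is the one from this post; the way of measuring "spread" is just one possible choice), a few lines of Python can snap individual opinions to the pseudo-Fibonacci scale and flag how scattered a planning poker round is:</p>
<pre><code>SCALE = [0, 1, 2, 3, 5, 8, 13, 20, 40, 100]

def snap_to_scale(value):
    """Pick the nearest value on the pseudo-Fibonacci scale."""
    return min(SCALE, key=lambda step: abs(step - value))

def poker_round(estimates):
    """Classify a round by how far apart the individual estimates are on the scale."""
    indices = sorted(SCALE.index(snap_to_scale(e)) for e in estimates)
    full_spread = indices[-1] - indices[0]
    core_spread = indices[-2] - indices[1]  # ignore the single lowest and highest vote
    if full_spread <= 1:
        return "similar estimates - agree on a value together and move on"
    if core_spread <= 1:
        return "an outlying estimate - compare assumptions and vote again"
    return "estimates scattered - revisit the item with the product owner"

print(poker_round([5, 5, 8, 5]))     # similar estimates
print(poker_round([3, 5, 5, 13]))    # an outlying estimate
print(poker_round([2, 8, 20, 100]))  # estimates scattered
</code></pre>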
Marginal Utility and Product Managementhttps://www.mobileit.cz/Blog/Pages/marginal-utility.aspxMarginal Utility and Product Management<p>Today I'd like to wade into the waters of economics. There are certain concepts that I believe are highly relevant to the way we build software products. You might be wondering how these two fields come together, but bear with me: they are more connected than we often realize. Understanding some basic ideas of behavioral economics can help us steer software projects as it provides us with a different perspective on the prioritization and value-maximization process, which I write about frequently.</p><h2>Utility</h2><p>Before we get to marginal utility, we need to understand some basic concepts. What is utility anyway? It is an economic term that refers to how much satisfaction a person will get from consuming a particular good or service. The important part to understand is that this is the total utility one gets from the consumption of certain goods. It is given by the sum of the satisfaction one gets from consuming all the individual parts of it.</p><p>Let's look at an example. Assume you are eating a pizza. The total utility you get from that pizza is how much satisfaction it provides for your particular needs when eating all the single slices. In this case the need you satisfy is typically just hunger (need to eat and/or cravings).</p><p>How can we measure utility? Unfortunately, most of the time it is not directly possible. You can't objectively measure how much satisfaction eating a pizza can bring. On top of that it is highly individual. The satisfaction (utility) you get from eating pizza will probably differ greatly from the satisfaction that the same pizza will bring to Bob, who is allergic to gluten and ate lunch a few minutes ago.</p><p>But what we can do is compare the value that different goods or services can bring to you. At a given moment you can compare the utility of a pizza to that of a steak. The one you'd rather eat is the one with the higher utility to you. You can also compare it to the satisfaction a bottle of water would bring. Or a car. Or a house. You can assign some abstract dimensionless value to each of the potential goods (similar to what we do with story points) and those can be compared.</p><p>It is obvious that utility, even for a single individual, varies greatly depending on the context. You might say a house would be of a much higher value than a bottle of water. But would you still insist on this if you were parched and lost somewhere in a desert? The famous line <em> <q>My kingdom for a horse</q></em> is a good demonstration of this phenomenon.</p><p>Utility is context dependent and that is also why people are willing to pay a different price for the same good when the situation changes. Buying used skis is usually going to cost you less in summer than in winter because people don't demand them as much. Trying to sell ice cream on a beach in summer is an easier job than doing the same in winter time. Context matters for utility.</p><h2>Marginal utility</h2><p>Knowing that, what is marginal utility? This concept measures how much utility the next unit of a good or service will bring. The total utility is then the sum of all marginal utilities you get from individual parts of that good or service.</p><p>Back to that pizza. Eating the first slice will give you a lot of satisfaction because you went from starving to having something to eat. 
That was the marginal utility of the first slice.</p><p>The second slice will probably be almost equally satisfying because you are still hungry. But will the 8th slice be as satisfying? Probably not because your hunger is gone by the time you get to it. So gradually we usually tend to get lower and lower marginal satisfaction from each additional unit of a particular good we consume.</p><p>An interesting fact is that marginal utility can also be negative. How much satisfaction would eating the 24th slice of pizza bring you? It would likely not be something you would voluntarily eat (unless you're attending some who-eats-the-most-pizza contest). That one extra slice of pizza could make you sick and therefore cause a negative experience. The marginal utility of the 24th slice would be negative and the total utility of the whole meal would start getting lower.</p><h2>How does all this connect with software development?</h2><p>We're building a software product that serves people. Their motivation to use it stems from the fact that they get some satisfaction out of it. Their needs get fulfilled when something that was once difficult gets easier thanks to our product.</p><p>When building a software product, we have limited resources to fulfill those needs. So we all know we should maximize the value of the time (or other resources) we invest into building it. That is no surprise.</p><p>How can this economic concept help us? I have seen a natural tendency by the product owner (and the rest of the development team including the stakeholders) to attempt to define backlog items as the final form of a software feature; a form that immediately brings users the <em> <q>fullest</q></em> and most <em> <q>perfect</q></em> possible behavior.</p><p>I won't go into deeper detail here, but if you are interested please check one of my <a href="/Blog/Pages/product-development.aspx">previous posts</a> on this topic.</p><p>Understanding the marginal utility concept can help us overcome that psychological barrier of not wanting to reduce individual backlog items to atoms. We want to bake the whole pizza with prosciutto and mozzarella on top of it, serve it with water and wine on a table with a nice view of the sunset because that is what we imagine the product is supposed to look like eventually.</p><p>But let's view it from a different perspective. Why do users want our software product? If we use the pizza analogy, what is the <em>first slice of pizza</em> that we can give them? The slice that will not make users feel fulfilled yet, but will at the same time keep them from starving. We can apply the same approach to the whole product backlog preparation and simplification of the individual items.</p><p>Frequently a user story as it is written solves multiple users' needs at once without us realizing it. Remember the email example from the post I mentioned above? A typical user story could say that when you view an email, it gets marked as read, it gets rich-text formatted, embedded images get displayed etc. That is how you would imagine such a feature to ultimately behave.</p><p>What happens when we apply the marginal utility concept to that story? What is the <em>first slice of pizza</em> that we can develop and decide about the rest later? What aspect of that story has the highest <em>marginal utility</em> for the user? Most likely the ability to actually read the text in that email. 
The <em>marginal utility</em> of being able to read the content is typically greater than that of rich-text formatting, image display, or marking messages as read. Suddenly it becomes obvious that the story can be broken down into more atomic slices.</p><p>Asking this question over and over again usually helps us make up our minds about how worthwhile it is to invest effort into something. It also makes negotiations with stakeholders easier because it somewhat materializes the cost/benefit concept.</p><h2>Know the users</h2><p>As stated before, utility is highly individual and context-dependent. That is why the user stories start with <em> <q>As Bob the truck driver, I'd like to have...</q></em>. A software feature for one person will likely have a totally different value (utility) compared to another. That is the reason it is vital to know well who we are building the product for. To know their real-life problems.</p><p>Only then can we evaluate the marginal utility of our product's features relevantly <em>for them</em>. The capability to view images embedded into emails is going to give wildly different satisfaction to a clerk receiving simple instructions in the text compared to a gallery owner wanting to receive previews of paintings from artists.</p><p>When we know the audience we can reasonably decide how we can split backlog items into more atomic ones and identify what <em>the next slice of pizza</em> should be. That's also why getting real-world feedback as early as possible is so crucial for relevant decision-making. The fact that utility is highly contextual to a particular person is the reason why <em>user</em> stories are called <em>user</em> stories in the first place.</p><h2>Final words</h2><p>Managing the backlog well is a difficult job. We need to keep it meaningfully prepared, juggle the items' values, and compare the costs and returns. That's why it's good to take a chance and look at the same thing from a different perspective by applying some basic economic concepts to it. I hope this sparked some inspiration for your own development process.</p> <br>#agile;#development;#project-management;#scrum
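<p>For a tiny numeric illustration of the concept (the satisfaction values are of course invented), summing the marginal utility of each slice shows where the total utility of the meal stops growing:</p>
<pre><code># Invented, dimensionless satisfaction gained from each additional slice of pizza.
marginal_utility = [10, 9, 7, 5, 3, 1, 0, -2]  # the last slice makes you feel sick

total = 0
best_total, best_slice = 0, 0
for slice_number, gain in enumerate(marginal_utility, start=1):
    total += gain
    if total > best_total:
        best_total, best_slice = total, slice_number
    print(f"slice {slice_number}: marginal {gain:+d}, total {total}")

print(f"Total utility peaks after slice {best_slice}; eating more only lowers it.")
</code></pre>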
Scope and Time Fixationhttps://www.mobileit.cz/Blog/Pages/scope-and-time-fixation.aspxScope and Time Fixation<p>When planning for a milestone such as a release or a development finish, there are several approaches we can choose from. A traditional (and often intuitively chosen) approach is to try to predict and fix all the project imperatives. What does this mean?</p><p>To reduce the number of unexpected events, it is common at the beginning of the project to make a detailed plan months in advance and expect the development teams to keep it. </p><p>Let's say we want to do a major release of our software. The product owner and the stakeholders make their expectations about a release date, the content (scope) it’s going to include, and the cost. In the best-case scenario, their expectation is based upon a discussion with the development team.</p><p>The development proceeds and the release milestone approaches. More often than not, the team encounters unpredicted problems and some of the features are delayed. As the release date approaches, it becomes clear that the release plan is endangered. The team starts to get anxious. Then, shortly before the release date, it is evident that not all of the planned scope is going to be finished. As a result, the date gets postponed by managers with the hope that the remaining scope will be finished on time.</p><p>Moreover, additional requirements from the stakeholders arrive and the product owner wants to include them in the release to gain positive perception. The release has already been postponed once, so it would be great to overperform this time. As usually happens in development, fresh bug reports arrive. Let's include corresponding fixes in the release too. A vicious cycle begins.</p><p>The point is that the more a release keeps getting postponed, the higher the temptation to include additional content to make up for previous disappointment. Usually, a release is postponed multiple times and gets delivered weeks or months later with a great deal of nervousness.</p><p>What helped our teams in such situations?</p><h2>Back to the basics</h2><p>The first thing you hear when learning about project management approaches is how time, cost, and scope work together. The traditional approach is that in the planning phase of a project, it is decided what functions we will create, how much it will cost, and how long it will take. As I covered in my <a href="/Blog/Pages/scrum-smells-7.aspx">previous posts</a>, I believe it is a futile effort to attempt to make an exact determination of <em>all of the three project imperatives at once</em> as reality rarely follows them in the long run.</p><p>When we define a fixed scope, fixed timeline, and fixed cost, there is virtually no room for flexibility if something goes wrong. From my observations, the effect is usually that in the (probable) case of unexpected problems, it is the quality of the delivered scope that gets sacrificed. In other words, it is an attempt to make the scope somehow flexible by re-interpreting its definition.</p><p>This usually leads to the project manager trying to negotiate a later time for a handover (or release). There is a tendency to check off as many items of the scope list as possible, so that it looks good in reports and on paper the project can proceed to the next phase (possibly allowing for a payment milestone). 
But under the hood, shortcuts were taken and oftentimes the pressure to <em>deliver something</em> comes at the cost of the software being defective and half-functional. </p><p>In scrum terminology the scope items are not done. There is a technical and/or business debt. Contrary to the definition of done, there is still known work to be done on these items.</p><p>The costs for paying off the debt are considerably higher than the effort necessary in getting it right at the first go. In many cases, it also means that <em>nothing</em> can be released due to numerous defects. The whole package remains unacceptable. </p><p>Agile approaches try to keep one of the three imperatives (scope, time and costs) flexible. We have a given team with a certain development capacity. That team consumes a predictable budget over a given period of time. What options do we have?</p><ul><li>Team capacity (staffing): affects the throughput over time. Changing it influences the rate at which budget gets consumed and scope gets developed. In this article I assume that the team is naturally using its resources effectively and attempts to improve its performance over time as it matures.</li><li>Development duration (time): affects how much scope can be delivered with a given capacity and lets us cap the invested budget.</li><li>Scope: changing scope will affect how long a given team consumes the budget.</li></ul><p>So if we have a specific budget we need to invest, we can set a suitable team, calculate how long the budget will last, and keep the scope flexible. Or the other way around, if we want a precise scope to be developed, the timeline becomes the bumper. Also, the team size needs to be chosen adequately. We need to be able to supply it with high value business requirements meeting the definition of ready.</p><h2>How to combine time and scope</h2><p>Unless we are in a very rare case when the product aspects are well predictable (process-like activities), we must accept the fact that we need to prioritize the project imperatives. In my experience trying to juggle both scope and time at once usually leads to a lot of confusion and it often sabotages any rhythm the team may have. I believe prioritizing one over the other is the way to go.</p><p>On our projects we usually keep the team capacity at a reasonable level and then prioritize time over scope. That means we set a particular milestone in the calendar and concentrate the activities into making it happen. The scope is the flexible element. </p><p>Let's use an example. The product owner wants to make a release of an update on the market. He deems that it’s worth doing it a month from now because that will bring a considerable improvement to the user and will create a positive perception.</p><p>The product owner and the team make their preliminary prediction about what backlog items they expect to deliver. The team spends that month working on them. As usual, they discover considerable complications with some backlog items along the way and major defects are also reported from the users. Then it becomes clear that not all of the originally expected backlog items will get done by the release date.</p><p>The development team, together with the product owner re-prioritize the backlog to incorporate these new findings. It is decided that the resolution of certain bugs would bring considerable value. Problematic backlog items are put off, so that the team does not spend effort on something that will probably not be finished. 
The team focuses on the sole goal of making the product releasable at the given time. They are not obsessed with the necessity of delivering absolutely <em>everything</em> they originally planned. Functionality that does not make it into this release will be included in the next one.</p><p>Figuratively speaking, we need to draw a horizontal line in the backlog. There are two options for where to draw it. Its position is either fixed and the exact development duration is adjusted, or the time is set and the position of the line in the backlog can be adjusted along the way - it floats.</p><p>By keeping the backlog constantly refined, well prioritized, and the backlog items as atomic as possible, we maximize the chance that what falls below the line isn't vital. (A small code sketch of this line idea is shown at the end of this post.)</p><p>In the example above we used a release as a milestone. But a milestone can be any other critical event - such as the consumption of the project's allocated budget.</p><h2>It sounds easy, but...</h2><p>This idea is nothing revolutionary. But putting it into practice usually means avoiding a few frequent mistakes:</p><ul><li>Don't let stakeholders expect strict time and scope fixation. It takes effort to explain that there always needs to be a way to minimize the risk of critical items falling below the line.</li><li>Don't let stakeholders put you into a situation where <em>everything is equally important</em>. It is the job of the product owner (with the help of the scrum master) to educate them about this.</li><li>Don't be tempted to postpone the release milestone just a little so that you can fit in that one extra valuable item. It's better to set a regular schedule for releases and be safe in the knowledge that the next release is never far away.</li><li>Don't juggle both scope and timeline at once. Prioritize one over the other.</li><li>Don't cling to past predictions. The situation always evolves, so constantly evaluate whether the prediction is still valid.</li><li>Don't get caught up in details. Use the dailies to take a step back and see how the team is progressing towards the goal and whether there are any new obstacles in the way.</li></ul><p>Psychology plays a major role when things don't follow the budget-scope-time triangle, and it would be naive not to proactively prepare for when that happens. Deciding on a general strategy - whether to fix time or scope - takes a lot of weight off the team's shoulders and makes development much more predictable and manageable. So always think about what can be sacrificed from the original predictions if things go wrong.</p><br>#agile;#project-management;#release-management;#scrum
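<p>To make the floating-line idea from this post more tangible, here is a minimal Python sketch. It assumes a prioritized backlog with story-point estimates, a known average velocity, and a fixed release date; the items, numbers, and dates are made up for the illustration:</p>
<pre>
from datetime import date

# Illustrative, prioritized backlog: (item, story-point estimate).
backlog = [
    ("Checkout redesign", 13),
    ("Push notifications", 8),
    ("Crash fix in payment flow", 5),
    ("Profile pictures", 8),
    ("Dark mode", 20),
]

def items_above_the_line(backlog, today, release_date, sprint_weeks, avg_velocity):
    """Return the prioritized items that still fit before a fixed release date."""
    sprints_left = (release_date - today).days / 7 / sprint_weeks
    capacity = sprints_left * avg_velocity  # story points the team can still finish
    selected, used = [], 0
    for item, points in backlog:
        if used + points > capacity:
            break  # this item and everything below it falls under the line
        selected.append(item)
        used += points
    return selected

# A release fixed one month from "today", two-week sprints, an average velocity of 20 points:
print(items_above_the_line(backlog, date(2022, 5, 1), date(2022, 6, 1), 2, 20))
</pre>
<p>With these example numbers, the first four items stay above the line and "Dark mode" slips into the next release; as the velocity or the release date changes, the line simply floats up or down the backlog.</p>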
Our Minimalistic Approach to 1st and 2nd Level Supporthttps://www.mobileit.cz/Blog/Pages/Our-Minimalistic-Approach-to-1st-and-2nd-Level-Support.aspxOur Minimalistic Approach to 1st and 2nd Level Support<p>On one of our projects, we're working in a scrum team of around 10 people. The product has gradually been rolled out to more users and is getting traction. This is good because it provides us with a tangible metric of our product's reach and it gives us more opportunities to validate our ideas in the real world as we see how users react to them.</p><p>Some of the users are also proactively providing feedback about things that are cumbersome, hard to use, or outright buggy. So far we (with a major contribution from our product owner) have managed to gather and process the feedback in an organic way. </p><p>Over time, this activity has grown into a considerable effort, has started to consume precious time, and we've begun to feel like things are slipping through the cracks. We needed to find a more robust way to handle user requests.</p><p>The goals we want to achieve by having a link to the users are far from unique. To name a few:</p><ol><li>Collect defect/incident reports</li><li>Collect feature requests</li><li>Help users with troubleshooting</li><li>Have a feel for the general user sentiment</li><li>Respond to and inform users</li></ol><p>We don't want just a <q>passive</q> way of collaborating with users (meaning only collecting input); we'd like to be able to respond to their reports and questions in a relevant way. We want them to know what they can expect from us and likewise let them know they're heard and that their contribution is welcome.</p><p>Unsurprisingly, we discovered the majority of user requests were repeats of the same things. Users either did not understand a certain feature, or their general expectations were misaligned with the product itself.</p><p>Many of the user issues could be resolved quite easily by answering directly and potentially providing further explanation if needed. It became obvious that we needed to focus on improving the UX here and there, or to bring in a new feature where users repeatedly struggle with certain actions.</p><p>But in order to draw useful conclusions from the numerous inputs and to convert them into prioritized backlog items, we needed more <q>human processing power</q> and a way to somehow standardize the inputs.</p><p>This led us to introduce a combination of a few new concepts.</p><h2>Introduction of bug reports</h2><p>We implemented a bug report function. We wanted to keep it as simple for the user as possible. So after clicking the <em>report bug</em> icon, the only thing the user needs to do is write an answer to the question <q>What did you expect to happen?</q> and click a confirmation button.</p><p>We attach a lot of metadata to this, and the bug report is then visible in our product's back-office web. This alleviates the need to read through emails and keeps the form uniform to some degree. It also automatically collects crash logs.</p><p>So far we have decided not to integrate this with JIRA, for a reason I will explain in a minute.</p><h2>Introduction of support levels</h2><p>The customer for whom we are building the product is a large corporation, and one of their business activities is software services. That means they have a department full of people experienced in 1<sup>st</sup> level support activities. 
So collaborating with them on this appeared to be a logical option, as using in-house personnel was preferred.</p><p>This idea nonetheless posed a few challenges:</p><ol><li>How to define competencies?</li><li>How to bring them to the level of knowledge necessary for stand-alone work?</li><li>How to help them with unexpected things?</li><li>How to report problems they can't resolve on their own?</li><li>How to keep them up-to-date with the latest fixes and features?</li></ol><p>We decided to follow the 3-level support model. In our development process it meant the following:</p><h2>1<sup>st</sup> level: </h2><p>Roughly 4 people from the help desk department were selected to join us part-time. Their competencies within our development process are:</p><ul><li>Answering general user questions.</li><li>Resolving user issues that don't require a change of system data or the system itself.</li><li>Passing unresolvable issues to 2<sup>nd</sup> level support.</li><li>Collecting data about frequent bugs.</li><li>Bringing ideas to mitigate the most frequent user complaints/questions.</li></ul><h2>2<sup>nd</sup> level: </h2><p>For the time being, we have decided to bring in one extra person who will tackle this alongside our product owner. Their competencies are:</p><ul><li>Resolving incidents stemming from bad system data via our system's back-office web.</li><li>Resolving issues that 1<sup>st</sup> level support lacks knowledge of.</li><li>Adding new articles to the knowledge base for 1<sup>st</sup> level support.</li><li>Passing issues to 3<sup>rd</sup> level support.</li></ul><h2>3<sup>rd</sup> level: </h2><p>This is the actual scrum team, so its job is to prioritize and implement what comes through the support pipeline.</p><h2>Setting up new flows</h2><p>In order to keep the new 1<sup>st</sup> and 2<sup>nd</sup> level support colleagues in the loop, we invited them to our asynchronous communication channels. This way we could quite flexibly answer their questions without the need for yet another regular meeting, and it seems to have helped their learning process. As of now, they are already quite knowledgeable and their questions are getting more and more <q>advanced</q>. We feared the number of questions could overwhelm us, but we have managed, and the ad-hoc communication is effective.</p><p>We also invited all of them to the sprint review meetings, so that they are aware of the current situation and know when the bugs they reported are done. This is also a perfect place for them to share any general observations and opinions about how they perceive the priorities of the items they reported compared to other backlog items.</p><p>For the most frequently occurring questions or problems, we created a simple knowledge base that the 1<sup>st</sup> level colleagues can use while troubleshooting user issues. 2<sup>nd</sup> or 3<sup>rd</sup> level support people occasionally add new articles if a question gets repeated.</p><p>We've been using JIRA as the tool for keeping our backlog since the beginning of the project. Our fear was that the introduction of formalized support levels could bring a large overhead, so we decided to keep the tools as simple as possible. We don't need any sophisticated helpdesk system. </p><p>Therefore, we decided to create two new issue types in JIRA, right in our project space. 
We called them (no surprise here) a <em>1<sup>st</sup> level ticket</em> and a <em>2<sup>nd</sup> level ticket</em>. Those tickets got a very basic state flow: <em>Open → In progress → Closed</em>, plus two more states: <em>Waiting for a reply</em> and <em>Reply obtained</em>.</p><p>We decided to use the original JIRA project space for all three support levels. The main reason was to let everyone inspect the backlog if necessary and check how things are progressing. The second reason was to avoid complexity in our flows.</p><p>By doing this, 1<sup>st</sup> and 2<sup>nd</sup> level support people can also comment on any existing issue in the backlog in case they have relevant additional information. In order to keep things organized, we simply created a few new quick filters to view <em>only the 1<sup>st</sup> level tickets</em>, the <em>2<sup>nd</sup> level tickets</em>, or to <em>hide them</em> and view the backlog just as it was before the 1<sup>st</sup> and 2<sup>nd</sup> level people joined in.</p><p>Both of the new issue types also received a button to conveniently convert the issue to a <q>higher</q> one. Meaning that from a <em>1<sup>st</sup> level ticket</em> we create a <em>2<sup>nd</sup> level ticket</em> with a click of a button, and from a <em>2<sup>nd</sup> level ticket</em> we create a <em>bug</em>​ or a <em>user story</em> in the same way. In addition, we created two new boards to display just the respective issue types and their current states. (The whole escalation flow is sketched in the short code example at the end of this post.)</p><h2>Time will tell</h2><p>So far this model has worked well for us. We get a feeling of safety, knowing there is no negative sentiment accumulating among the users that we wouldn't know about and that could explode later. The test of time will tell if we need to modify the process. Many more users are expected to start using the product, and as with anything in such an environment, we will certainly need to adapt eventually. We just don't yet know when. I will come back with an update after we're wiser again.</p><br>#agile;#project-management;#scrum
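<p>To make the flow above concrete, here is a small Python sketch that models the ticket types, their states, and the <q>convert to a higher level</q> escalation. It is only an illustration of the process described in this post, not of our actual JIRA configuration; the class and field names are made up for the example:</p>
<pre>
from dataclasses import dataclass
from enum import Enum

class State(Enum):
    OPEN = "Open"
    IN_PROGRESS = "In progress"
    WAITING_FOR_REPLY = "Waiting for a reply"
    REPLY_OBTAINED = "Reply obtained"
    CLOSED = "Closed"

# Escalation path described in this post: a 1st level ticket can become
# a 2nd level ticket; a 2nd level ticket can become a bug or a user story.
ESCALATES_TO = {
    "1st level ticket": ["2nd level ticket"],
    "2nd level ticket": ["Bug", "User story"],
}

@dataclass
class Ticket:
    issue_type: str
    summary: str
    state: State = State.OPEN

    def escalate(self, target_type: str) -> "Ticket":
        """Close this ticket and open a follow-up issue one support level higher."""
        if target_type not in ESCALATES_TO.get(self.issue_type, []):
            raise ValueError(f"A {self.issue_type} cannot be escalated to a {target_type}")
        self.state = State.CLOSED
        return Ticket(issue_type=target_type, summary=self.summary)

# A user question that 1st level support cannot resolve on its own:
first_level = Ticket("1st level ticket", "User cannot log in after the latest update")
second_level = first_level.escalate("2nd level ticket")  # handled alongside the product owner
backlog_item = second_level.escalate("Bug")              # ends up in the scrum team's backlog
print(first_level.state, second_level.state, backlog_item.state)
</pre>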
Scrum smells, pt. 5: Planning fallacieshttps://www.mobileit.cz/Blog/Pages/scrum-smells-5.aspxScrum smells, pt. 5: Planning fallacies<p>As the scrum godfathers said, scrum is a lightweight framework used to deal with complex problems in a changing environment. Whether you use it for continuous product development or in a project-oriented mode, stakeholders always demand timelines, cost predictions, roadmaps, and other prophecies of this sort. That is perfectly understandable and justifiable - in the end, the project or product development is there to bring value to them, and financial profit is certainly one of these values.</p><p>Many of us know how painful the inevitable questions about delivery forecasts can be. When will this feature be released? How long will it take you to develop this bunch of items? Will this be ready by Christmas? We would, of course, like to answer them in the most honest way: "I don't have a clue". But that rarely helps, because even though it is perfectly true, it is not very useful and does not help the management very much. For them, approving project development based on such information would be like writing a blank check.</p><p>I've seen several ways in which people approach such situations. Some just give blind promises and hope for the best, while feeling a bit nervous in the back of their minds. Others go into all the nitty-gritty details of all the required backlog items, trying to analyze them perfectly and then give a very definitive and exact answer, while feeling quite optimistic and confident that they have taken everything into account. Some people also add the bottom line: "...if things go as planned".</p><h2>If things go as planned</h2><p>Well, our experience shows that all these approaches usually generate more problems than benefits, because the impact of that innocent appendix "...if things go as planned" proves to be massive and makes the original plan fall far from reality. It actually stems from the very definition of the words project and process. A process is a set of actions taken to achieve an expected result, and this set is meant to be repeated on demand. On the other hand, a project is a temporary undertaking that aims to deliver a unique outcome or product. While a process is meant to be triggered as a routine and its variables are well known and defined, a project is always unique.</p><p>So, a project is something that people do for the first time, to achieve something new. And when we do something for the first time, there are two kinds of unknowns involved: the known unknowns (knowledge we consciously know we are lacking) and the unknown unknowns (stuff we don't know about and don't even realize it). Based on the nature and environment of the project and our experience in the field, we can identify some of the unknowns and risks to a certain degree. But I don't believe there is a project where all the potential pitfalls can be identified up front - you only know for sure once you actually implement the project. If we wanted to identify all the risks and analyze all the future problems and their potential impact, we would have to try it out in real life; only then could we be certain about the outcomes and confirm or refute our initial expectations.</p><p>What I am trying to express is that uncertainty is part of every project, and when planning one, we need to take that into account. 
So when setting up a project and trying to get a grasp of the costs, timeline, and scope, we must understand we're always dealing with estimates and planning errors. Instead of pretending the error doesn't exist and requiring (or providing) a seemingly "exact and final" project number, I think a more constructive discussion would be about the actual scale of that error. </p><h2>Cognitive biases</h2><p>While the above is generally acceptable to rational and experienced people, why do we still tend to ignore or underestimate the risks at the beginning? I believe it's got something to do with how our minds work.</p><p>There is a phenomenon called the <strong>planning fallacy</strong>, first described by psychologists in the 1970s. In essence, they found that people tend to (vastly) underestimate the time, costs, and risks of actions while (vastly) overestimating the benefits. The researchers measured how likely various subjects were to finish various tasks within the timeframes the subjects themselves had estimated. Interestingly, over half of the subjects needed more time to finish the task than their catastrophic-case estimate.</p><p>The actual thinking processes are even more interesting. Even with past experience of solving a similar problem and a good recollection of it, people tend to think they will be able to solve it quicker this time. People also genuinely admit that their past predictions (which went wrong in the end) were too optimistic, yet they believe that this time they are making a realistic estimate.</p><p>There is also something called the <strong>optimism bias</strong>. Optimism bias makes people believe that they are less likely to experience problems (compared to others). So even though we may have a broad range of experience with something, we tend to think things will evolve in an optimistic way. We tend to put less weight on the problems we have already encountered in similar situations, believing that was "back then", that we are of course more clever now, and that we won't run into any problems this time. People tend to think stuff is going to go well just because they wish for it.</p><p>Another interesting factor is our tendency to take credit for whatever went well in the past, overestimating our own influence, while naturally shifting the reasons for negative events to the outside world - effectively blaming others for what went wrong, or blaming bad luck. This might not be expressed out loud, but it influences our views regardless. It stems from a phenomenon called <strong>egocentric bias</strong>.</p><h2>Combining psychology with projects</h2><p>So it becomes quite obvious that if we combine the lack of relevant experience (a project is always a unique undertaking up to a certain degree, remember?) with the natural tendency to wish for the best, we get a pretty explosive mixture.</p><p>We need to understand that not just the project team itself, but also the stakeholders fall victim to the above-mentioned factors. They also wish for a project to go as planned, and managers rarely like sorting out the problems of a project in trouble that doesn't evolve as expected.</p><p>Yes, I have met managers who naturally expect considerable risks and don't take positive outcomes for granted - managers who understand the uncertainties and will constructively attempt to help a project that slowly deviates from the initial expectations. When we have a manager who addresses risks and issues factually and rationally, it is bliss.</p><p>But what if that's not the case? 
Many managers try to transfer the responsibility for possible problems to the project teams or project managers, while insisting that the project manager must ensure the "project goes as estimated". Usually, their way of supporting a project is to stress how important it is to deliver on time and that the team must ensure it no matter what - and that all the features need to be included, of course.</p><p>Now, when you combine this explosive mix with the fuse of stakeholder pressure, that's when the fireworks start.</p><p>So how can we increase the chance of creating a sane plan and keep the stakeholders realistically informed, while maintaining a reasonably peaceful atmosphere in the development team? I think it helps to gather certain statistics and to stay aware that we are constantly under the influence of cognitive biases. We'll look at this in the next part of this series.</p>#scrum;#agile;#project-management;#release-management
Scrum smells, pt. 7: Wishful planshttps://www.mobileit.cz/Blog/Pages/scrum-smells-7.aspxScrum smells, pt. 7: Wishful plans<p>In the preceding parts of the planning series, we were just preparing our ground. So today, let's put that into practical use and make some qualified predictions.</p><p>You're planning an initial release of a product and you know which features need to be included so that it gets the necessary acceptance from users. Or your stakeholders are asking you how long it will take to get to a certain feature. Or you have a certain budget for a project and you're trying to figure out how much of the backlog the team is capable of delivering for that amount of money.</p><h2>Measuring velocity</h2><p>There is a useful metric commonly used in the agile world called development velocity (or team velocity). It expresses the amount of work that a particular team can do within one sprint on a certain product in a certain environment.</p><p>In essence, it's just a simple sum of all the work that the team is able to do during a sprint. It is important to count only the work that actually got to the state where it meets the definition of done within that particular sprint. So when a team does work worth 50 story points within a sprint, that's the team's velocity in that given sprint.</p><p>Nonetheless, we must expect that there are variables influencing the “final” number. Estimates are not precise, team members may be sick or on vacation, and so on. That means that the velocity will vary between sprints. So as always, the longer we observe and gather data, the more reliable the numbers we get. Longer-term statistical predictions are usually more precise than short-term ones.</p><p>So over time, we can calculate averages. I have found it useful to calculate rolling averages over several past sprints, because the velocity usually evolves. It smooths out local dips or spikes caused, for instance, by several team members being on vacation at the same time. Numbers from the beginning of a project will probably not relate very much to values after two years of the team maturing. The team gets more efficient, makes better estimates, and the benchmark for estimates usually changes somewhat over the course of time.</p><p>That means we will get an average velocity that represents the typical amount of work that a given team is able to do within one sprint. For instance, a team that finished 40, 65, 55, 60, 45, and 50 story points in subsequent sprints will have an average velocity of slightly over 50 story points per sprint over that time period.</p><p>Note: If you're a true geek, you can also calculate the standard deviation and plot a chart out of it. That will give you a probability model.</p><h2>Unexpected work's ratio</h2><p>Now the last factor we need to know in order to create meaningful longer-term plans is the ratio between the known and the unknown work.</p><p>I'll use an example to explain the logic that follows. Let's say we have 10 user stories at the top of our product backlog, worth 200 story points. The development team works on them and after 4 sprints it gets them done. But when retrospectively examining the work that was actually done within those past 4 sprint backlogs, we see that there was a lot of other (unpredicted) stuff done apart from those original 10 stories. If we've been consistent enough and have most of the stuff labeled with sizes, we can now see its total size. 
Let's say 15 unexpected items got done, in a total size of 75 story points.</p><p>That means we now have an additional metric. We can compare the amount of unexpected work to the work expected in the product backlog. In this particular example, our ratio for the past 4 sprints is 75:200, which means that for every expected story point of work, almost 0.4 additional story points appeared that we had not known about 4 sprints ago.</p><p>Again, this ratio evolves, and you get more precise numbers as time passes and the team matures. Just to give you some perspective, on one of our projects we came to a long-term statistic of 0.75 extra story points of unpredictable work for every 1 known story point.</p><p>Having a measurable metric like this also helps when talking to the stakeholders. No one likes to hear that you keep a large buffer just in case; that's hard to grasp and managers will usually try to get rid of it in any planning. A metric derived from experience is much easier to explain and defend.</p><h2>Making predictions</h2><p>So back to the reason why we actually started with all these statistics in the first place. In order to provide some qualified predictions, we need to do some final math.</p><p>With consistent effort, we have gotten to a state where we know the (rough) sizes of the items in our backlog, and therefore we know the amount of known work. We also know the typical portion of unexpected work as a ratio to the known work. And we know the velocity of our team.</p><p>We now add the expected share of unpredicted work to the known work and get the actual amount of work we can expect. Dividing that by the team's velocity gives us the amount of time the team will need to develop all of it.</p><p>Let's demonstrate that with an example: There's a long list of items in the product backlog and you're interested in knowing how long it will take to develop the top 30 of them. There shouldn't be any stories labeled with the “no idea” sizes like “100” or “??” - that would skew the calculation considerably, so we need to make sure such items don't exist there. In our example, we know the 30 stories are worth 360 story points.</p><p>We've observed that our ratio of unpredictable to known work is 0.4:1. So 360 × 0.4 = 144. That means that even though we now see stuff worth 360 points in our list, it is probable that by the time we finish the last item, we will actually do another (of course <i>roughly</i>) 144 points of work that we don't know about yet. So in total, we will have <i>roughly</i> 500 points of work to do.</p><p>Knowing our velocity (let's stick with 50 points per sprint), we divide: 500 / 50 = 10. So we can conclude that finishing the thirtieth item in our list will take us <i>roughly</i> 10 sprints. It might be 8 or it might be 12, depending on the deviations in our velocity and the team's maturity. (The whole calculation is also sketched in the short code example below.)</p><h2>Additional decisions we can take</h2><p>Two common types of questions that we can now answer:</p><ol><li>It's the first of January and we have two-week sprints with the team from the previous example. Are we able to deliver all of the 30 items by March? Definitely not. Are we able to deliver them by December? Absolutely. It seems that they will be dealt with sometime around May or June.</li><li>We know our budget will last for, say, 4.5 months from now. Will we be able to deliver those 30 items? If things go optimistically well, it might be the case, but we should evaluate the risk and decide accordingly.</li></ol>
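<p>For completeness, here is a minimal Python sketch of the arithmetic above. The numbers mirror the examples in this post; the function names, the start date, and the two-week sprint length are only assumptions made for the illustration:</p>
<pre>
from datetime import date, timedelta

def average_velocity(sprint_velocities):
    """Average story points finished per sprint (use a rolling window of recent sprints)."""
    return sum(sprint_velocities) / len(sprint_velocities)

def unexpected_work_ratio(unplanned_points_done, planned_points_done):
    """Extra story points that appeared for every story point we knew about."""
    return unplanned_points_done / planned_points_done

def sprints_needed(known_points, ratio, velocity):
    """Known work plus the expected share of yet-unknown work, divided by velocity."""
    return known_points * (1 + ratio) / velocity

velocity = average_velocity([40, 65, 55, 60, 45, 50])  # 52.5, i.e. slightly over 50
ratio = unexpected_work_ratio(75, 200)                 # 0.375, i.e. almost 0.4
sprints = sprints_needed(360, 0.4, 50)                 # (360 + 144) / 50, roughly 10 sprints

# With two-week sprints starting on the 1st of January, the thirtieth item
# lands roughly in late May - consistent with the "May or June" answer above.
finish = date(2022, 1, 1) + round(sprints) * timedelta(weeks=2)
print(round(velocity, 1), round(ratio, 2), round(sprints), finish)
</pre>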
<p>How can we act upon this? We can now systematically influence the variables in order to increase our chances of fulfilling the plan. A few options out of many:</p><ul><li>We can try to raise the team's velocity by adding a developer, if that's deemed a good idea.</li><li>We can try to simplify some stories in the backlog to make the amount of known work smaller.</li><li>Or we can push back the plan's end date.</li></ul><p>A warning: Some choose to keep everything constant and try to increase the velocity by “motivating” (read: forcing) the team to plan more story points for a sprint. I don't need to explain that this is a dead end which, statistically speaking, most likely ends with something “falling over” from the sprint backlog. It burdens the team with the unnecessary overhead of dealing with the consequences of overcommitment during the sprint, and the work won't get done any faster anyway. Instead, we can review the development tools and processes to see whether there is any room for velocity improvement, but that should be a permanent and continuous activity for any team regardless of plans.</p><h2>Final words</h2><p>Planning projects is never an exact process. But there are certain statistics and metrics that can give us guidelines and help us see how realistic various plans are. We can then distinguish between surefire plans, totally unrealistic plans, and reasonable ones. They can tell us when we should be especially cautious and take action to increase our chances.</p><p>But any prediction will only be as precise as we are transparent and honest with ourselves when gathering the statistics. Trying to obscure anything in order to pretend there are no unforeseen factors or problems will only make the process more unpredictable in the long run.</p><p>So hopefully this article will inspire you to tackle the future in a more comfortable way.</p><br>#scrum;#agile;#project-management;#release-management