Building a chatbot, pt. 3: How to design a conversationhttps://www.mobileit.cz/Blog/Pages/chatbot-3.aspxBuilding a chatbot, pt. 3: How to design a conversation<p>As users, we require a good and practical interface when interacting with any system. It has to be simple enough for us not to get lost in it. At the same time, it has to meet all the needs we might have. The same applies to interactions with a chatbot. The conversation has to be simple enough so the user is not bothered with unnecessary information. On the other hand, it has to cover all possibilities that might occur when processing a user's request. </p><p>In previous articles, I described our chatbot solution that uses checklists as the main element of conversation design. In this article, I would like to share our process of creating a complex conversation from a simple feature request. </p><p>Since the design is specific to our chatbot solution, I will use the same tools for design as we use during real development, that is, pen and paper. Surprisingly, these are still the best tools to use. As a design template, I will use finite state machine diagrams. This is to simplify analysis and also it serves as a template to use in the final code implementation. So what will be shown? Circles and arrows. Every circle represents one piece of information that is successfully processed by the bot. Every arrow represents an input sentence or a condition that is imposed on data already provided during the conversation. </p><h2>Feature request</h2><p>Let's say that we have the following request for a chatbot feature from a customer:</p><p><em>In our company, we have meeting rooms that we use daily. We need to schedule our daily calls and business events like product demonstrations and virtual meetings. We need you to create and design a conversation feature that will enable us to book a room. Also, let's send an email to all colleagues that need to attend a given meeting. It would be nice if we could append an optional message. If any of the colleagues already have another meeting set up, then the organizer needs to see this information beforehand.</em></p><p>To simplify this example, we are going to assume that all other layers necessary for this feature are already prepared (sending emails, checking the schedule of users, etc.). That way we can focus solely on the conversation design.</p><h2>Simple conversation line</h2><p>Let’s start our first conversation design by looking at this:</p><p><em>In our company, we have meeting rooms that we use daily. We need to schedule our daily calls and business events such as product demonstrations and virtual meetings.</em></p><p>So how can we approach this? As designers and analysts, we have to think from the perspective of the chatbot. What information do we need to achieve the goal of this request? We will need to know at least the time of the meeting. There should be an action specification, and to avoid other interpretations we should ask the user for a clarification of the object, a room, in this case. From this we can put together our first checklist:</p><ul><li>Book</li><li>Room</li><li>Time or time range</li></ul><p>Once we have all this information we should be able to move to the next step. In this case, the bot will make a reservation and send an email. Now let's transform this checklist into the conversation diagram:</p> <img alt="simple conversation" src="/Blog/PublishingImages/Articles/chatbot-3-01.png" data-themekey="#" /> <p>We will have 4 steps in conversation. 
In each step there will be an interaction with the user if this conversation was not specified earlier:</p><ol start="1"><li>What would you like to do? -> <em>Book</em></li><li>What would you like to book? -> <em>Room</em></li><li>When would you like to plan your meeting? -> <em>Tomorrow from 9:00 to 11:00</em></li><li>I have created a reservation for you</li></ol><p>Congratulations. We have designed our first conversation. What will happen if a user creates a single line with all specifications like this?</p><p><em>Book me a room for tomorrow from 9:00 to 11:00</em></p><p>In our chatbot solution, we only ask for information that we don't have. In this sentence all the information is present and so the only response from the bot will be number 4:</p><p><em>I have created a reservation for you</em></p><h2>Optional parameter</h2><p>Now let's have a look at the next part of the feature.</p><p><em>We need you to create and design a conversation feature that will enable us to book a room. Also, let's send an email to all colleagues that need to attend a given meeting. It would be nice if we could append an optional message.</em></p><p>For now, let's focus on the <em>optional message</em> specification and deal with <em>adding users</em> later in this article. </p><p>From the previous iteration our checklist looks like this:</p><ul><li>Book</li><li>Room</li><li>Time or time range</li></ul><p>After time specification we should give the user an option to add a message to his email. A simple question should be enough. How about:</p><p><em>Do you want to add a custom message for your colleagues?</em></p><p>If the answer is yes, then we ask the user for the message itself. Otherwise, we move on to the reservation. We need to enhance our previous design a bit:</p><ul><li>Book</li><li>Room</li><li>Time or time range</li><li>Do you want to add a custom message for your colleagues?</li><ul><li>Yes - add a message</li><li>No - no message</li></ul></ul><p>At this point we have multiple possibilities on how to convert this checklist into a diagram. User interaction should be always the same but as we know, there are multiple ways to design a system. This is one of the possible approaches:</p> <img alt="optional parameter" src="/Blog/PublishingImages/Articles/chatbot-3-02.png" data-themekey="#" /> <p>We’ve added a few steps to the previous diagram. The first 3 are the same. At the fourth one, we will ask for additional users, but for now, we can assume that this is taken care of, and we can focus on step 5. We can assume that the conversation from the chatbot point of view can be prepared like this:</p><ol start="5"><li>Would you like to add a custom message for your colleagues?</li><ol type="a" start="a"><li><em>Yes</em> - Ok, what would you like to say?</li><li><em>No</em> - Alright (and continue with I have created a reservation for you)</li><li><em>“users message”</em> - Great, I will make sure your colleagues get this. (and go back to step 6)</li></ol><li>I have created a reservation for you and your colleagues.</li></ol><p>There is a reason for designing a conversation in this way. This conversation branch can be reused. We can use the same branch when adjusting this meeting. 
Or in another conversation where we need a custom message.</p><p>Now let’s see what the conversation for this iteration can look like:</p><p> <strong style="color:#381457;">User</strong> : <em>Book me a room for tomorrow from 9:00 to 11:00</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>Would you like to add some of your colleagues?</em><br> <strong style="color:#381457;">User</strong> : <em>Yes, add Michael and Jordan</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>Alright. Would you like to add a custom message for your colleagues?</em><br> <strong style="color:#381457;">User</strong> : <em>Yes</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>Ok, what would you like to say?</em><br> <strong style="color:#381457;">User</strong> : <em>Prepare your ideas for teambuilding!</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>Great, I will make sure your colleagues get this. I have created a reservation for you and your colleagues</em><br> </p><h2>Complex enhancement and iteration</h2><p>In the last part of our design, we should introduce a way to add multiple other users to the meeting. Let's have a look at the relevant part of the request:</p><p><em>Also, let's send an email to all colleagues that need to attend a given meeting. It would be nice if we could append an optional message. If any of the colleagues already have another meeting set up, then the organizer needs to see this information beforehand.</em></p><p>Again we find ourselves at a point where there are multiple possible approaches to design a conversation that would satisfy this request.</p><p>I will present one of the possible approaches. The checklist for it will look like this:</p><ol start="1"><li>What would you like to do?</li><li>What would you like to book?</li><li>When would you like to plan your meeting?</li><li>Would you like to add your colleagues to this meeting? Who is it going to be? (<em>list of users</em> / <em>no</em>)</li><ol type="a" start="a"><li>Great that makes “N” of your colleagues, would you like to add more users? (<em>yes</em> / <em>no</em>)</li><li><em>Yes</em> - Alright, who’s next?</li><li><em>No</em> - Alright</li><li>Consider it done. (and back to point a) - Great that makes “N” of you...</li><li><u>Some of the users are already occupied</u> - I am sorry, but this / these user(s) already has / have another meeting scheduled. Would you like to create a meeting without them or invite them anyway? (<em>remove them</em> /<em>invite them anyway</em>)</li><li>Ok, that means {list of users} will participate</li><li>Alright, I will send an invitation to all the users you’ve listed</li><li><u>The current user is not present in the listed users</u> - Are you going to participate as well? (<em>yes</em> /<em>no</em>)</li><li>Ok, I will add you to the list of participants</li><li>Alright</li></ol><li>Would you like to add a custom message for your colleagues? (already done)</li><li>I have created a reservation for you</li></ol><p>So what does this checklist tell us? We start with a straight question - if there are going to be additional users. If not, then we direct the user to another part of the conversation - a custom message. For an affirmative answer (list of users should be also considered as an affirmative answer) we will move into additional questions. Here we can iterate and ask the user if the list of his colleagues is final. If so, there should be some validations and quality of life questions. 
Those can consist of checking given users if they are in our system. Or we can ask the user if he also wants to participate if his name is not already on the list. Also, this is the place for checking other users’ schedules for conflict meetings. Once everything is sorted out and the user is able to navigate all the possibilities, we should wrap this part of the conversation and move into a custom message part. And that part is already sorted out, so we should be done with the design.</p><p>Now let’s have a look at the diagram of this approach. For simplicity, it only shows the part for adding users.</p> <img alt="Complex enhancement and iteration" src="/Blog/PublishingImages/Articles/chatbot-3-03.png" data-themekey="#" /> <p>From a simple and straight conversation, we’ve arrived at a really complex conversation full of questions from the bot and required answers from the user. But every possibility is covered and the user will have a list of all participants eventually. The first part (steps <em>a</em>, <em>b</em>, and <em>c</em>) deals with adding users. Then we process both conditions for conflicting meetings and current user participation. After that, we return a final list of users back to step 4 and we are done.</p><p>Even though this design is quite elaborate, the user conversation should be natural:</p><p> <strong style="color:#381457;">User</strong> : <em>Book me a room for tomorrow from 9:00 to 11:00</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>Would you like to add some of your colleagues?</em><br> <strong style="color:#381457;">User</strong> : <em>Yes, add Michael and Jordan</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>Great, that makes 2 of your colleagues, would you like to add someone else?</em><br> <strong style="color:#381457;">User</strong> : <em>Yes</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>Alright, who’s next?</em><br> <strong style="color:#381457;">User</strong> : <em>Alfred and Molly</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>Consider it done. Great, that makes 4 of your colleagues, would you like to add someone else?</em><br> <strong style="color:#381457;">User</strong> : <em>No</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>I am sorry, but Jordan and Molly already have another meeting scheduled at this time. Would you like to create a meeting without them or invite them anyway?</em><br> <strong style="color:#381457;">User</strong> : <em>Invite them anyway</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>Ok. Are you going to participate as well?</em><br> <strong style="color:#381457;">User</strong> : <em>Yes.</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>Ok, I will add you to the list of participants. Would you like to add a custom message for your colleagues?</em><br> <strong style="color:#381457;">User</strong> : <em>Yes, please.</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>Ok, what would you like to say?</em><br> <strong style="color:#381457;">User</strong> : <em>Prepare your ideas for teambuilding!</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>Great, I will make sure your colleagues get this. I have created a reservation for you and your colleagues</em><br> </p><h2>Conclusion</h2><p>From a feature request to a solution - that should be a goal of every application developer. We took this vision and applied it to a chatbot solution. 
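 To make the result concrete, the diagrams above map quite naturally onto code. The following is a minimal, purely illustrative sketch of the first, simple conversation expressed as a finite state machine in Kotlin; the names are hypothetical and this is not our production implementation:</p><pre><code class="kotlin hljs">// Each state corresponds to one circle in the diagram: a piece of information
// the bot still needs, or the final confirmation. Illustrative sketch only.
sealed class BookingState {
    object AwaitingAction : BookingState()                    // "What would you like to do?"
    object AwaitingObject : BookingState()                    // "What would you like to book?"
    object AwaitingTime : BookingState()                      // "When would you like to plan your meeting?"
    data class Confirmed(val time: String) : BookingState()   // "I have created a reservation for you"
}

// Each transition corresponds to one arrow: current state + user input -> next state.
// A real implementation would extract several checklist items from a single sentence.
fun next(state: BookingState, input: String): BookingState = when (state) {
    BookingState.AwaitingAction -> if (input.contains("book", ignoreCase = true)) BookingState.AwaitingObject else state
    BookingState.AwaitingObject -> if (input.contains("room", ignoreCase = true)) BookingState.AwaitingTime else state
    BookingState.AwaitingTime   -> BookingState.Confirmed(time = input)
    is BookingState.Confirmed   -> state
}
</code></pre>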
<p>In essence, almost any issue that can be solved in a conversation with a person can also be converted into a conversation with a chatbot that handles it just as well, no matter the complexity or the subject. There are still cases that we are not able to fit into an automated conversation, but that is a topic for another article.</p><p>From a simple conversation to a complex one, there is always a design that lets users complete their task. For the best user experience, we should pay close attention to their requests and adjustments.</p> #chatbot;#ai;#neural-network
So you want to create a design system, pt. 4: Layouts & dimensionshttps://www.mobileit.cz/Blog/Pages/design-system-4.aspxSo you want to create a design system, pt. 4: Layouts & dimensions<p>People today can choose between an incredible number of mobile devices that vary enormously in display size and resolution. How can you ensure your app looks and handles great on all of them? Let’s talk about dimensions, positions, and layouts, and how they fit into design systems!</p><h2>Absolute nonsense</h2><p>Some very popular design tools do this:</p> <img alt="Sample mobile screen with hardcoded dimensions" src="/Blog/PublishingImages/Articles/design-system-4-01.png" data-themekey="#" /> <p>It all looks nice and thoroughly specified, doesn't it? Wrong! Dimensions specified in absolute values have long been unusable for several reasons:</p><ul><li>Mobile device displays vary widely in terms of physical pixels.</li><li>Mobile devices also vary widely in terms of absolute physical display size—many phones are now approaching 7 inches, while at the other end of the spectrum devices around 5 inches are more than common. </li><li>The aspect ratios of displays also vary widely—today we can commonly see ratios such as 16:9, 18:9, 18.5:9, 19:9, 19.5:19, 20:9, 21:9, and this is by no means a complete list. </li><li>Since the advent of retina-like displays, physical pixels have become more or less irrelevant, and instead, we have to consider units independent of actual display density. </li><li>The amount of space your application gets on the display for rendering itself may vary depending on the presence, position, and size of system bars or display cutouts. </li><li>The operating system may also feature some sort of split-screen mode; and don’t get me started about foldables, large screen devices, and the like. </li></ul> <img alt="Different screen aspect ratios" src="/Blog/PublishingImages/Articles/design-system-4-02.png" data-themekey="#" /> <p>When designing screens for mobile devices, this means that you know literally <em>nothing</em> about the display, making absolute dimensions totally pointless (this also applies to platforms with a limited number of well-known models such as iOS—who knows what displays iPhones will have next year?). Hopefully, design tools will start to take this into account, but until that happens, we need to help ourselves in other ways.</p><h2>Units united</h2><p>So if physical pixels are no longer usable as a unit of dimension, what to use instead? Well, each platform has its own way of specifying dimensions independent of physical resolution, and unfortunately, you have to account for all of them in your design system.</p><p>For example, Android uses two units: scalable pixels (SP) for text sizes and density-independent pixels (DP) for everything else (actually, even that isn’t entirely true—as far as raster bitmaps are concerned, good old pixels are sometimes used as well, and Android letter-spacing units are totally weird). Moreover, SPs and DPs are converted to the same physical size by default, but this need not be the case if the user so chooses (for accessibility purposes).</p><p>Confused? This is perfectly understandable, but if the design system is to be usable, there is no choice but to learn how each platform handles dimensions, and then use those ways everywhere in the specs. 
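 On Android, for instance, that means the spec states text sizes in SP and everything else in DP, and developers can use those values directly. Below is a minimal, illustrative Jetpack Compose sketch (Compose is just one possible Android UI toolkit here, and the concrete values are made up for the example):</p><pre><code class="kotlin hljs">import androidx.compose.foundation.layout.padding
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp
import androidx.compose.ui.unit.sp

// Text size in scalable pixels (SP), spacing in density-independent pixels (DP),
// exactly as a designer would state them in the spec.
@Composable
fun GreetingLabel() {
    Text(
        text = "Hello!",
        fontSize = 16.sp,
        modifier = Modifier.padding(16.dp)
    )
}
</code></pre><p>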
The second best option is to use some abstract universal units and provide a conversion formula for each platform, but this way is more error-prone than specifying platform-native units that developers can use directly.</p><h2>Get into position</h2><p>Even when using platform-specific units, absolute positioning is still not feasible. So how do you adapt your layouts to all possible screen configurations?</p><p>The answer is relative and responsive positioning. Two things need to be specified for each component: Its dimensions and its relative position to some other component, its container, or screen. Together, these parameters form constraints that are used to unambiguously and responsively specify the entire layout, regardless of screen size, aspect ratio, etc.</p><p>Size constraints can be specified as:</p><ul><li>wrapping or “hugging” the content of the component</li><li>occupying all (or a proportion) of the available space</li><li>fixed values, but only in specific cases (e.g. icons)</li></ul><p>Some components may have additional options. For example, a component for displaying text can limit its height to the maximum number of lines displayed, or a component for displaying images can have a fixed aspect ratio.</p><p>It might be useful to add auxiliary parameters to these constraints, such as enforcing minimum or maximum dimensions, but then you must take care to prevent conflicts with other constraints.</p><p>The constraints for width and height are independent and can freely combine the above options, but only so that the resulting size is always unambiguous.</p><p>Relative positioning is quite simple: Each component must be anchored somehow in both horizontal and vertical directions to:</p><ul><li>another component</li><li>the container in which it is placed</li><li>the screen (this option must be handled with care)</li></ul> <img alt="Components and constraints" src="/Blog/PublishingImages/Articles/design-system-4-03.png" data-themekey="#" /> <p>Since the application has no control over how large a display will be available (and what its aspect ratio will be), nor how large the dynamic content will be (e.g. image size or text length), it is always necessary to specify a strategy for what to do if all the content does not fit. The basic options are:</p><ul><li>to make the component scrollable (remember that there is also a horizontal direction, but horizontal scrolling is usually much less intuitive) </li><li>limit the text by the number of characters or lines—in this case, it is necessary to also specify the indication of overflow (ellipsis, some kind of fade effect...) 
</li><li>crop the image to a fixed dimension or predefined aspect ratio (including the specification of whether to stretch or fit the image into the resulting container) </li></ul><h2>Everything’s not relative</h2><p>While the dimensions and positions of components should be specified as relative where possible for the design to be responsive, there are still situations where we need to use absolute values:</p><ul><li>spacings between components (or their margins)</li><li>container paddings</li><li>corner radii</li><li>dividers</li><li>elevation and/or z-index (depending on your platform, these may or may not be two different things) </li><li>small spacings used to visually align the parts encapsulated inside reusable components (e.g., space between icon and text inside a button) </li><li>parts of the typography specification (e.g., line-height; although in some cases these can be specified relatively as well) </li></ul><p>For the first five cases, it is absolutely essential (as in the case of <a href="/Blog/Pages/design-system-2.aspx">colours</a>) to introduce semantic dimension constants into the design system and then use them exclusively in all designs. You don't want to hardcode these values because it's easy to make a mistake when using them, and you also want to use them according to their purpose, not their value (which means there's no harm in having multiple semantic constants resolved to the same size).</p><p>So how to name these semantic constants? The first part is simple—it should always express the primary purpose (<span class="pre-inline">spacing</span>, <span class="pre-inline">padding</span>, <span class="pre-inline">cornerRadius</span>, <span class="pre-inline">elevation</span>, etc.), possibly combined with secondary usage (e.g., <span class="pre-inline">padding.screen</span>, <span class="pre-inline">padding.dialog</span>, etc.). 
In some cases, you’ll also need several size variations for a given purpose, so it's good to have a system for naming these variations as well, for example:</p><ul><li>size adjectives like <span class="pre-inline">tiny</span>, <span class="pre-inline">small</span>, <span class="pre-inline">normal</span>, <span class="pre-inline">medium</span>, <span class="pre-inline">large</span>, <span class="pre-inline">huge</span>—these work well, but don’t go overboard with quirky names like <span class="pre-inline">tiniest</span>, <span class="pre-inline">reallyReallyLarge</span>, <span class="pre-inline">humongous</span>; you need to be sure that the order from smallest to largest is always absolutely clear </li><li>T-shirt sizes like XS, S, M, L, XL, XXL—their order is unambiguous, but if you find out later that you need another value between, say, L and XL, you'll have a problem (this also applies to the first option to a certain degree) </li><li>clearly (this is important) dimensionless numbers without semantics like 100, 150, 200, 400, 1200—have the advantage of allowing you any number of values, you can always squeeze a new one between any two, and their order is unambiguous, but a problem will occur if this number is confused with the actual value, which is why I would recommend this only as a last resort (needless to say, the name of a constant must never contain its value) </li></ul><p>Putting it all together, your design system can define set of dimensions such as</p><pre> <code class="kotlin hljs">dimension.padding.screen.normal dimension.padding.screen.large dimension.padding.dialog.normal dimension.spacing.small dimension.spacing.normal dimension.spacing.large dimension.spacing.huge dimension.cornerRadius.normal dimension.cornerRadius.large dimension.elevation.none dimension.elevation.normal </code></pre><p>What actual values should these constants take? This is where it's also good to have a system so that your UI has a consistent rhythm. Different platforms usually have a grid defined as a multiple of some value (e.g., on Android it is 8 DP), and it's good to stick to that (so for example <span class="pre-inline">dimension.spacing.normal</span> might be 16 DP, <span class="pre-inline">dimension.spacing.large</span> 24 DP and so on) because your app always shares the screen with at least a part of the system UI, and ignoring the default grid might make your app feel subconsciously “wrong”.</p><p>And finally, what about the last two bullet points—dimensions used to visually tweak space inside encapsulated components, or in text styles? In this case (and only in this case!) I dare to say: hardcode them. Yes, I know, after all the talk about semantic constants, isolating values in one place and all that, this is unexpected, but I have two good reasons for that: </p><ol><li>These dimensions are implementation details of isolated, otherwise totally encapsulated components or text styles. They are completely hidden, not used externally anywhere, and not reused nor reusable. </li><li>Especially in the case of smaller components, these values are completely arbitrary, chosen based on visual appeal, and don't have to follow a predefined grid at all (e.g., if your button looks best with vertical padding of 3 DP, just use that value). 
</li></ol><h2>Putting it all together</h2><p>So let's apply the techniques we mentioned above to the example in the first figure:</p> <img alt="Sample mobile screens with dimension properties" src="/Blog/PublishingImages/Articles/design-system-4-04.png" data-themekey="#" /> <p>This is much better! Systematic and responsive, a UI specified this way is much easier to implement and use.</p><p>There are other things to think about:</p><ul><li>With interactive elements like buttons, the platform's minimum tap target size <em>in both dimensions</em> must be respected at all times. This includes not-so-obvious things like hyperlinks in paragraphs of text and the like. </li><li>If your app is running in fullscreen mode, you need to be <em>very</em> aware of and careful about display cutouts, the system UI overlaying your app, etc. </li><li>Modern phones are getting bigger and bigger, and if you use such a device single-handedly, you can't reach the whole screen with your fingers anymore. In this case, the screen is effectively divided into several zones, which differ substantially in how well they can be interacted with. Moreover, often the place that is the most prominent when visually scanning the screen (the top left corner) is also the hardest to reach physically, and vice versa. You have to take this very much into account and design a suitable compromise. And don’t forget about left-handed people! <img alt="Regions on phone screens reachable with a thumb" src="/Blog/PublishingImages/Articles/design-system-4-05.png" data-themekey="#" /> </li><li>Gesture implementation is another can of worms that we won't go into here. Even if your app doesn't support any custom gestures, current mobile platforms use system gestures that can interfere with your app. </li><li>Another major topic (for another blog post maybe) is support for foldable and large displays. This is where things like breakpoints, multi-panel screens, and the like come into play, and it's a whole different ballgame. </li></ul><h2>Design your system, systemize your designs</h2><p>This concludes our series on design systems. I hope it has helped to make the design and implementation of your mobile apps more consistent, easier, and more efficient, and who knows, maybe even a little more fun.</p>#design-system;#ui;#ux;#development;#android;#iOS
Relative Estimateshttps://www.mobileit.cz/Blog/Pages/relative-estimates.aspxRelative Estimates<p>​​​​ In my past articles related to <a href="/Blog/Pages/scrum-smells-6.aspx">project</a> and <a href="/Blog/Pages/scrum-smells-4.aspx">sprint planning</a>, we touched on the concept of relative estimates. Those articles were focused more on the planning aspect and the usage of the estimates and less on the actual process of estimation. So let's talk about estimation techniques my colleagues and I found useful. </p><h2>Exact estimate</h2><p> I already touched on this <a href="/Blog/Pages/scrum-smells-5.aspx">before</a>, there is a huge misunderstanding in what makes a feature development estimate exact. People intuitively think that an exact estimate is a precise number with no tolerance. Something like 23.5 man-days of work. Not a tad more or less. </p><p> How much can we trust that number? I think we all feel that not much unless we know more about how the estimate was created. What precise information did the estimator base his estimate on? What assumptions did he make about future progress? What risks did he consider? What experience does he have with similar tasks? </p><p> We use this knowledge to make our own assessment on how likely it is that the job's duration will vary from the estimate. What we do is make our own estimation of a probable range, where we feel the real task's duration is going to be. </p><p> It is quite a paradoxical situation, isn't it? We force someone to come up with precise numbers so that we can do our own probability model around it. Wouldn't it be much more useful for the estimate to consider this probability in the first place? </p><p> That also means that (in my world) a task estimate is never an exact number, but rather a qualified prediction of the range of probability in which a certain job’s duration is going to land. The more experience with similar tasks the estimator has, the narrower the range is going to be. A routine task that one has already done hundreds of times can be estimated with a very narrow range. </p><p> But even with a narrow range, there are always variables. You might be distracted by someone calling you. You mistype something and have to spend time figuring it out. Even though those variables are quite small and will not likely alter the job's duration by an order of magnitude, it still makes an absolutely precise estimate impossible. </p><h2>Linear and non-linear estimates</h2><p> On top of all that, people are generally very bad at estimating linear numbers due to a variety of cognitive biases. I mentioned some of them here [link: Wishful plans - Planning fallacies]. So (not just) from our experience, we proved that it is generally better to do relative estimates. </p><p> What is it? Basically, you are comparing future tasks against the ones that you already have experience with. You are trying to figure out if a given task (or user story or job or anything else for that matter) is going to be more, less, or similarly challenging compared to a set benchmark. The more the complexity increases, the more unknowns, and risks there generally are. That is the reason why relative estimates use non-linear scales. </p><p> One of the well-known scales is the pseudo-Fibonacci numerical series, which usually goes like 0, 1, 2, 3, 5, 8, 13, 20, 40, 100. An alternative would be T-Shirt sizes (e.g. XS, S, M, L, XL, XXL). The point is that the more you move up the scale, the bigger is the increase in difference from the size below. 
That takes out a lot of the painful (and mostly wildly inaccurate) decision-making from the process. You're not arguing about if an item should be sized 21 or 22. You just choose a value from the list. </p><h2>Planning poker</h2><p> We had a good experience with playing planning poker. Planning poker is a process in which the development team discusses aspects of a backlog item and then each developer makes up his mind as to how “big” that item is on the given scale (e.g. the pseudo-Fibonacci numbers). When everyone is finished, all developers present their estimates simultaneously to minimize any mutual influence. </p><p> A common practice is that everyone has a deck of cards with size values. When ready, a developer will put his card of choice on the table, card facing down. Once everyone has chosen his card, all of the cards are presented. </p><p> Now each developer comments on his choice. Why did he or she choose that value? We found it helpful that everyone answers at least the following questions: </p><ul><li>What are similarly complex backlog items that the team has already done in the past?</li><li>What makes the complexity similar to such items?</li><li>What makes the estimated item more complex than already done items, which were labeled with a complexity smaller by one size degree?</li><li>What makes the estimated item less complex than already done items, which were labeled with a complexity higher by one size degree?</li></ul><p> A few typical situations can arise. </p><h3>1) Similar estimates</h3><p> For a matured team and well-prepared backlog items, this is a swift process, where all the individual estimates are fairly similar, not varying much. The team can then discuss and decide together as to what value it will agree on. </p><h3>2) An outlying individual estimate</h3><p> Another situation is that all individual estimates are similar, but there is one or two, which is completely different. This might have several causes. Either that outlying individual has a good idea, that no-one has figured out or he misunderstands the backlog item itself. Or he has not realized all the technical implications of the development of that particular item. Or he sees a potential problem that the others overlook. </p><p> In such situations we usually took the following approach. People with lower estimates explain the work they expect to be done. Then the developers with higher estimates state the additional work they think needs to be done in comparison to the colleagues with lower estimates. By doing this, the difference in their assumptions can be identified and now it is up to the team to decide if that difference is actually necessary work. </p><p> After the discussion is finished, the round of planning poker is repeated. Usually, the results are now closer to the first case. </p><h3>3) All estimates vary greatly</h3><p> It can also happen, that there is no obviously prevailing complexity value. All the estimates are scattered across the scale. This usually happens, when there is a misunderstanding in what is actually a backlog item's purpose and its business approach. In essence, one developer imagines a simple user function and another sees a sophisticated mechanism that is required. </p><p> This is often a symptom of a poorly groomed backlog that lacks mutual understanding among the devs. In this case, it is usually necessary to review the actual backlog item's description and goal and discuss it with the product owner from scratch. The estimation process also needs to be repeated. 
</p><p> Alternatively, this can also happen to new teams that have little technical or business experience with their product in the early stages of development. </p><h2>It's a learning process</h2><p> Each product is unique, each project is unique, each development environment is different. That means the development team establishes its perception of complexity references anew when it starts a project. It is also a constant process of re-calibration. A few backlog items that used to serve as a benchmark reference size at the beginning of a project usually need to be exchanged for something else later on. The perception of scale shifts over time. </p><p> The team evolves and gains experience. That means the team members need to revisit past backlog items and ask themselves whether they would have estimated such an item differently with the experience they have now. It is also useful, at the end of a sprint, to review items that turned out to be far easier or far more difficult than the team initially expected. </p><p> What caused the difference? Is there any pattern we can observe and be cautious about in the future? For instance, our experience from many projects shows that work involving integrations with external systems usually turns out to be far more difficult than the team anticipates. So whenever the devs see such a backlog item, the team knows it needs to think really carefully about what could go wrong. </p><h2>Don't forget the purpose</h2><p> In individual cases, the team will sometimes slightly overestimate and sometimes slightly underestimate. And sometimes estimates are going to be completely off. But by self-calibrating through retrospective practices and the averaging effect over many backlog items, the numbers can usually be relied on in the long run. </p><p> Always bear in mind that the objective of estimating backlog items is to produce a reasonably accurate prediction of the future with a reasonable amount of effort invested. This needs to be done as honestly as possible given the current circumstances. We won't know the future better unless we actually do the work we're estimating. </p><br><br>#scrum;#agile;#project-management;#release-management
So you want to create a design system, pt. 3: Typographyhttps://www.mobileit.cz/Blog/Pages/design-system-3.aspxSo you want to create a design system, pt. 3: Typography<p>Long gone are the days when apps could only use a single system font, bold and italic at most, and that was it. Typography is now a significant part of product identity, but how do you apply it systematically?</p><p>As with colors, the most important thing is to avoid hardcoding type-specific values in the design tool and in the code. This means that you need to define a set of text styles and use them consistently wherever text appears in your UI. </p><p>Both major mobile platforms provide default text styles for different situations such as headings, subheadings, paragraphs, captions, or labels. However, these styles don't match across platforms, and it's also likely that your product needs won't fit neatly into these preset categories. In that case, rather than combining built-in and custom styles, it's easier to define your own styles for everything and ignore the built-in ones.</p><h2>Elements of style</h2><p>So what does the text style contain? To be on the safe side and avoid surprises caused by built-in components and their default values, you should always define at least the following properties:</p><ul><li><strong>Typeface:</strong> The font family you want to use. If appropriate on the platform, it is a good idea to specify a generic (or fall-back) font family too, such as serif, sans-serif, monospace, etc. </li><li><strong>Weight:</strong> Modern font families have a much wider range of weights than just regular and bold, not to mention variable fonts. The weight is usually expressed as a number or a name. Here is a table of the most common values: <img alt="Table of font weights" src="/Blog/PublishingImages/Articles/design-system-3-01.png" data-themekey="#" /> </li><li><strong>Style:</strong> Normal or italics, that's more or less it.</li><li><strong>Width:</strong> The width of each letter. Font families with variable widths are not quite common. Examples include: <img alt="Table of font widths" src="/Blog/PublishingImages/Articles/design-system-3-02.png" data-themekey="#" /> </li><li><strong>Case:</strong> Uppercase, lowercase, or small caps.</li><li><strong>Text decoration:</strong> Overline, underline, or strikethrough text.</li><li><strong>Size:</strong> The height of the characters. This is where it starts to get tricky, see implementation details in the next section. </li><li><strong>Letter spacing (tracking):</strong> The space between the characters. Zero is the default value specified in the font family, but it is often advisable to use a slightly higher positive value to improve readability (especially for paragraph styles with smaller text size), or a slightly negative value to better visually balance large headings. </li><li><strong>Line height:</strong> Vertical space between text lines, measured from baseline to baseline. Baseline is the invisible line on which each character sits, not including downstrokes (like in lowercase letters <em>p</em> or <em>g</em>).<br>As with letter spacing, each font family has a default value that may be adjusted for readability. </li><li><strong>Paragraph spacing:</strong> Vertical space between paragraphs.</li><li><strong>Paragraph alignment:</strong> Left, right, center, or block. 
Be careful with block alignment, as the legibility and visual quality of the resulting typesetting depends a lot on the quality of the algorithm used (which usually cannot be changed), including hyphenation algorithms for different languages. </li><li><strong>Text direction:</strong> If your application supports languages that are written from right to left, you often need to adjust layouts as well, and consistently use layout terms that are independent of text direction, such as start and end instead of left and right. </li><li><strong>Color:</strong> Should color be directly part of the text style specification? A slightly tricky question, both options have their pros and cons. However, specifying a default color probably won't do any harm, so I’d generally recommend including color in text style specification. </li></ul><h2>Practical type system</h2><p>So what text styles does a typical application need?</p><p>First of all, it is a good idea to distinguish between text styles for a text that stands on its own, as “top-level” content (headings, paragraphs, labels, captions, notes, etc.), and text styles for components that happen to contain text (buttons, menus, toolbars, input fields, tabs, etc.).</p><p>Some very well-known design systems don't distinguish between those usages (or, on the contrary, mix those together), but this is unfortunate—it often happens that in such systems a change of paragraph style unintentionally results in a change of the text style in some component like button or input field, which is something you usually don't want.</p><h2>Content text styles</h2><p>As with colors, it's a good idea to hide content text styles behind semantic names. The choice is completely yours, but usually, you will need at least several levels of headings and subheadings, one or two styles for regular text in paragraphs, accompanying styles such as captions or notes, and maybe even some styles for things like list items, etc.</p><p>If your app's domain is so specific that it's worth creating styles for concrete elements (e.g. cart items in an e-shop app, or waypoints in a navigation app), then definitely do so, even if those styles are visually very similar or even the same as the general-purpose styles. It's important to be able to change text styles that have common semantics (which means they change together, for the same reason), not just a coincidentally common look. </p> <img alt="Content text styles example" src="/Blog/PublishingImages/Articles/design-system-3-03.png" data-themekey="#" /> <h2>Component text styles</h2><p>What about component text styles? Most importantly, they should be considered private implementation details of the components, meaning they mustn’t be used in other components or stand-alone text.</p><p>They can only be reused between a group of tightly knit components, e.g., it’s fine to have a common text style for primary, secondary, outlined, or text buttons, but it’s a bad idea to share this style with unrelated components like tabs or chips—chances are, some of these components will change independently (at a different time, or for a different reason, or both), causing problems in unrelated places.</p><p>Beware—many platforms support some kind of inheritance for text styles, meaning you can derive a new style from an existing one by adding or overriding properties. 
Although this feature looks appealing because it can save implementation effort, when used incorrectly it leads to unwanted coupling, similar to the reuse of styles in unrelated components.</p><p>Never misuse inheritance as a tool to share implementation. Inheritance only works when it creates an “is-a” relationship—e.g., a secondary button certainly is a kind of button, but a tab is probably not a kind of button, and thus its text style should be kept separate.</p> <img alt="Component text styles example" src="/Blog/PublishingImages/Articles/design-system-3-04.png" data-themekey="#" /> <h2>Technical difficulties</h2><p>You may often encounter some complications during text style specification and implementation:</p><ul><li>Size units are a minefield. There are a large number of units and each platform uses its own specific ones. Sometimes the platform may even use different units for different things, which can be further complicated when the platform must support displays with different physical resolutions.<br>The system design specification needs to state the values using the units appropriate for each platform, or at least provide a conversion formula. </li><li>Be careful, not all fonts provide all the weights or styles. Some platforms then try to interpolate the weight when asked for a value that the font does not contain, and the result is usually obviously fake, and visually pretty bad; the same can happen with italics. <img alt="Real and fake italics and bold" src="/Blog/PublishingImages/Articles/design-system-3-05.png" data-themekey="#" /> </li><li>Since the text in the application can come from a variety of sources, and some have built-in formatting (e.g. when displaying HTML, Markdown, etc. with bold and italics applied), the formatting may interfere with the specified weight or text style. In this case, you need to either remove the formatting first, or specify what bold and italics actually mean for each text style. </li><li>You need to be absolutely sure that the font family you choose contains all the characters from all the languages your application supports. It's not just a problem of seeing embarrassing "tofu" blocks in place of missing glyphs, but also of rendering diacritics correctly. <img alt="Missing and invalid glyphs" src="/Blog/PublishingImages/Articles/design-system-3-06.png" data-themekey="#" /> </li><li>Especially if your application uses multiple font families, each with multiple weights and styles, pay attention to size of the font files you are packaging with the application. Higher download sizes increase customer churn. </li></ul><p>Other problems you may encounter include:</p><ul><li><a href="http://www.ravi.io/language-word-lengths">Languages vary greatly in word length.</a> You have to take this into account especially when designing layouts, because what fits on one line in one language may need two or more lines in another. You have to define what should happen in that case, e.g., words are replaced, ellipsized, hyphenated, text size is reduced… </li><li>Especially if you are designing mobile apps, you necessarily need to see all the text on an actual phone screen. Checking on a computer or laptop display is not enough, because it may give you a distorted impression of the size and, above all, the readability of the text.<br>Also remember that many people still use mobile devices with lower physical resolution, which has a big impact on readability, especially the text is small. 
It may often be appropriate to use other font families that render better on such devices. <img alt="Displays with different resolutions" src="/Blog/PublishingImages/Articles/design-system-3-07.png" data-themekey="#" /> </li></ul><h2>Headline2</h2><p>Typography plays a crucial aesthetic and practical role in a design system. At the same time, it is important to create a robust yet flexible set of text styles to support different functions of text in digital products.</p><p>Next time we'll look at one more important part of design systems—the grid.</p>#design-system;#ui;#ux;#development;#android;#iOS
Our first year with Kotlin Multiplatformhttps://www.mobileit.cz/Blog/Pages/kotlin-multiplatform-first-year.aspxOur first year with Kotlin Multiplatform<p>Kotlin Multiplatform allows you to use the same programming language on mobile, desktop, web, backend, IoT devices, and more. There is a large number of possibilities and a steadily growing number of real-world applications.</p><p>Kotlin Multiplatform was introduced at the end of 2018 and many teams have started adopting it right away, first only on smaller parts of their projects, which later grew over time.</p><p>In the case of the Cleverlance mobile team, we took a different approach in adopting Multiplatform technology. From the beginning, we believed Kotlin Multiplatform was the right approach to share business and application logic between Android and iOS mobile platforms. We've been following cross-platform and multi-platform technologies for a long time, but prior to the advent of Kotlin Multiplatform, none had convinced us of their long-term sustainability in an app agency environment like ours. It's not just about the functionality of the technology itself. What is important is the maturity of the whole platform, the community around it, and last but not least the good availability of experts in the subject matter, as my colleague summarized in his article <a href="/Blog/Pages/choosing-mobile-app-technology.aspx">Questions to ask before choosing mobile app technology</a>.</p><h2>(Too) early days</h2><p>As with any new technology we consider for use in our production applications, we set up a small project with Kotlin Multiplatform and tried to implement elementary tasks like sharing pure Kotlin code between Android and iOS, sharing network services, database access, or offloading work to a background thread.</p><p>At the same time, we started testing the right application architecture and establishing which parts of the application are suitable to share between platforms and how to do it properly. We addressed questions about the structure and rules for building shared code API, whether it's possible to call Kotlin Coroutines from Swift code, etc. And last but not least, we tested the suitable structure of the project and created a build pipeline, at the end of which an Android and iOS app package will be created.</p><p>In the beginning, the work went rather slowly. We built a list of problems or unresolved issues that prevented us from using the technology in our production apps. However, Kotlin Multiplatform has evolved very dynamically and we have to really appreciate the response time of its authors when bugs or shortcomings reported by us were resolved and a new version released in a matter of weeks.</p><p>During 2020, our demo project was gradually becoming usable, the list of unresolved issues was getting shorter, and we were eagerly awaiting the stable release of Kotlin 1.4, which promised a lot of good things.</p><h2>Starting off</h2><p>This happened at the end of summer 2020 when the list of issues that would prevent the Kotlin Multiplatform from being used in production was down to the last two.</p><p>The first one concerned <a href="https://github.com/JetBrains/kotlin-native/issues/2423">how to optimally build Kotlin code as a framework</a> for iOS apps. 
Even today, a year later, this topic is still not completely resolved, but the solution in the form of the so-called <a href="https://github.com/JetBrains/kotlin-native/issues/2423#issuecomment-490576896">Umbrella module (or Integration module)</a> turned out to be functional and sufficient for our needs, without limiting our work in any way.</p><p>The second one is about <a href="https://github.com/Kotlin/kotlinx.coroutines/issues/462">the memory model in Kotlin/Native</a> (in our case for iOS), and the possibility of using a background thread for coroutines. This one doesn't have a final implementation yet either, but a final solution <a href="https://blog.jetbrains.com/kotlin/2021/08/try-the-new-kotlin-native-memory-manager-development-preview/">is on the way</a>. However, a temporary solution from JetBrains in the form of a special <a href="https://github.com/Kotlin/kotlinx.coroutines/blob/native-mt/kotlin-native-sharing.md">native-mt Coroutines</a> build has proven to be sufficient in not delaying the use of Kotlin Multiplatform any further.</p><h2>First app</h2><p>On 1st September we started working on a new project. It was a mobile application for the consumer/marketing apps sector, for Android and iOS. During the first week of the project, we worked in a team of 1 Android and 1 iOS engineer on the basics of the project structure and app architecture, but especially on the alignment of practices for creating interfaces between shared and platform code and between Kotlin and Swift. Both of us were already experienced in Kotlin Multiplatform, which was important especially on the part of the iOS developer, who was thus already familiar with Kotlin and well aware of the platform differences.</p><p>At Cleverlance, we are proponents of Clean Architecture and have applied its principles to native app development for years. In terms of structure and architecture, we didn't need to invent any new approach, we just adapted our habits and proven practices in a few places to share code between the two platforms more efficiently.</p><h2>Worlds collide</h2><p>In the next phase of the project we added one more Android and iOS engineer, but they hadn't worked with Kotlin Multiplatform yet, so it was interesting to see how quickly they would get comfortable with the new mindset.</p><p>Perhaps unsurprisingly, this was not a major problem for the Android engineer. Due to the almost identical architecture and familiar language, it was enough to get used to the slightly different project structure (especially the module structure, the separation of shared and platform code, etc.) and to the new technologies, e.g. for network communication or storage, where it is not possible to directly use the well-known platform libraries, but one has to reach for the multiplatform ones (although these often just cover those known platform technologies under a single multi-platform roof).</p><p>It was much more interesting to watch the progress of the iOS developer who had never written a single line of Kotlin and had never encountered the Android world or the structure of Gradle projects. But as he was no stranger to the architecture, even though up until this point he had only known it in similar iOS projects written in Swift. 
It turned out that those principles and practices we shared years before between Android and iOS developers are that crucial foundation, and the technologies or programming languages are just the tools by which they are implemented.</p><p>So at first, this iOS engineer worked mainly on the iOS part of the app, and only occasionally modified shared code written in Kotlin. But he very quickly started writing more of the shared code on his own, until he opened his first PR at the end of September, in which a small but complete feature was implemented in the shared code and the iOS platform implementation. I, as an Android developer, then only wrote the small platform part for Android, on which I spent about 2 hours instead of the 2 days that the feature would have taken me if I had written it all natively for Android.</p><p>And let me tell you, it feels nice and exalting. Then, when that same iOS developer single-handedly fixed a bug in the Android app for us a few weeks later, I realized that this method of development has a whole new dimension that we never dreamed of before.</p><p>The following weeks and months of the project weren't entirely without problems. We occasionally ran into inconsistencies between the nascent Kotlin Multiplatform libraries, especially between Kotlin Coroutines and Ktor. The iOS framework build system stumbled a few times, but none of these issues ever gave us even a hint of stopping the development, and what's more, problems of this type gradually subsided. Around December 2020, after a new version of Kotlin Coroutines 1.4 was released, fully in line with the principles of multiplatform development, these difficulties became completely marginal and we were able to concentrate fully on the app development.</p><h2>Crunching the numbers</h2><p>As the project entered its final phase just before its release into production, it was time to look at the numbers.</p><p>When I checked the last two projects that we created as standard separate native Android and iOS apps, I found that the amount of code (lines of code) required for the Android and iOS app was pretty similar. In fact, the iOS apps were slightly smaller, but I attribute that mostly to the chatty way UI definitions are done on Android, a thing that is changing dramatically with the advent of technologies like Jetpack Compose and Swift UI. Likewise, at the project level, it can be argued that a similar amount of time is required to implement apps for both platforms.</p><p>As for our first multiplatform project, it worked out as follows in terms of lines of code:</p> <img alt="Comparison of lines of code" src="/Blog/PublishingImages/Articles/kotlin-multiplatform-first-year-1.png" data-themekey="#" /> <p>If the effort to implement one native app is 100%, then with Kotlin Multiplatform almost 60% of the code can be shared, and only a little over 40% needs to be implemented twice, for both Android and iOS, meaning the effort to implement both apps is 140% instead of 200%, saving almost a third of total development costs. Here again, it turns out that the amount of code needed for finalization on both platforms is similar. 
It should be noted that we also count a non-negligible amount of unit tests that we write only once and share.</p><p>The actual breakdown of which parts of the code we share and which we don't is a topic for a separate post, but as a rough preview, I'd give the following chart:</p> <img alt="Amount of shared code per area" src="/Blog/PublishingImages/Articles/kotlin-multiplatform-first-year-2.png" data-themekey="#" /> <p>The user interface is a very platform-specific layer, but the presentation layer does not contain such differences, and the reasons we did not share it in the first project are more technical. On subsequent multiplatform projects, we have however focused more on this part and now we are able to share the presentation layer code at about 70%, which has a positive impact on the percentage of overall code shared in the project.</p><p>When we look at the amount of time spent on this project, we get this graph:</p> <img alt="Comparison of time spent per developer" src="/Blog/PublishingImages/Articles/kotlin-multiplatform-first-year-3.png" data-themekey="#" /> <p>However, the dominance of reported Android developer time is not due to Android apps being more demanding, but simply because Android developers had more time to spend on the project. In fact, one of the iOS developers was not always able to devote 100% of his time during the project, but this did not affect the speed of development for both apps, as his time was simply compensated by the Android developers. The same worked the other way around when, for example, both Android developers were on holiday at the same time. This is not to say that an Android and iOS developer is an equivalent entity in terms of project staffing, but definitely, the multiplatform development gives you a certain amount of flexibility in human resource planning.</p><h2>Unexpected perks</h2><p>At the end of this post, I'd like to mention a few interesting facts and side effects we noticed during the development:</p><ul><li>The project is not fundamentally more demanding in its setup than a standard single platform project. Creating the basis for a new project takes a similar amount of time, in the order of days. </li><li>It's very good if Android developers learn at least the basics of Swift and working in Xcode. They can better prepare the shared code API and make small adjustments atomically across both platforms and shared code. </li><li>For iOS developers, learning Android technologies and ecosystem often involves discovering better tools than they are used to, which motivates them in their endeavors. </li><li>The second usage of shared code works as a very careful code review, and for some teams, this will allow the standard code review done during pull requests to be either reduced or removed entirely, thus increasing the development momentum. </li><li>From a project and business management perspective, we are building just one application. The same is true when communicating with the backend, where both applications act as one, there are no differences in implementation on each platform and it greatly facilitates team communication. </li><li>Short-term planned, but also unplanned developer downtime does not affect team velocity.</li><li>On one of the following projects, we were able to work in a mode of 1.5 Android developers and 3.5 iOS developers, with the development of both apps progressing similarly. 
</li></ul><h2>Conclusion</h2><p>It's been more than a year since we started working on our first application using Kotlin Multiplatform, and as the text above indicates, it hasn't remained an isolated experiment.</p><p>We are currently using this technology on five brand-new application projects. Besides that, we are discussing the opportunities with several long-term customers to deploy this technology in existing projects.</p><p>Kotlin Multiplatform is maturing like wine and we look forward to bringing the mobile platforms even closer together.</p><p><br></p><p>Pavel Švéda<br></p><p>Twitter: <a href="https://twitter.com/xsveda">@xsveda</a><br><br></p>​<br>#kotlin;#android;#iOS;#multiplatform;#kmp;#kmm
Underused Kotlin featureshttps://www.mobileit.cz/Blog/Pages/underused-kotlin-features.aspxUnderused Kotlin features<p>Kotlin is a modern and rapidly evolving language. Let's explore some nooks and crannies to see if there are any hidden gems.</p><h2>Value classes</h2><p>We often see domain models that look like this:</p><pre><code class="kotlin hljs">// DON'T DO THIS!
data class CarCharger(
    val id: Long,
    val distance: Int,
    val power: Int,
    val latitude: Double,
    val longitude: Double,
    val note: String? = null
    /* ... */
)</code></pre><p>Unfortunately, this antipattern is so widespread that it has earned its own name - <a href="https://refactoring.guru/smells/primitive-obsession">primitive obsession</a>. If you think about it, in the domain classes that we model based on real business entities, very few things are actually <em>unbounded</em> Ints, Doubles, or Strings with totally arbitrary content.</p><p>The solution is to replace these primitives with proper, well-behaved types - wrapper classes that prohibit invalid values and invalid assignments:</p><pre><code class="kotlin hljs">// DON'T DO THIS!
data class Latitude(val value: Double) {
    init {
        require(value in -90.0..90.0) {
            "Latitude must be in range [-90, 90], but was: $value"
        }
    }
}</code></pre><p>We can use this class in the <span class="pre-inline">CarCharger</span> instead of the primitive type. This is much better in terms of safety and code expressiveness, but unfortunately it also often results in a noticeable performance hit, especially if the wrapped type is primitive. </p><p>But fret not! It turns out that thanks to Kotlin’s value classes, you can have your cake and eat it too! If you slightly modify the class declaration:</p><pre><code class="kotlin hljs">@JvmInline
value class Latitude(val value: Double) {
    init {
        require(value in -90.0..90.0) {
            "Latitude must be in range [-90, 90], but was: $value"
        }
    }
}</code></pre><p>the compiler (similarly to what happens with inline functions) will replace the class with the wrapped value at each call site. Thus, at compile-time, we have all the benefits of a separate type, but no overhead at runtime. Win-win! <a href="https://jakewharton.com/inline-classes-make-great-database-ids/">Inline classes also make great database IDs</a>. </p><p>Similar to data classes, <span class="pre-inline">equals</span> and <span class="pre-inline">hashCode</span> are automatically implemented for value classes based on the wrapped value (because value classes have no identity). Value classes can also have many features of standard classes, such as additional properties (without backing fields), or member functions, but there are also some restrictions - they cannot inherit from other classes (they can however implement interfaces), and they must be final. </p><p>Be sure to read the <a href="https://kotlinlang.org/docs/inline-classes.html">full documentation</a> and see how value classes in Kotlin relate to <a href="https://github.com/Kotlin/KEEP/blob/master/notes/value-classes.md">Java’s upcoming Project Valhalla</a>.</p><p>With consistent use of value classes, your domain models (and other code, of course) can be significantly more secure and readable:</p><pre><code class="kotlin hljs">data class CarCharger(
    val id: CarChargerId,
    val distance: Kilometers,
    val power: Kilowatts,
    val coordinates: Coordinates,
    val note: String? = null
    /* ... */
)</code></pre><h2>Computed properties</h2><p>Computed properties are properties with custom getter and setter but without a backing field.
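</p><p>As a quick, generic illustration (the <span class="pre-inline">Rectangle</span> class below is just an example of the language feature, separate from the view model code that follows), a computed property can look like this:</p><pre><code class="kotlin hljs">class Rectangle(var width: Int, var height: Int) {
    // No backing field: the value is derived from other properties on every read
    val area: Int
        get() = width * height
}</code></pre><p>Reading <span class="pre-inline">area</span> always reflects the current <span class="pre-inline">width</span> and <span class="pre-inline">height</span>; nothing is stored.</p><p>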
They can be used to locally "overload" the assignment "operator".</p><p>For example, the currently popular reincarnation of the MVVM pattern involves a view model with a public, asynchronous, observable stream of UI states that are continuously rendered by the view. In Kotlin this state stream can be represented by <a href="https://kotlin.github.io/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines.flow/-flow/">Flow</a>, or more appropriately by its subtype <a href="https://kotlin.github.io/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines.flow/-state-flow">StateFlow</a>, which has some properties more suitable for this situation:</p><pre><code class="kotlin hljs">interface ViewModel<S : Any> {
    val states: StateFlow<S>
}</code></pre><p>Let's create an abstract base class that will serve as the basis for concrete implementations. In order for the view model to update states, it must internally hold a writable version of StateFlow:</p><pre><code class="kotlin hljs">abstract class AbstractViewModel<S : Any>(defaultState: S) : ViewModel<S> {
    protected val mutableStates = MutableStateFlow(defaultState)
    override val states = mutableStates.asStateFlow()
}</code></pre><p>If a concrete view model subclass wants to emit a state update based on the previous state, it must do something like this:</p><pre><code class="kotlin hljs">class GreetingViewModel : AbstractViewModel<GreetingViewModel.State>(State()) {

    data class State(
        val greeting: String? = null
        /* other state fields */
    )

    fun onNameUpdated(name: String) {
        mutableStates.value = mutableStates.value.copy(greeting = "Hello, $name")
    }
}</code></pre><p>This works, but the code isn't quite readable, and worse, the implementation details of the abstract view model (that it uses <span class="pre-inline">MutableStateFlow</span> internally) leak into the concrete view model - classes must be well encapsulated not only against the outside world but also against their subclasses! </p><p>Let's fix this by hiding the <span class="pre-inline">MutableStateFlow</span> in the base view model, and instead provide a better abstraction for subclasses:</p><pre><code class="kotlin hljs">abstract class AbstractViewModel<S : Any>(defaultState: S) : ViewModel<S> {
    private val mutableStates = MutableStateFlow(defaultState)
    override val states = mutableStates.asStateFlow()

    protected var state: S
        get() = mutableStates.value
        set(value) {
            mutableStates.value = value
        }
}</code></pre><p>The function in the subclass that needs to update the state then can look like this:</p><pre><code class="kotlin hljs">fun onNameUpdated(name: String) {
    state = state.copy(greeting = "Hello, $name")
}</code></pre><p>The subclass now has no idea how the states are implemented - from its point of view, it just writes and reads the state from a simple property, so if in the future the mechanism for emitting states needs to be changed (and this has happened several times during the development of Kotlin coroutines), individual subclasses will not be affected at all.</p><p> <em>Note: The above code is theoretically thread-unsafe, but depending on the context (view model running on the main thread) this may not be an issue.</em></p><h2>Pseudoconstructors</h2><p>In all non-trivial systems, it is important to abstract the object creation process. Although Kotlin must ultimately call a constructor <em>somewhere</em> to create a new instance, this doesn’t mean that all code should be <em>directly coupled</em> to these constructors - quite the opposite.
A robust system is independent of how its objects are created, composed, and represented.</p><p>Many <a href="https://en.wikipedia.org/wiki/Creational_pattern">classic design patterns</a> were created for this purpose, and many of them are still valid with Kotlin, but thanks to the interplay of Kotlin features, we can implement some of them with a twist that improves the discoverability and readability of the resulting code.</p><p>When exploring an unfamiliar API, I would argue that the most intuitive way and the first choice to create an object based on its type is to call its constructor.</p><p>Creational patterns, however, are meant to abstract concrete constructors away, so in traditional languages we may instead see constructs such as:</p><pre><code class="kotlin hljs">Foo.createFoo() Foo.getInstance() Foo.INSTANCE FooFactory.create() Foo.Builder().build() FooBuilder.getInstance().build()</code></pre><p>There are many possibilities and combinations, and it can be challenging to keep them all in your head. Luckily, Kotlin can help us!</p><p>The first example is a basic factory. Let's say we have a point of interest interface called simply <span class="pre-inline">Poi</span>. There are many specific types of POIs with different properties and we need a factory to instantiate them from their serialized representation. </p><p>If our factory can be stateless, we can simply create a top-level function of the same name in Kotlin:</p><pre><code class="kotlin hljs">fun Poi(serialized: String): Poi</code></pre><p>The call site then (except for an import statement maybe) looks exactly the same as if we were calling the constructor.</p><p>Moreover, we can do things with top-level functions that we can't do with constructors - for example, we can have such functions in different modules with different visibility and parameters, for different purposes, in different layers, etc., while a constructor always has to live in its own class.</p><p>This way we can also create "extension constructors" for types we don't own, for example:</p><pre><code class="kotlin hljs">fun ByteArray(base64: String): ByteArray { /* ... */ }</code></pre><p>And if our factory function has default parameters, it can also replace simpler builders.</p><p>Coroutines library authors do something similar with Jobs. When you write</p><pre><code class="kotlin hljs">val job = Job()</code></pre><p>what you actually call is this function:</p><pre><code class="kotlin hljs">fun Job(parent: Job? = null): CompletableJob = JobImpl(parent)</code></pre><p>Here, a "constructor" of a known type actually returns its public subtype, implemented by a private subclass. This gives the library authors a great deal of flexibility for the future.</p><p>The second example is more complicated - let's say we've been tasked with creating a year view for a calendar application where workers in a factory can see their shift schedule and other necessary data. 
The UI looks something like this:</p> <img alt="Calendar UI" src="/Blog/PublishingImages/Articles/underused-kotlin-features-01.png" data-themekey="#" /> <p>and the domain model of one day is as follows:</p><pre><code class="kotlin hljs">data class Day(
    val date: LocalDateTime,
    val shift: Shift,
    val dayType: DayType,
    val workType: WorkType
) {
    enum class Shift { None, Day, Night }
    enum class DayType { Normal, Weekend, NationalHoliday }
    enum class WorkType { Normal, Inventory, Maintenance, Training, Vacation }
}</code></pre><p>Since the calendar can display several years at once in this view, and there are many possible combinations in each cell, and there can be hundreds of cells on the screen at once, and the whole thing has to scroll smoothly both vertically and horizontally, it is not possible for performance reasons to implement individual cells as regular widgets with an image and a text field.</p><p>We need to optimize this UI so that the individual cells are bitmaps that we render directly to the screen. But there would still be hundreds of such bitmaps, and color bitmaps take up a surprising amount of memory surprisingly quickly.</p><p>The solution is to cache bitmaps that look the same, effectively making them <a href="https://en.wikipedia.org/wiki/Flyweight_pattern">flyweights</a>. This will save a significant amount of rendering time and memory.</p><p>In a classic design, we would create a <span class="pre-inline">BitmapFactory</span>, add some <span class="pre-inline">BitmapCache</span>, and somehow wire it all together. With Kotlin, we can do this:</p><pre><code class="kotlin hljs">class DayBitmap private constructor(val imageBytes: ByteArray) {

    /* other properties and methods */

    companion object {
        private val cache = mutableMapOf<DayCacheKey, DayBitmap>()

        private fun Day.cacheKey(): DayCacheKey = ...
        private fun Day.render(): ByteArray = ...

        operator fun invoke(day: Day): DayBitmap =
            cache.getOrPut(day.cacheKey()) { DayBitmap(day.render()) }
    }
}</code></pre><p> <span class="pre-inline">ImageBytes</span> are raw image data that can be directly rendered to the screen. <span class="pre-inline">Cache</span> is a "static" global private cache for unique rendered images of days, <span class="pre-inline">DayCacheKey</span> is a helper type serving as a key to this cache (<span class="pre-inline">Day</span> class cannot be used as a key because it contains a date that is unique for each day - so <span class="pre-inline">DayCacheKey</span> uses all the fields from Day <em>except</em> the date).</p><p>The main trick is however the <span class="pre-inline">invoke</span> operator added to the <span class="pre-inline">DayBitmap</span> companion object.</p><p>First of all, what happens inside: A cache key is created from the given day, and if we already have a <span class="pre-inline">DayBitmap</span> object saved in the cache for this key, we return it immediately. Otherwise, we create it on-demand using its private constructor (which no one else can call!), cache it, and return it immediately. This is the actual flyweight-style optimization. </p><p>But the greatest beauty of this approach is in the creation of <span class="pre-inline">DayBitmaps</span>. The long version of the call is this:</p><pre> <code class="kotlin hljs">// DON’T DO THIS!
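// The fully spelled-out form: the companion object's implicit name plus an explicit invoke() call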
DayBitmap.Companion.invoke(day)</code></pre><p>But since the companion object's implicit name doesn't have to be spelled out, and the <span class="pre-inline">invoke</span> operator just looks like parentheses in a function call, we can shorten the whole thing, and the call-site usage is then indistinguishable from a constructor call, for example</p><pre> <code class="kotlin hljs">val bitmaps = days.map { day -> DayBitmap(day) }</code></pre><p>but with the huge difference that this transformation is <em>internally</em> optimized!</p><h2>More than the sum of the parts</h2><p>The charm of Kotlin is often in how its individual features can be used together in somewhat unexpected ways. This was just a small sampling of the less frequent abilities Kotlin has to offer - we'll look at some more next time.</p>#kotlin;#development;#android
Scrum smells, pt. 7: Wishful planshttps://www.mobileit.cz/Blog/Pages/scrum-smells-7.aspxScrum smells, pt. 7: Wishful plans<p>In the preceding parts of the planning series, we were just preparing our ground. So today, let's put that into practical use and make some qualified predictions. </p><p> You're planning an initial release of a product and you know what features need to be included so that it gets the necessary acceptance of users. Or your stakeholders are asking you how long it will take to get to a certain feature. Or you have a certain budget for a project and you're trying to figure out how much of the backlog the team is capable of delivering for that amount of money. </p><h2>Measuring velocity</h2><p> There is a useful metric commonly used in the agile world called development velocity (or team velocity). It basically captures how much work a particular team can do within one sprint on a certain product in a certain environment. </p><p> In essence, it's just a simple sum of all the work that the team is able to do during a sprint. It is important to count only the work that actually got to the state where it meets the definition of done within that particular sprint. So when a team does work worth 50 story points within a sprint, that's the team's velocity in that given sprint. </p><p> Nonetheless, we must expect that there are variables influencing the “final” number. Estimates are not precise, team members might be sick or on vacation, and so on. That means that the sprint velocity will vary between the sprints. So as always, the longer we observe and gather data, the more reliable numbers we can get. Longer-term statistical predictions are usually more precise than short-term ones. </p><p> So over time, we can calculate averages. I found it useful to calculate rolling averages over several past sprints because the velocity usually evolves. It smooths out local dips or highs caused, for instance, by several team members being on vacation at the same time. Numbers from the beginning of a project will probably not relate very much to values after two years of the team maturing. The team gets more efficient, makes better estimates, and also the benchmark for estimates usually changes somewhat over the course of time. </p><p> That means that we will get an average velocity that represents the typical amount of work that a given team is able to do within one sprint. For instance, a team that finished 40, 65, 55, 60, 45, and 50 story points in subsequent sprints will have an average velocity of slightly over 50 story points per sprint over that time period. </p><p> Note: If you're a true geek, you can calculate the standard deviation and plot a chart out of it. That will give you a probability model. </p><h2>Unexpected work's ratio</h2><p> Now the last factor we need to know in order to be able to create meaningful longer-term plans is the ratio between the known and the unknown work. </p><p> I'll use an example to explain the logic that follows. So let's say we have 10 user stories at the top of our product backlog, worth 200 story points. The development team works on them and after 4 sprints it gets them done. But when retrospectively examining the work that was actually done within those past 4 sprint backlogs, we see that there was a lot of other (unpredicted) stuff done apart from those original 10 stories. If we've been consistent enough and have most of the stuff labeled with sizes, we can now see their total size.
Let's say 15 unexpected items got done in a total size of 75 story points. </p><p> That means we now have an additional metric. We can compare the amount of unexpected work to the work expected in the product backlog. In this particular example, our ratio for the past 4 sprints is 75:200, which means that for every expected story point of work, there came almost 0,4 additional story points that we had not known about 4 sprints ago. </p><p> Again, this evolves over time and you also get more precise numbers as time passes and the team matures. On one of our projects, we came to a long-term statistic of 0,75 of extra story points of unpredictable stuff for every 1 known story point, just to give you some perspective. </p><p> Having a measurable metric like this also helps when talking to the stakeholders. No one likes to hear that you keep a large buffer just in case; that's hard to grasp and managers usually will try to get rid of that in any planning. So a metric derived from experience is much easier to explain and defend. </p><h2>Making predictions</h2><p> So back to the reason why we actually started with all these statistics in the first place. In order to provide some qualified predictions, we need to do some final math. </p><p> With considerable consistency, we got to a state where we know the (rough) sizes of items in our backlog and therefore we know the amount of known work. Now we also know the typical portion of the unexpected stuff as a ratio to the known work. You also know the velocity of your team. </p><p> We will now add the percentage of unpredicted work to the known work and we get the actual amount of work that we can expect. Dividing by the team's velocity, we can get to the amount of time the team will need to develop all of it. </p><p> Let's demonstrate that with an example: There's a long list of items in the product backlog and you're interested in knowing how long it will take to develop the top 30 of them. There shouldn't be any stories labeled with the “no idea” sizes like “100” or “??”. That would skew the calculation considerably, we need to make sure such items don't exist there. So in our example, we know the 30 stories are worth 360 story points. </p><p> We've observed that our ratio of unpredictable to known stuff is 0,4:1. So 360 * 0,4 = 144. That means that even though we now see stuff for 360 points in our list, it is probable that by the time we finish the last one , we will actually make another (of course <i>roughly</i>) 144 points of work that we don't know about yet. So in total, we will have <i>roughly</i> 500 points of work to do. </p><p> Knowing our velocity (let's stick with 50 points per sprint), let's divide 500 / 50 = 10. So we can conclude that to finish the thirtieth item in our list, it will take us <i>roughly</i> 10 sprints. It might be 8 or it might be 12, depending on the deviations in our velocity and the team's maturity. </p><h2>Additional decisions we can take</h2><p> Two common types of questions that we can now answer: </p><ol><li> It's the first of January and we have 2-week long sprints with the team from the previous example. Are we able to deliver all of the 30 items by March? Definitely not. Are we able to deliver them by December? Absolutely. It seems that they will be dealt with sometime around May or June. </li><li> We know our budget will last for (e.g.) 4,5 months from now. Will we be able to deliver those 30 items? If things go optimistically well, it might be the case. 
But we should evaluate the risk and decide accordingly. </li></ol><p> How can we act upon this? We can now systematically influence the variables in order to increase our chances of fulfilling the plan. A few options out of many: </p><ul><li>We can try to raise the team's velocity by adding a developer if that's deemed a good idea.</li><li>We can try to simplify some stories in the backlog to make the amount of known work smaller.</li><li>Or we can push the plan's end date.</li></ul><p> A warning: Some choose an approach to let everything be constant and try to increase the velocity by “motivating” (understand forcing) the team to plan more story points for a sprint. I don't need to explain that this is a dead-end that, statistically speaking, leads to the most likely scenario of having something “fall over” from the sprint backlog. It burdens the team with the unnecessary overhead of having to deal with the consequences of overcommitment during the sprint and work that won't get done any faster anyway. We can rather review the development tools and processes to see if there is any chance for velocity improvement, but that should be a permanent and continuous activity for any team regardless of plans. </p><h2>Final words</h2><p> Planning projects is never an exact process. But there are certain statistics and metrics that can give us guidelines and help us see how realistic various plans are. We can then distinguish between surefire plans, totally unrealistic plans, or reasonable ones. It can tell us when we should be especially cautious and take action to increase our chances. </p><p> But any predictions will only be as precise as we are transparent and honest with ourselves when getting the statistics. Trying to obscure anything in order to pretend there are no unforeseen factors or problems will only make the process more unpredictable in the long run. </p><p> So hopefully this article will inspire you on how to tackle the future in a more comfortable way. </p>​<br>#scrum;#agile;#project-management;#release-management
So you want to create a design system, pt. 2: Colorshttps://www.mobileit.cz/Blog/Pages/design-system-2.aspxSo you want to create a design system, pt. 2: Colors<p>Color is probably the most distinctive element of any design, and also the most important expression of brand identity (at least until <a href="https://material.io/blog/announcing-material-you">Material You</a> completely reverses this relationship, but it remains to be seen how it will be adopted). So how do we approach color when designing and implementing a design system so that our solution is usable, versatile, and scalable?</p><h2>Color me curious</h2><p>Besides conveying the brand and evoking emotions, colors have several other roles in current applications, including:</p><ul><li>highlighting different application states such as errors, warnings, success, or info messages </li><li>ensuring usability, legibility, and accessibility of the application under all conditions </li><li>providing different themes from which the user (or system) can choose according to environmental conditions or personal preferences </li></ul><p>Regarding the last point, users nowadays expect support for at least light and dark themes. Often this is more than just an aesthetic choice - for example, a car navigation app that dazzles drivers at night with large areas of bright colors can be downright dangerous.</p><p>And while the app supports switching between themes, it doesn't have to stop at just these two basic ones, for example:</p><ul><li>Is accessibility extremely important to your app? Add a specially designed high-contrast or colorblind-friendly theme. </li><li>Does the app owner currently run a major promotion, have an anniversary, or celebrate some significant event? Make it known with a special temporary theme. </li><li>Do you want to differentiate products or make it clear that the customer bought a premium version of the app or service? Add a special, more luxurious-looking theme. </li></ul> <img alt="Various application themes" src="/Blog/PublishingImages/Articles/design-system-2-01.png" data-themekey="#" /> <p>Theme support is a feature that is unique in that it makes both users and the marketing department happy. But how to construct it so that both designers and developers can actually work with it and be productive?</p><h2>Layers of indirection</h2><p>Let's start with what is definitely not suitable: Hardcoding the colors in the design tool and therefore in the code.</p> <img alt="Do not hardcode colors" src="/Blog/PublishingImages/Articles/design-system-2-02.png" data-themekey="#" /> <p>There are obvious drawbacks to this method, including the inability to change colors globally in a controlled manner (no, “find & replace” isn’t really a good idea in this case), and the need to copy and edit all designs for each additional theme we want to add (for designers), or cluttering the code with repetitive conditions (for developers). It also often leads to inconsistencies and it’s extremely error-prone - did you notice the mistake in the picture above?</p><p>Unfortunately, we still occasionally encounter this approach because many design tools will happily automagically present all the colors used, even if they are hardcoded, creating the illusion that the colors are under control and well-specified. They aren’t. Don’t do this.</p><p>So how to get rid of hardcoded colors? 
The first step is to hide them behind named constants and reuse these constants in all places.</p> <img alt="Do not use color constants alone" src="/Blog/PublishingImages/Articles/design-system-2-03.png" data-themekey="#" /> <p>This is definitely better - the colors can be changed globally in one place, but the problem arises when supporting multiple themes. The naive solution is to override each constant with a different value in every theme. This works as long as the colors in the different themes change 1:1. But consider the following situation:</p> <img alt="Do not override color constants per theme" src="/Blog/PublishingImages/Articles/design-system-2-05.png" data-themekey="#" /> <p>Since it is usually not advisable to use large areas of prominent colors in a dark theme, although the toolbar and button in a light theme are the same color, the toolbar should be more subdued in a dark theme. This breaks the idea of overriding the colors 1:1 in different themes because where one theme uses a single color, another theme needs more colors.</p><p>The solution to this situation is the so-called (and only slightly ironic) <a href="https://en.wikipedia.org/wiki/Fundamental_theorem_of_software_engineering">fundamental theorem of software engineering</a>:</p><p style="text-align:center;"><em>“We can solve any problem by introducing an extra level of indirection.”</em></p><p>In this case, that means <em>another</em> layer of named color constants. I kid you not - please stay with me, it’ll be worth it.</p><h2>The solution</h2><p>We achieve our goals, i.e. the ability to easily and safely change colors globally, and support any number of themes, by following these steps:</p><ol><li><strong>Define a set of semantic colors.</strong> These are colors named and applied based on their purpose in the design. Their names must not express specific colors or shades, but <em>roles</em>. For example, Google’s Material Design defines the following semantic colors: <img alt="Semantic colors of Material Design" src="/Blog/PublishingImages/Articles/design-system-2-06.png" data-themekey="#" /> These names are a good starting point, but of course, you can define your own, based on your needs. What's important is that semantic colors don't have concrete values by themselves, they are placeholders or proxies that only resolve to specific colors when applied within a specific theme, meaning one semantic color will probably have a different actual value in each theme. </li><li><strong>Define a set of literal colors.</strong> These constants literally represent the individual colors of your chosen color palette. They are common to all themes, so there are usually more of them than semantic colors. Unlike semantic colors, they are named purely on the basis of their appearance. For example, an earlier version of Material Design defined the following shades: <img alt="Old Material Design literal colors" src="/Blog/PublishingImages/Articles/design-system-2-07.png" data-themekey="#" /> Recently it has become a common practice to distinguish colors with different lightness using a number where 1000 is 0% lightness (i.e. black) and 0 is 100% lightness (white), but of course you can devise your own system. </li><li>Follow this rule in both design <em>and</em> code (no exceptions!):<br><strong>Semantic colors must be used exclusively and everywhere. 
Literal colors (or even hard-coded constants) must <em>never</em> be used directly.</strong><br> This means that it must be possible to completely specify the use of colors in design and implementation in the form of "wireframes" like this: <img alt="Design wireframe specified with semantic colors" src="/Blog/PublishingImages/Articles/design-system-2-08.png" data-themekey="#" /> </li><li><strong>Finally, map semantic colors to concrete literals <em>per theme</em>.</strong> This step ultimately produces a specific theme from the design specification, which is in itself <em>independent</em> of a particular theme. Based on our previous example, the final result will look like this: <img alt="Themes resolved from semantic colors mapped to color literals" src="/Blog/PublishingImages/Articles/design-system-2-09.png" data-themekey="#" /> For example, the toolbar background color is <em>specified</em> as <span class="pre-inline">Primary</span>, which in the <span class="pre-inline">Light</span> theme is <em>mapped</em> to the <span class="pre-inline">Purple700</span> literal color, but in the <span class="pre-inline">Dark</span> theme it resolves to <span class="pre-inline">Purple900</span>. The most important thing is that the <span class="pre-inline">Purple900</span> or <span class="pre-inline">Purple700</span> literal colors <em>aren't</em> mentioned in the design specification, only in the theme definition. </li></ol><p>It's just a little extra work, but the benefits are enormous. We have successfully decoupled the <em>definition</em> of the colors from the <em>actual</em> colors used in various themes. </p>
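<p>To make the mapping tangible on the code side as well, here is a minimal Kotlin sketch of the idea (purely illustrative; the names follow the example above, but the exact types and values are just one possible shape, not a prescribed implementation):</p><pre><code class="kotlin hljs">// Literal colors: named purely by appearance, shared by all themes
object Palette {
    const val Purple700 = 0xFF7B1FA2
    const val Purple900 = 0xFF4A148C
    const val White = 0xFFFFFFFF
}

// Semantic colors: named by role, with no concrete values of their own
interface SemanticColors {
    val primary: Long
    val onPrimary: Long
}

// Each theme resolves the semantic roles to concrete literals
object LightTheme : SemanticColors {
    override val primary = Palette.Purple700
    override val onPrimary = Palette.White
}

object DarkTheme : SemanticColors {
    override val primary = Palette.Purple900
    override val onPrimary = Palette.White
}</code></pre><p>In a real project the values would typically be proper platform color types rather than raw hex numbers, but the structure stays the same: components refer only to the semantic names, and the selected theme decides what they resolve to.</p>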
<h2>Make it work for you</h2><p>There are questions that often arise or choices that need to be made when implementing this system. Here are some tips based on our experience:</p><ul><li><strong>Don't go overboard with the number of semantic colors.</strong> It's tempting to define a separate semantic color for every part of each UI element (e.g., <span class="pre-inline">ButtonBackground</span>, <span class="pre-inline">SwitchTrack</span>, <span class="pre-inline">ProgressIndicatorCircle</span>), which has the theoretical advantage that you can then change them independently, but it also makes it much harder to navigate the design and implementation. The ideal number of semantic colors is one where you can hold more or less all of them in your head at once. Try to find a minimum set of sufficiently high-level names that will cover 90% of situations. You can always add more names later. </li><li><strong>Naming is hard.</strong> Since semantic colors form the basis of the vocabulary used across the team and also appear everywhere in the code, it's a good idea to spend some time looking for the most appropriate names. If some of the chosen names turn out to be not that fitting, don't be afraid to refactor them. It's unpleasant, but living with inappropriate names for a long time is worse. </li><li><strong>Never mix literal and semantic names.</strong> For example, a set of semantic colors containing <span class="pre-inline">Orange</span>, <span class="pre-inline">OrangeVariant</span>, <span class="pre-inline">Secondary</span>, <span class="pre-inline">Background</span>, <span class="pre-inline">Disabled</span>, etc. isn’t going to work well, even if the main color of your brand is orange and everyone knows it. Even so, create a purely semantic name for such a color, like <span class="pre-inline">Brand</span> or <span class="pre-inline">Primary</span>. </li><li><strong>If you need multiple versions of a semantic color, never distinguish them with adjectives expressing properties of literal colors</strong> such as <span class="pre-inline">BrandLight</span>, <span class="pre-inline">BrandDark</span>, etc., because what is darker in one theme may be lighter in another and vice versa. Instead, use adjectives expressing purpose or hierarchy, such as <span class="pre-inline">BrandPrimary</span>, <span class="pre-inline">BrandAccent</span>, or even <span class="pre-inline">BrandVariant</span> (but if you have <span class="pre-inline">Variant1</span> through <span class="pre-inline">Variant8</span>, you have, of course, a problem as well). </li><li><strong>For each semantic color that can serve as a background color, define the corresponding semantic color for the content that can appear on that background.</strong> It's a good idea for these colors to contain the preposition <span class="pre-inline">on</span> or the word <span class="pre-inline">content</span> in the name, like <span class="pre-inline">OnPrimary</span> or <span class="pre-inline">SurfaceContent</span>. Avoid the word <span class="pre-inline">text</span> (e.g., <span class="pre-inline">SurfaceText</span>), as this color will often be applied to other elements such as icons or illustrations, and try to avoid the word <span class="pre-inline">foreground</span> because sometimes the use of background and foreground colors can be visually inverted: <img alt="Two components with inverted colors" src="/Blog/PublishingImages/Articles/design-system-2-10.png" data-themekey="#" /> </li><li><strong>The use of the alpha channel in literal colors is a difficult topic.</strong> Generally speaking, the colors that will be used as backgrounds should be 100% opaque to avoid unexpected combinations when several of them are layered on top of each other (unless this effect is intentional). Content colors, on the other hand, can theoretically contain an alpha channel (useful, for example, for defining global secondary or disabled content colors that work on different backgrounds), but in this case, it is necessary to verify that the given color <em>with its alpha value</em> works with any background.<br>Another question is alpha channel support in your design tool and code - is the alpha value an integral part of the color, or can we combine separate predefined colors and separate predefined alpha values? </li><li><strong>If your design tools don't directly support semantic colors or multiple themes at the same time, work around that.</strong> Tools come and go (or, in rare cases, are upgraded), but your design system and the code that implements it represent much more value and must last longer. Don’t be a slave to a particular tool. </li><li><strong>All text should be legible and meet accessibility standards</strong> (icons on the other hand don’t need to do that, but it’s generally a good idea for them to be compliant as well) - see <a href="https://www.w3.org/TR/UNDERSTANDING-WCAG20/visual-audio-contrast-contrast.html">The Web Content Accessibility Guidelines (WCAG 2.0)</a>, and use automated tools that check for accessibility violations. </li></ul><h2>Design is never finished</h2><p>...so it's important that your design system is able to evolve sustainably. This way of defining color, although certainly not the simplest, allows for exactly that. We'll look at other fundamental elements of design systems and how to handle them next time.</p>#design-system;#ui;#ux;#development;#android;#iOS
Scrum smells, pt. 6: Unknowns and estimateshttps://www.mobileit.cz/Blog/Pages/scrum-smells-6.aspxScrum smells, pt. 6: Unknowns and estimates<p>Today, I'd like to share some of the ideas and estimation approaches that helped us in past projects. The tricky part in long and short-term planning is how to predict the unknowns that will influence us in the future. As I wrote earlier, there are several things that usually come up and may not be visible in the product backlog when you are planning something.</p><h2>The unknowns</h2><p>In projects related to mobile app development, we usually encounter the following unplanned activities:</p><ul><li>Defect fixing</li><li>Backlog refinement activities</li><li>Collaboration on UI/UX design</li><li>Refactoring</li><li>New user stories</li></ul><p>Defect fixing is quite obvious and we have spoken about it already. You can't usually foresee what bugs will appear.</p><p>Backlog refinement activities include understanding the backlog items, analyzing the underlying technical and usability aspects, and making the backlog items meet the definition of ready. </p><p>The UI/UX design process is not just a simple decision about colors and shapes. The controls used and the screen layouts and flows usually have a large impact on how the application needs to be built, and we witness over and over again that a seemingly small aspect of the design idea can have a vast impact on the complexity of the actual implementation. So in order to keep the cost/benefit ratio reasonable, we have learned that it is necessary that the developers collaborate closely with the designers in order to prevent any unpleasant surprises. You can read more about this topic in <a href="/Blog/Pages/design-system-1.aspx">this blog series</a>. </p><p>Refactoring existing code and infrastructure setup is a must if we want to develop a product that will be sustainable for longer than a few weeks. It can also have the potential of making the dev team more effective.</p><p>New user stories are interesting. You invest a lot of time into the backlog refinement and it just looks perfect, everything is thought through and sorted. Fast forward two months into the future and you discover (with new knowledge from that past two months) that you need to simplify some stories while others have become obsolete, but more importantly, you realize that you need to introduce completely new features that are vital for app's meaningfulness. You couldn’t see this before you had the actual chance to play around with the features from the past couple of months and gather feedback from users, analyze the usage stats or see the economical results.</p><h2>Estimates</h2><p>Having most of the stuff in the backlog estimated for its complexity (size) is vital for any planning. But as we have all probably learned the hard way, estimates are almost always anything but precise. We, therefore, did not find any value in trying to produce exact estimate values (like 13.5 man-days of work), but we rather use the approach of relative estimation while using pseudo-Fibonacci numbers: 0, 1, 2, 3, 5, 8, 13, 20, 40, 100.</p><p>It is important to understand that these are dimensionless numbers. They are not hours, man-days, or anything similar. It is an abstract number used solely to set a benchmark and compare other items against each other.</p><p>So what does that mean? At the beginning of the project we pick an item in the backlog that seems to be of a common size and appears neither small nor big, a number between the 5-8 range. 
That will be our benchmark and all other stories are then compared to it. How much more difficult (or easier) is this or that item compared to our benchmark?</p><p>Over time, we found out that the initial benchmarks and estimates were often completely off. But that is OK, it's a learning process. It is important to review the estimates after the actual development and learn from them. Was that user story really an 8? Were these two items as similar as we initially thought? If not, how would we estimate them now and why? That also means that from time to time it's necessary to revisit all the already estimated items in the product backlog. </p><p>It usually is not necessary to go into deep detail with stuff that is several sprints ahead. As the team gains experience with the product domain, the developers' gut feelings get more relevant and precise. That means useful estimates can be made quite swiftly once the team grasps the particular feature's idea. Sure, some stuff in the backlog will be somewhat underestimated, some overestimated. But for long-term planning and predictions it usually suffices because, statistically, the average gets quite reliable.</p><p>The outcome of all this is a backlog where every item is labelled with its size. It becomes clear which items are meaningfully defined (the development team has an idea about the technical solution, meaning that the size is reasonable) and which items are completely vague or lack key business or technical information. Those are usually the items with estimate labels of "40", "100", or even "??".</p><p>If such inestimable stories are buried in the lower parts of the backlog and the product owner does not even plan to bring them to the market for a long time from now, that's fine. But do any of these items have a high value for the product, and do we want to bring them to the market soon? If that's the case, it sends a clear message to the product owner: back to the drawing board, let's completely re-think and simplify such user stories and expect that some team capacity may be needed for technical research. </p><p>So after all this hassle, the upper parts of the backlog will have numbers that you can do math with.</p><h2>Quantifying unexpected work</h2><p>The last piece of the puzzle needed for predictions and plans is to quantify how much of the unexpected stuff usually happens. Now, this might seem like a catch-22 situation - how can we predict the amount of something that we can't predict by its definition? At the beginning of the development, this is indeed impossible to solve. But as always, agile development is empirically oriented - over time we can find ways to get an idea about what is ahead based on past experience. As always, I am not preaching any universal truth. I am just sharing an experience that my colleagues and I have gathered over time and find useful. So how do we do it? </p><p>It's vital to visualize the team's work in the product and sprint backlogs as transparently as possible. So it's also good to include in the backlog all the stuff that isn't user stories but that the team knowingly needs to put effort into (like known regressions, research, refactoring, etc.). If it's possible to estimate the size upfront, let's do it. If it's not, either cap the maximum capacity to be invested or re-visit and size the item after it's been done. This is necessary in order to gather statistics. </p><p>Just to be clear - let's not mistake such unexpected work for scope creep.
I assume that we don't suffer from excessive scope creep and that the unexpected work is genuinely valuable and necessary work that simply wasn't discovered upfront.</p><p>So now we have a reasonably transparent backlog, containing the originally planned stories as well as the items that came in along the way. We have most of it labelled with sizes. In the next part of this series, we'll try to build some statistics and draw conclusions on top of all this. </p>#scrum;#agile;#project-management;#release-management