Building a chatbot, pt. 2: Conversations<p>Everybody has met or talked to a person who had a script to follow. Insurance salesmen, dealers, receptionists… All these people were given a script to follow, a checklist to go through, or just a simple answer to give when asked. Not unlike a chatbot.</p><p>In the previous article, I described a chatbot that we built for making reservations. Not much was said about its inner workings or about its potential. There are indeed multiple topics to cover. In this article, I would like to talk a bit about conversations and conversation flow.</p><h2>AI?</h2><p>Our application is designed to fulfill users’ requests. A user usually has a general idea about what he or she wants to achieve. But even the most knowledgeable user can’t possibly know every item and variation that the application needs to achieve a goal. Even if the user is not sure what the right way to ask is, the chatbot should be able to guide the user, in the end find exactly what the user desires, and then fulfill the request. For each request, the bot should have a list of instructions to follow - information that the user has to provide so that the application is able to execute the user's command.</p><h2>Illusion of choice</h2><p>To provide a good user experience, we decided to leave the first step to the user. The user’s first sentence gives us a direction to follow and an idea of the user’s intent, but this is actually the last time the user is in command of the conversation flow. From this point on, every step of the conversation is directed by the application. The bot knows what checklist it has to follow to fulfill the request. Even if the user is not specific or not sure what to say, the bot should be able to get this information from the user with properly constructed questions.</p><p>Every conversation in our application is directed by a set of checklists. 
These checklists can be simple, even just one item, or they can spawn other checklists and even create cyclic conversations. What exactly we are able to create with these types of checklists will be described in a separate blog post. For now, let’s focus on the following example.</p><h2>From a conversation to a checklist</h2><p>Let’s meet our example user: Joe. Joe works an office job and has multiple meetings per day. For those, he needs to schedule reservations in a call booth or a meeting room with his colleagues. At the start of the week, he knows about two meetings that will take place. One today (for the sake of this example, let’s assume it is Monday) and one tomorrow. The interaction with the bot will then look like this:</p><p style="margin-left:40px;margin-top:20px;margin-bottom:30px;"> <strong style="color:#381457;">User</strong> : <em>Book me a room for today from 11:30 to 12:30</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>Alright! Do you want to book a specific meeting room or should I just book you one that is free?</em><br><strong style="color:#ffed00;">Options</strong> : <em>Any meeting room, Specific meeting room</em><br> </p><p>For today's meeting, Joe has a remote call with colleagues from out of town, so any available room will do. He will select “any meeting room” from the options given. If there is an available room in the system, Joe will get one. Like this:</p><p style="margin-left:40px;margin-top:20px;margin-bottom:30px;"> <strong style="color:#381457;">User</strong> : <em>Book me a room for today from 11:30 to 12:30</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>Alright! Do you want to book a specific meeting room or should I just book you one that is free?</em><br><strong style="color:#381457;">User</strong> : <em>Any meeting room</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>I have booked room R.332 for today from 11:30 to 12:30 for you</em><br> </p><p>And that's it. 
Joe has booked a room for a given time. From a user's point of view, this is quite easy. Using our mobile applications, Joe can even use voice input! This is all great, but how does it work?</p><p>To make a reservation, the chatbot needs to pass a checklist. For the example above, where the user is making a room reservation, the checklist looks like this:</p><ul><li>Intent - make a booking</li><li>What (seat, room, call booth)</li><li>Where (any available room, or does the user want a specific room?)</li><li>When (date and time, or duration of the meeting)</li></ul><p>Joe knew most of these and intuitively created a sentence that matched his wishes best. What was not explicit in the first sentence, the bot asked for in the follow-up question. When asking follow-up questions, the chatbot usually gives the user multiple options to select from. These suggestions give the user an idea of how to respond and also what kind of information the bot is expecting. In the same answer, the user can also provide additional information.</p><p>Now let's say that Joe needs a specific room for a meeting that takes place on Wednesday. He needs a big interactive board, a projector, and a room for 10 people where he will introduce new products to customers. This room is quite busy, so he needs to find a free spot for his meeting. The conversation could look like this:</p><p style="margin-left:40px;margin-top:20px;margin-bottom:30px;"> <strong style="color:#381457;">User</strong> : <em>When is room R.121 available?</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>For which date(s) do you want to see the room's availability?</em><br><strong style="color:#381457;">User</strong> : <em>Wednesday</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>How long will you need the room for?</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>I will find you times where there is enough space for your meeting. 
For example: 1 hour 30 minutes or just 15 minutes</em><br><strong style="color:#381457;">User</strong> : <em>2 hours</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>Alright.</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>The room R.121 is available on Wednesday from 9:00 to 13:59.</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>Do you want to book it?</em><br><strong style="color:#381457;">User</strong> : <em>Yes</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>Ok, can you specify the time for me please?</em><br><strong style="color:#381457;">User</strong> : <em>10:00</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>I have booked room R.121 on Wednesday from 10:00 to 12:00 for you.</em><br> </p><p>That's a slightly different conversation, right? But in the end, Joe will have his reservation ready. So what does the checklist on the bot’s side look like for this example?</p><ul><li>Intent - check availability</li><li>What - specific place to check</li><li>When - time-frame to check</li><li>How long - meeting duration</li><li>To book it or not</li></ul><p>Up to this point, Joe has only checked for available time slots. If he wants to make a booking now, he can. For booking, the chatbot has a different checklist. That one was previously listed, but here it is again for completeness:</p><ul><li>Intent - make a booking</li><li>What (seat, room, call booth)</li><li>Where (any available room, or does the user want a specific room?)</li><li>When (date and time, or duration of the meeting)</li></ul><p>Most of these items are already checked off for the booking. Joe has already told the bot to look for a specific room. He’s even given it a date, so the only thing missing is the time. Joe gave the bot a meeting duration when he was checking for room availability. Now if Joe selects only the start of the meeting, the application already knows the duration, so it will make the booking last 2 hours.</p><p>But wait! 
That’s not all. Up to this point, all of the user’s requests went smoothly. Let's say that Joe has had a long vacation; he is back in the office again but has lost track of the dates:</p><p style="margin-left:40px;margin-top:20px;margin-bottom:30px;"> <strong style="color:#381457;">User</strong> : <em>Book</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>Do you want me to book you a seat or a meeting room?</em><br><strong style="color:#381457;">User</strong> : <em>Meeting room</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>Alright, do you want to book a specific meeting room or should I just book you one that is free?</em><br><strong style="color:#381457;">User</strong> : <em>What is available?</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>For which time frame(s) do you want to see available rooms?</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>You can tell me any time you want or you can use one of the shortcuts below.</em><br><strong style="color:#381457;">User</strong> : <em>20.3.2008</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>Unfortunately you cannot book or check seat availability in the past.</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>Please tell me a time or a time range you want to see available rooms for.</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>For example: Today, Tomorrow, Thursday, 24.10. or something like next week or from 10:30 to 11:15</em><br><strong style="color:#381457;">User</strong> : <em>Ok, what about tomorrow?</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>Here is a list of available meeting rooms in your preferred area for tomorrow:</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>1. R.121</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>2. R.122</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>3. R.123</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>4. R.124</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>5. R.125</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>These are the first ones I found. But there are some more.</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>Do you want one of these? You can also tell me which specific one you want.</em><br><strong style="color:#381457;">User</strong> : <em>Yes</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>Alright</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>What room would that be?</em><br><strong style="color:#381457;">User</strong> : <em>R.124</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>Can you specify the time for me please?</em><br><strong style="color:#381457;">User</strong> : <em>From 10:00 just for a quick call</em><br><strong style="color:#9d9d9d;">Bot</strong> : <em>I have booked room R.124 for tomorrow from 10:00 to 10:30 for you.</em><br> </p><p>Even in the case of a longer conversation with an invalid date, the bot was able to provide Joe with a reservation.</p><p>Again, I will show you the checklist that the chatbot had to go through:</p><ul><li>Intent - make a booking</li><li>What - meeting room</li><li>Specification:<br> <ul><li>Check available meeting rooms</li><li>When - with validation</li><li>Want to book one?</li></ul></li><li>Specific room selection</li><li>When - with specific time selection</li></ul><h2>Think in checklists</h2><p>The main point of these examples is to introduce the checklist philosophy. Every conversation that we have in the system is designed as a checklist with multiple options. It's not a simple checklist where you just confirm an item and move on to the next one. Some of the items can be optional; however, at least one has to be mandatory. Each item on the checklist can have its own checklists or policies. There can be policies that end the conversation on the spot or point to another checklist where the conversation continues. 
Some of the policies and transitions can be seen in the previous examples.</p><p>For example, for time specification we want only the current day or a day in the future, but not in the very distant future. For room bookings, only single-day reservations are allowed. Furthermore, when specifying a room, we have to select a place that exists in the system database.</p><h2>Conclusion</h2><p>Conversations with our chatbot are based on checklists with predefined policies to follow. The user is given an illusion of choice with a first request that can be phrased any way the user desires; afterward, the control of the conversation flow is entirely coded into the chatbot. From a developer’s point of view, that gives us an easy way to test these conversations and develop each branch of the conversation independently.</p><p>From the user's point of view, the bot asks for the missing pieces of information. Thanks to the firm grip on the conversation, users should not end up in a situation where they don't know what to answer or how to reach a certain goal.</p><p>So is it artificial intelligence? No. It's a state machine that follows pre-set lines of conversation. However, the conversation can be very complex, and when it is complex enough, users might come to the conclusion that this is what AI looks like.</p><br>#chatbot;#ai;#neural-network
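<p>To make the checklist philosophy a bit more concrete, here is a minimal sketch of how a checklist-driven conversation could be modeled. This is an illustrative toy example, not our production code; all names (<span class="pre-inline">ChecklistConversation</span>, <span class="pre-inline">provide</span>, <span class="pre-inline">next_question</span>) are made up for this sketch:</p>

```ruby
# Toy model of a checklist-driven conversation (illustrative only).
class ChecklistConversation
  # An item the bot must tick off before it can fulfill the request.
  Item = Struct.new(:name, :question, :mandatory)

  def initialize(items)
    @items = items
    @answers = {}
  end

  # Record a piece of information extracted from the user's message.
  def provide(name, value)
    @answers[name] = value
  end

  # The follow-up question for the first unanswered mandatory item,
  # or nil once the checklist is satisfied.
  def next_question
    missing = @items.find { |i| i.mandatory && !@answers.key?(i.name) }
    missing && missing.question
  end

  def complete?
    next_question.nil?
  end
end

# The booking checklist from the article: intent, what, where, when.
booking = ChecklistConversation.new([
  ChecklistConversation::Item.new(:intent, "What do you want to do?", true),
  ChecklistConversation::Item.new(:what, "A seat, a room, or a call booth?", true),
  ChecklistConversation::Item.new(:where, "Any meeting room, or a specific one?", true),
  ChecklistConversation::Item.new(:when, "For which date and time?", true)
])

# "Book me a room for today from 11:30 to 12:30" fills three items at once...
booking.provide(:intent, :make_booking)
booking.provide(:what, :room)
booking.provide(:when, "today 11:30-12:30")

# ...so the bot only has to ask about the one that is still missing.
puts booking.next_question  # => "Any meeting room, or a specific one?"
```

<p>Each answer fills a slot, and the bot keeps asking for the first unanswered mandatory item until the checklist passes, at which point the intent can be executed. Sub-checklists and policies (date validation, single-day bookings, etc.) would hook into the slot-filling step.</p>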
Scrum smells, pt. 5: Planning fallacies<p>As the scrum godfathers said, scrum is a lightweight framework used to deal with complex problems in a changing environment. Whether you use it for continuous product development or in a project-oriented mode, stakeholders always demand timelines, cost predictions, roadmaps, and other prophecies of this sort. That is perfectly understandable and justifiable - in the end, the project or product development is there to bring value to them. And financial profit is certainly one of those values.</p><p>Many of us know how painful the inevitable questions about delivery forecasts can be. When will this feature be released? How long will it take you to develop this bunch of items? Will this be ready by Christmas? We would, of course, like to answer them in the most honest way: "I don't have a clue". But that rarely helps, because even though it is perfectly true, it is not very useful and does not help the management very much. For them, approving project development based on such information would be like writing a blank check.</p><p>I've seen several ways in which people approach such situations. Some just give blind promises and hope for the best, while feeling a bit nervous in the back of their minds. Others go into all the nitty-gritty details of all the required backlog items, trying to analyze them perfectly and then give a very definitive and exact answer, while feeling quite optimistic and confident that they have taken everything into account. Some people also add a bottom line: "...if things go as planned".</p><h2>If things go as planned</h2><p>Well, our experience shows that all these approaches usually generate more problems than benefits, because the impact of that innocent appendix "...if things go as planned" proves to be massive and makes the original plan fall far from reality. It actually stems from the very definition of the words project and process. 
A process is a set of actions taken to achieve an expected result, and this set is meant to be repeated on demand. On the other hand, a project is a temporary undertaking that aims to deliver a unique outcome or product. While a process is meant to be triggered as a routine and its variables are well known and defined, a project is always unique.</p><p>So, a project is something that people do for the first time, to achieve something new. And when we do something for the first time, there are two kinds of unknowns involved: the known unknowns (knowledge we consciously know we are lacking) and the unknown unknowns (stuff we don't know and we don't even realize it). Based on the nature and environment of the project and our experience in this field, we can identify some of the unknowns and risks to a certain degree. But I don't believe there is a project where all the potential pitfalls can be identified in advance - only by actually implementing the project will you know for sure. If we wanted to identify all the risks and analyze future problems and their potential impact, we would need to try it out in real life. Only then could we be certain about the outcomes, confirming or refuting our initial expectations.</p><p>What I am trying to express is that uncertainty is part of every project. That means that when planning a project, we need to take it into account. So when setting up a project and trying to get a grasp of the costs, timeline, and scope, we must understand we're always dealing with estimates and planning errors. So instead of trying to pretend the error doesn't exist and requiring (or providing) a seemingly "exact and final" project number, I think a more constructive discussion would be about the actual scale of the error. </p><h2>Cognitive biases</h2><p>While the above is generally acceptable to rational and experienced people, why do we tend to ignore or underestimate the risks at the beginning? 
I believe it's got something to do with how our minds work.</p><p>There is a phenomenon called the <strong>planning fallacy</strong>, first described by psychologists in the 1970s. In essence, they found that people tend to (vastly) underestimate the time, costs, and risks of actions while (vastly) overestimating the benefits. The researchers measured how likely various subjects were to finish various tasks within the timeframes the subjects themselves had estimated. Interestingly, over half of the subjects often needed more time to finish the task than their worst-case estimate.</p><p>The actual thinking processes are even more interesting. Even with past experience of solving a similar problem and a good recollection of it, people tend to think they will be able to solve it quicker this time. And people genuinely admit that their past predictions (which went wrong in the end) were too optimistic, yet believe that this time they are making a realistic estimate.</p><p>There is also something called an <strong>optimism bias</strong>. Optimism bias makes people believe that they are less likely to experience problems (compared to others). So even though we can have a broad range of experience with something, we tend to think things will evolve in an optimistic way. We tend to put less weight on the problems we may have already encountered in similar situations, believing this was "back then" and now we are of course more clever and won't run into any problems this time. People tend to think stuff is going to go well just because they wish for it.</p><p>Another interesting factor is our tendency to take credit for whatever went well in the past, overestimating our influence, while naturally shifting the reasons for negative events to the outside world - effectively blaming others for what went wrong or blaming bad luck. This might not be expressed out loud, but it influences our views regardless. 
This stems from a phenomenon called <strong>egocentric bias</strong>.</p><h2>Combining psychology with projects</h2><p>So it becomes quite obvious that if we combine the lack of relevant experience (a project is always a unique undertaking up to a certain degree, remember?) with the natural tendency to wish for the best, we get a pretty explosive mixture.</p><p>We need to understand that not just the project team itself, but also the stakeholders fall victim to the above-mentioned factors. They also wish for a project to go as they planned, and managers rarely enjoy sorting out problems stemming from a project in trouble that doesn't evolve as expected.</p><p>Yes, I have met managers who naturally expect considerable risks and don't take positive outcomes for granted. Managers who understand the uncertainties and will constructively attempt to help a project which slowly deviates from the initial expectations. When we have a manager who addresses risks and issues factually and rationally, it is bliss.</p><p>But what if that's not the case? Many managers try to transfer the responsibility for possible problems to the project teams or project managers while insisting that the project manager must ensure "the project goes as estimated". Usually, their way of supporting a project is by stressing how important it is to deliver stuff on time and that the team must ensure it no matter what. And that all the features need to be included, of course.</p><p>Now when you combine the fuse in the form of pressure from stakeholders with this explosive mix, that's when the fireworks start.</p><p>So how can we increase the chance of creating a sane plan and keep the stakeholders realistically informed, while maintaining a reasonably peaceful atmosphere in the development team? I think we can help it by gathering certain statistics and by knowing we are constantly under the effect of cognitive biases. 
We'll look at this in the next part of this series.</p>#scrum;#agile;#project-management;#release-management
How to run Fastlane and GitLab with AppStore Connect API<p>In this brief tutorial, I am going to walk you through a relatively painless process of using <span class="pre-inline">.p8</span> certificates from the App Store Connect API to authenticate during Fastlane builds. As we also use GitLab as our CI, I will also show you how to pass the key from GitLab to the Fastlane script.</p><h2>Step 1: Allow keys to be created</h2><p>For this step, you need to be an account holder. If you are not an account holder, you need to ask them to do it for you.</p><p>Note: Keys are generated per organization. For example, if you are a developer on project A (First organization), project B (First organization), and project C (Second organization), you will have to ask the account holders of both the First organization and the Second organization to allow access.</p><p>The first step is to go to Users and Access (the Keys tab), where you will see something like <strong>Image 1</strong>.</p> <img alt="Image 1 - Starting point for the account holder" src="/Blog/PublishingImages/Articles/fastlane-appstore-01.png" data-themekey="#" /> <p></p><center><strong>Image 1</strong> - Starting point for the account holder</center><p></p><p>Next, you need to click on the “Request Access” button. This will allow anyone with an Admin role to create new keys. You will see some kind of pact with the devil, or disclaimer if you prefer. You need to check the box and click submit. See <strong>Image 2</strong> for a visual reference. 
If everything goes well, you are rewarded with the screen captured in <strong>Image 3</strong>.</p> <img alt="Image 2 - Disclaimer" src="/Blog/PublishingImages/Articles/fastlane-appstore-02.png" data-themekey="#" /> <p></p><center><strong>Image 2</strong> - Disclaimer</center><p></p> <img alt="Image 3 - The result screen" src="/Blog/PublishingImages/Articles/fastlane-appstore-03.png" data-themekey="#" /> <p></p><center><strong>Image 3</strong> - The result screen</center><p></p><p>For the account holder, the journey ends here. Now it’s your turn.</p><h2>Step 2: Obtaining the key</h2><p>For this part, you will have to have Admin rights, as referred to in the <a href="">AppStore Connect documentation</a>.</p><p>You will start where <strong>Image 3</strong> shows: Users and Access, the Keys tab. The process is still pretty straightforward. Just tap the magic button and you will see the form for key creation. Fill in the details (<strong>Image 4</strong>); the name is not important, but the access has to be at least <strong>App Manager</strong>. Otherwise, your Fastlane pilot will fail when updating info, as stated in the <a href="">Fastlane documentation</a>. If you do not need to modify info, <strong>Developer</strong> access will suffice.</p> <img alt="Image 4 - Access key form" src="/Blog/PublishingImages/Articles/fastlane-appstore-04.png" data-themekey="#" /> <p></p><center><strong>Image 4</strong> - Access key form</center><p></p><p>Next, you will see that your key has been created (<strong>Image 5</strong>). There are two values to take into account: the Issuer ID (blue rectangle) and the Key ID (red rectangle). You don’t need to write them down, as they are always accessible to you. What is not always accessible, however, is the “Download API Key” button. 
You will be prompted about this when trying to download the key.</p> <img alt="Image 5 - Newly created key" src="/Blog/PublishingImages/Articles/fastlane-appstore-05.png" data-themekey="#" /> <p></p><center><strong>Image 5</strong> - Newly created key</center><p></p><h2>Step 3: Using the key</h2><p>Now to the main part. Download the key. You can download it only once, but that is not really a problem, just a mild inconvenience, as you can always generate a new key.</p><p>There are two ways to use keys within the Fastlane script. One is by using the base64-encoded key directly, and the other is by using the JSON file described <a href="">in the documentation</a>. </p><p>I use the base64-encoded key. Open a terminal window and go to the folder containing your key. Run the command <span class="pre-inline">cat {KEYNAME} | base64</span> and copy the result for later use.</p> <img alt="Image 6 - Writing out base64 key" src="/Blog/PublishingImages/Articles/fastlane-appstore-06.png" data-themekey="#" /> <p></p><center><strong>Image 6</strong> - Writing out base64 key</center><p></p><p>Now that you have your key, you have to adjust your Fastlane and GitLab scripts. If you are not using GitLab, skip that part.</p><p>For the Fastlane script, you need to start using options in your lane. I have my base64 key as an argument with the name <span class="pre-inline">api_key</span> supplied to the script. Now you may notice the colorful rectangles. The red one should be the issuer id from <strong>Image 5</strong>, and of course, the blue one is the key id from the very same image. For reference, see my lane in the <span class="pre-inline">fastfile</span> in <strong>Image 7</strong>.</p> <img alt="Image 7 - Fastfile sample" src="/Blog/PublishingImages/Articles/fastlane-appstore-07.png" data-themekey="#" /> <p></p><center><strong>Image 7</strong> - Fastfile sample</center><p></p><p>If you don’t use any other tool which manages CI, you are pretty much all set. 
You may call your script as you did before, just add the parameter followed by “:” and the value. For example: <span class="pre-inline">fastlane {MY LANE} api_key:”{MY KEY}”</span>. If you do not want to input it manually every time, you might consider some kind of key storage, for instance the keychain. Then you may skip passing the argument to the lane, but that is a topic for another day.</p><h2>GitLab adjustments</h2><p>For this to work, you first need to go to your GitLab interface and set a new variable. <strong>Image 8</strong> will help you find it. Add a name that will later be used in the script (for me it is <span class="pre-inline">APP_STORE_CONNECT_API_KEY</span>) and the base64 version of your key as the value. Don’t forget to tick the box to mask it from logs (<strong>Image 9</strong>).</p> <img alt="Image 8 - GitLab settings" src="/Blog/PublishingImages/Articles/fastlane-appstore-08.png" data-themekey="#" /> <p></p><center><strong>Image 8</strong> - GitLab settings</center><p></p> <img alt="Image 9 - Adding a new variable" src="/Blog/PublishingImages/Articles/fastlane-appstore-09.png" data-themekey="#" /> <p></p><center><strong>Image 9</strong> - Adding a new variable</center><p></p><p>Now you have to adjust your GitLab script. Open your <span class="pre-inline">.gitlab-ci.yml</span> file and, for every lane where you want to use the key, add it as a named parameter as in <strong>Image 10</strong>.</p> <img alt="Image 10 - Script adjustments" src="/Blog/PublishingImages/Articles/fastlane-appstore-10.png" data-themekey="#" /> <p></p><center><strong>Image 10</strong> - Script adjustments</center><p></p><p>Now you are all set to continue doing automated submissions with GitLab and Fastlane.</p>#appstore;#fastlane;#gitlab;#iOS
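<p>For readers who prefer text to screenshots, a lane along the lines of the Image 7 sample might look roughly like this sketch. The lane name <span class="pre-inline">:beta</span>, the scheme name, and the placeholder IDs are my assumptions, not the exact values from the image; <span class="pre-inline">app_store_connect_api_key</span>, <span class="pre-inline">build_app</span>, and <span class="pre-inline">upload_to_testflight</span> are standard Fastlane actions:</p>

```ruby
# Rough sketch of a Fastfile lane using the base64 key passed in via `options`.
# Placeholder values (lane name, scheme, IDs) are illustrative only.
platform :ios do
  lane :beta do |options|
    app_store_connect_api_key(
      key_id: "YOUR_KEY_ID",           # Key ID from Image 5
      issuer_id: "YOUR_ISSUER_ID",     # Issuer ID from Image 5
      key_content: options[:api_key],  # the base64-encoded .p8 key
      is_key_content_base64: true
    )
    build_app(scheme: "MyApp")
    # pilot/upload_to_testflight picks the key up from the lane context
    upload_to_testflight
  end
end
```

<p>On the GitLab side, the corresponding job script line would then just pass the masked variable through, e.g. <span class="pre-inline">fastlane beta api_key:"$APP_STORE_CONNECT_API_KEY"</span>.</p>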
Scrum smells, pt. 4: Dreadful planning<p>In a few of our past projects, I encountered a situation that might sound familiar to you: Developers are getting towards the end of a sprint. The product owner seems to have sorted the product backlog a bit for the sprint planning meeting - he changed the backlog order somewhat and pulled some items towards the top because he currently believes they should be added to the product rather soon. He added some new things as well, because the stakeholders demand them. In the meantime, the team works on the development of the sprint backlog. The sprint ends, the team does the end-of-sprint ceremonies, and off to planning we go.</p><p>At the planning meeting, the team sits down to what seems to be a groomed backlog. They go through the top backlog items with the product owner, who explains what he has prioritized. The team members try to grasp the idea and technical implications of the backlog items and try their best to plan them for development. But they find out that one particular story is very complex and can't fit within a sprint, so they negotiate with the product owner about how to meaningfully break it down into several smaller pieces. Another item has a technical dependency on something that has not been done yet. A third item has a functional dependency - meaning it won't work meaningfully unless a different story gets developed. A fourth item requires a technology that the developers haven’t had enough experience with; therefore, they are unable to even remotely tell how complex it is. And so it goes - the team members dig through the “prepared” backlog, try to wrap their heads around it, and finally find out that they can't work on every other story for some reason.</p><p>One possible outcome is that such items are skipped, and only the items that the team feels comfortable with are planned into the sprint backlog. 
Another outcome is that they will want to please the product owner and “try” to do the stuff somehow. In any case, the planning meeting will take hours and will be a very painful experience.</p><p>In both cases, the reason is poor planning. If there ever was a planned approach by the product owner towards the backlog prior to the planning meeting, it was naive, and now it either gets changed vastly, or it gets worked on with many unknowns - making the outcome of the sprint a gamble.</p><h2>What went wrong?</h2><p>One might think all the planning occurs exclusively at the planning meeting. Why else would it be called a planning meeting? Well, that is only half true. The planning meeting serves for the team to agree on a realistic sprint goal, discuss with the product owner what can or cannot be achieved within the upcoming sprint, and create a plan of attack. Team members pull the items from the top of the backlog into the sprint backlog in a way that reaches that goal in the best possible way. It is a ceremony that actually starts the sprint, so the team sets off developing the stuff right away.</p><p>In order to create a realistic sprint plan that delivers a potentially releasable product increment with a reasonable amount of certainty, there has to be enough knowledge and/or experience with what you are planning. The opposite approach is called gambling.</p><h2>Definition of ready</h2><p>It is clear that the backlog items need to fulfill some criteria before the planning meeting occurs. These criteria are commonly referred to as a “definition of ready” (DoR). Basically, it is a set of requirements, set by the development team, which each backlog item needs to meet if the product owner expects it to be developed in upcoming sprints. 
In other words, the goal of the DoR is to make sure a backlog item is immediately actionable, the developers can start developing it, and it can realistically be finished within a sprint.</p><p>We have had a good experience with creating a DoR with our teams. However, we also found that this looks much easier at first glance than it is in practice. But I believe it is definitely worth the effort, as it makes predictions and the overall workflow so much smoother.</p><p>The DoR is a simple set of rules which must be met before anyone from the scrum team can say “we put this one into the sprint backlog”. The rules may depend on the particular product or project, and they can be both technical and business-sided in nature, but I believe there are several universal aspects to them as well. Here are some of our typical criteria for determining if a backlog item satisfies the DoR:</p><ul><li>The item has no technical or business dependencies.</li><li>Everyone from the team understands the item's meaning and purpose completely.</li><li>We have some idea about its complexity.</li><li>It has a very good cost/benefit ratio.</li><li>It is doable within one sprint.</li></ul><p>There are usually more factors (such as a well-written story definition, etc.), but I picked the ones that made us sweat the most to get right.</p><h2>Putting backlog refinement into practice</h2><p>Backlog refinement is a continuous and never-ending activity whose sole goal, in my opinion, is getting the DoR fulfilled. As usual, the goal is simple to explain but not easy to achieve in practice. Immature teams usually see refinement activities as a waste of time and a distraction from the “real work”. Nonetheless, our experience has proven many times that if we don't invest sufficient time into the refinement upfront, it will cost us dearly in time not much later in the development.</p><p>So, during a sprint, preparing the ground for future sprints is a must. 
The development team must take this into account when planning the sprint backlog. Refinement activities will usually occupy a non-negligible portion of the team's capacity.</p><p>The product owner and the team should aim to have at least a sprint or two's worth of stuff in the backlog that meets the DoR. That means there needs to be a continuous discussion about the top of the backlog. The rest of the scrum team should challenge the product owner to make sure nothing gets left there just “because”. Why is it there? What is its purpose and value in the long term?</p><p>Once everyone sees the value, it is necessary to evaluate the cost/benefit ratio. The devs need to think about roughly how complex it will be to develop such a user story. In order to do that, they will need to work out a general approach for the actual technical implementation and identify its prerequisites. If they are able to figure out what the size roughly is, even better.</p><p>However, from time to time, the devs won't be able to estimate the complexity, because the nature of the problem will be new to them. In such cases, our devs usually assigned someone to research the topic and roughly map the uncharted area. The knowledge gained was then used to size the item (and also later on, in the actual development). This research work is also tracked as a backlog item with its intended complexity, to roughly cap the amount of effort worth investing in it.</p><p>Now with the approximate complexity established, the team can determine whether the item is small enough to fit into a sprint. If it is not, then back to the drawing board. How can we reduce or split it into more items? In our experience, in most cases, a user story could be further simplified and made more atomic to solve the root of the user's problem. Maybe in a less comfortable way for the user, but it is still a valuable solution - remember the Pareto principle. 
The product owner needs the support of the devs to know how “small” a story needs to be, but he must be willing to reduce it, and not resist the splitting process. All of the pieces of the “broken down” stories are then treated as separate items with their own value and cost. But remember, there always needs to be a user value, so do vertical slicing only!</p><p>Then follows the question: “Can't we do something with a better ratio between value and cost instead?” In a similar fashion, the team then checks the rest of the DoR. How are we going to test it? Do we need to figure something out in advance? Is there anything about the UI that we need to think about before we get to planning?</p><p>Have we taken all dependencies into account? <strong>Are we able to start developing it and get it done right away?</strong></p><h2>Let the planning begin!</h2><p>Once all the questions are answered, and both the devs and the product owner feel comfortable and familiar with the top of the backlog, the team can consider itself ready for the planning meeting.</p><p>It is not necessary (and in our case was also not common) for all devs to participate in the refinement process during a sprint. They usually agreed on who would help with the refinement to give the product owner enough support, but also to keep enough devs working on the sprint backlog. At the planning meeting, the devs just reassure themselves that they have understood all the top stories in the same way, recap the approach to the development, distribute the workload and outline a time plan for the sprint.</p><p>The sprint retrospective is also a good time to review the DoR from time to time, in case the team encounters problematic patterns in the refinement process itself.</p><p>Proper and timely backlog refinement will prevent most last-minute backlog changes from happening. In the long run, it will save money and nerves. 
It is also one of the major contributors to the team's morale by making backlog stuff easier to plan and achieve.</p>#scrum;#agile;#project-management;#release-management
Apple developer centre – organized and automated<p> Code signing goes hand in hand with iOS development, whether you wish to build and upload your app to your device, or you just want to upload it to the App Store. If you're new to iOS development and don't want to deal with it right from the start, you can enable automatically managed code signing, which is fine for the time being, but in a team of 50, it becomes rather ineffective. When someone removes their device and invalidates a wildcard development provisioning profile, or accidentally invalidates a distribution certificate, your pipeline will fail out of nowhere, and the robustness of continuous integration and/or deployment suffers as a consequence. </p><p> The right approach for getting rid of human error in any process is to remove humans from the equation. Don't worry, in this case, it just means removing their access to the developer centre. But how do you keep people able to develop their apps on real devices and distribute them to the App Store? </p><h2> It's a Match! Fastlane Match </h2><p> Fastlane and its match don’t need much introduction in the iOS community. It's a handy tool that ensures everyone has access to all development and distribution certificates, as well as profiles, without having access to the dev centre, as match uses git as storage for encrypted files. It offers a <span class="pre-inline">read-only</span> switch that makes sure nothing gets generated or invalidated accidentally. There are two roles in this approach - the admin and the developer. The developer uses match to install whatever is needed at the time of development and sets up CI/CD. He only needs access to the match git repository, not the developer centre. That's where the admin comes in - he is the one responsible for setting up all the devices, provisioning profiles, certificates, and the git repository where all the match magic happens. 
It's good to have at least two admins in case something goes awry while one of them is out of office. </p><h2> Match setup (admin perspective) </h2><p> The idea behind match is pretty simple: you don't have to deal with the developer centre as much, and you can instead focus on having a private git repository set up with all your certificates and provisioning profiles, all properly encrypted, of course. It supports developer and distribution certificates; a single repository can even handle multiple accounts. Match expects a specific folder structure in order to automatically find the matching type of certificates and profiles, but it's pretty straightforward: </p><pre><code class="hljs">|-certs
|--development
|--distribution
|-profiles
|--appstore
|--development
</code></pre><p> The certs folder contains a private key and a public certificate, both encrypted. The profiles folder contains encrypted provisioning profiles. Match works with <span class="pre-inline">AES-256-CBC</span>, so to encrypt the provisioning profile you can use <span class="pre-inline">openssl</span>, which comes pre-installed on macOS. </p><h2> Certificate encryption </h2><p> First, you create a certificate in the dev centre. The certificate’s key is then exported from the keychain to a p12 container, and the certificate itself is exported to a cert file. Match expects the key and the certificate to be in separate files, so don't export them both from the keychain to a single p12 container. You need to pick a passphrase that is used to encrypt and later decrypt certificates and profiles. It's recommended to distribute the passphrase to others in some independent way; storing it in the repository (even though private) would make the encryption useless. 
</p><p> To encrypt the key, run: </p><pre><code class="hljs">openssl aes-256-cbc -k "my_secret_password" -in private_key.p12 -out encrypted_key.p12 -a </code></pre><p> To encrypt the certificate: </p><pre><code class="hljs">openssl aes-256-cbc -k "my_secret_password" -in public_cert.cer -out encrypted_cert.cer -a </code></pre><p> You can have multiple certificates of the same kind (developer or distribution) under one account. To assign a provisioning profile to its certificate you need to use a unique identifier generated and linked to the certificate in the developer centre. The following Ruby script lists all the certificates with their generated identifiers. The identifier is used as a name for the key and for the certificate: </p><pre><code class="ruby hljs">require 'spaceship'

Spaceship.login('')
Spaceship.select_team

Spaceship.certificate.all.each do |cert|
  cert_type = Spaceship::Portal::Certificate::CERTIFICATE_TYPE_IDS[cert.type_display_id].to_s.split("::")[-1]
  puts "Cert id: #{cert.id}, name: #{cert.name}, expires: #{cert.expires.strftime("%Y-%m-%d")}, type: #{cert_type}"
end
</code></pre><h2> Provisioning profiles encryption </h2><p> Provisioning profiles are encrypted in the same way as the certificates: </p><pre><code class="hljs">openssl aes-256-cbc -k "my_secret_password" -in profile.mobileprovision -out encrypted_profile.mobileprovision -a </code></pre><p> Naming is a bit easier: the bundle identifier is prefixed with the profile type, like this (with a placeholder bundle id): </p><pre><code class="hljs">Development_com.example.app.mobileprovision
AppStore_com.example.app.mobileprovision
</code></pre><h2> Good orphans </h2><p> The typical git branching model doesn't make much sense in this scenario. The git repository is used as storage for provisioning profiles and certificates, rather than for its ability to merge one branch into another. It's not unusual to have access to multiple dev centres, for instance, one for the company account, one for the enterprise account, and multiple accounts for the companies you develop and deploy apps for. 
You can use branches for each of those accounts. As those branches have no ambition of merging into each other, you can create orphan branches to keep them clearly separated. Then just use the <span class="pre-inline">git_branch</span> parameter to address them (for both development and distribution): </p><pre><code class="hljs">fastlane match --readonly --git_branch "company" ...
fastlane match --readonly --git_branch "enterprise" ...
fastlane match --readonly --git_branch "banking_company" ...
</code></pre><h2> With great power... </h2><p> As the admin of a team whose developers have no access to the dev centre, you're going to get a lot of questions on how to install certificates and profiles. It's helpful to set up a README in your codesigning repository that describes which apps are stored under which branches, and even includes <a href="">match documentation</a> and fastlane's <a href="">code signing guides</a>. It's also super cool of you to set up an installation script for each project, and put it under version control of said project. Then when a new member joins the team and asks how to set stuff up, you just point them to run <span class="pre-inline">./</span>. </p><h2> Match usage (developer perspective) </h2><p> As a developer, you don't have access to the dev centre. You only need access to the git repository and a few commands to download profiles and install them on your machine. You also need to have your device registered in the account and assigned to the provisioning profile you'd like to use. But since you don't have access, you need to ask the admins to set it up for you, which is a price paid by the admins for the sake of order and clarity. After that, you're all set and can run the commands to install whatever is necessary. The developer is asked for the passphrase the first time the command is run. You can choose to store it in the keychain if you'd like to skip entering it next time. 
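</p><p> The installation script mentioned above doesn't have to be anything fancy. As a sketch (in plain Ruby, with a placeholder branch name and bundle id), it can simply assemble the read-only match commands a given project needs: </p><pre><code class="ruby hljs"># Hypothetical install script for one project. It only builds the match
# commands; "company" and "com.example.app" are placeholders.
def match_command(branch:, app_identifier:, type:)
  "fastlane match --readonly --git_branch \"#{branch}\" " \
    "--app_identifier \"#{app_identifier}\" --type #{type}"
end

profiles = [
  { branch: "company", app_identifier: "com.example.app", type: "development" },
  { branch: "company", app_identifier: "com.example.app", type: "appstore" }
]

# Print the commands; swap puts for system to actually run them.
profiles.each { |args| puts match_command(**args) }
</code></pre><p> New team members then run one script instead of memorizing flags. 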
</p><h2> Development profiles </h2><p> There are but a few inputs to the match command: <span class="pre-inline">git_branch</span> reflects which account the app is registered in, <span class="pre-inline">app_identifier</span> is a bundle identifier of the app, and the others are also quite self-explanatory. If you're not sure which branch to use, you can go one by one and browse the profiles folder to see if the bundle identifier is listed there; it is unique across all accounts, so it should only be in one branch. </p><p> For instance, to install a development profile with a certificate for the bundle id <span class="pre-inline"></span> you'd run: </p><pre><code class="hljs">fastlane match --readonly --git_branch "company" --git_url "" --app_identifier "" --type development </code></pre><p> You can also store a wildcard profile in the match repository, even if it does not have any real bundle identifier. In such a case you can just choose any identifier and use that, for instance <span class="pre-inline">*</span>: </p><pre><code class="hljs">fastlane match --readonly --git_branch "company" --git_url "" --app_identifier "*" --type development </code></pre><h2> Distribution profiles </h2><p> Distribution of the app to the App Store is basically the same as installing development profiles; just change the <span class="pre-inline">type</span> from <span class="pre-inline">development</span> to <span class="pre-inline">appstore</span>: </p><pre><code class="hljs">fastlane match --readonly --git_branch "company" --git_url "" --app_identifier "" --type appstore </code></pre><p> Distribution to the App Store is usually scripted in a Fastfile, which consists of many different actions in addition to match. That is outside the scope of this post and is well explained in other posts on the Internet. 
</p><h2> Conclusion </h2><p> You can clean up your dev centre and avoid certificates/profiles being revoked accidentally by shifting the responsibility to a git-versioned repository using match. You can trick match into thinking that the wildcard provisioning profile belongs to some made-up bundle id in order to store it in git. You can have multiple branches for multiple types of dev centre accounts for an extra level of tidiness. On top of all that, you save your development team a lot of time by distributing the scripts to install whatever they need, and you can make life a bit easier for newcomers as well. </p> #iOS;#code-signing
Building a chatbot, pt. 1: Let's chat<p>A few years ago, a client asked us to create an application that allows its users to create bookings for conference rooms and workspaces. That looks quite easy, right? A few database tables, a thin server, and thick web and mobile applications for a smooth user experience. Almost every company has a solution like that, so it should be fairly easy. But wait, there is a catch! The user interface has to be a chatbot!<br></p><p>That’s a completely different situation. How do we build something like that from scratch? We need to adjust our strategy a bit; we are going to need a thick server, thin web, and mobile applications. To limit the scope of this article, we will focus on the server side.</p><h2>So it begins</h2><p>After a few searches and a fair amount of experimentation, we stumbled across <a href=""><em>NLP - Natural language processing</em></a>. These three words describe a key component of every modern chatbot platform. The chatbot takes ordinary sentences and transforms them into a data structure that can be easily processed further, without all the noise that surrounds the core information.</p><p>Let's look at this example:</p> <img alt="Example of analysis" src="/Blog/PublishingImages/Articles/chatbot-1-01.png" data-themekey="#" /> <p>A simple sentence like this is split into multiple items that can be named and searched for. In this case, the phrase <em>“I need place”</em> is identified as a general intent that can be interpreted as a request for booking. Other items add information to this request. These <em>attributes</em> can carry either simple or complex information. In this example, the word <em>“some”</em> gives us the freedom to select any room from a list of available rooms, and the word <em>“meeting”</em> is interpreted as a request for a meeting room. Those parts were the easiest to classify. 
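</p><p>To make this concrete, the recognizer's output for a sentence like the one above can be pictured as a small structured object. The field names below are made up for illustration - they are not the actual schema of our NLP layer:</p><pre><code class="ruby hljs"># Hypothetical shape of the NLP output (invented field names).
recognized = {
  intent: :booking,                 # from the phrase "I need place"
  attributes: {
    room_selection: :any,           # from the word "some"
    room_type:      :meeting_room   # from the word "meeting"
  }
}
</code></pre><p>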
Time recognition attributes are more complex.</p><p>This is great for identifying atomic attributes in the sentence, but it's still just text. It took us almost a year to put together a comprehensive training data set for our target languages (English and German), but our bot finally understands the vast majority of users' requests. But how do you connect a room number to a specific room entity, a username to a user, or a date description to an actual date?</p><p>For that, we had to build an additional layer. Some of the post-processors would need a whole blog post of their own, but in the end, we managed to get a nice set of domain objects that are used in the bot’s decision-making process. In general, it looks like this:</p> <img alt="Cognitive processor overview" src="/Blog/PublishingImages/Articles/chatbot-1-02.png" data-themekey="#" /> <p>Input sentences are processed by the NLP and each <em>intent</em> or <em>attribute</em> is then passed to an <em>interpreter</em> that creates one or more objects that are used in the conversation flow.</p><p>The most difficult part - the recognition - was solved (or so we thought). NLP gave us a nice structure with multiple items that can be <em>interpreted</em> as simple data objects.</p><h2>Neurons or no neurons, that’s the question</h2><p>The logic for converting recognized data into actions on the database was quite simple at the beginning. We had a few separate, well-defined use cases that were easy to implement. But complexity grew quite rapidly. A few <em>'if'</em>s were not sufficient anymore, so we had to look for a more robust solution.</p><p>After a little bit of research, we found that most of the solutions depend heavily on neural networks. That gives these solutions an edge with multiple short sentences, and general conversations about weather, sport, local natural wonders, etc. This is a robust solution for general use, when the conversations flow naturally from beginning to end. 
Decision-making is hidden in the neural network, which is trained with a sample data set. Neural networks are easy to start with, and adding new features is simple. Let's use it!</p><p>Well, not so fast... In testing, it worked wonders, but as soon as we put it into the hands of test users, we were bombarded with bug reports. There was something we forgot: Real people. Users were giving us only partial information, and we didn't cover every possible angle. We quickly lost control over the conversation flow, with multiple use cases and various responses from the database.</p><p>This was not ideal. If we were aiming for a small-talk bot, a neural network would be a great fit, but we were building a single-purpose bot. Users know exactly why they open a conversation with our bot - they want a reservation. We had to regain control of the conversation flow in the code and get all the information which the app needed from the user. The solution had to be simple, maintainable, testable, and scalable.</p><p>And so we rebuilt the application into a state machine where the bot is in control of the conversation flow. Simply put, it gets a <em>state</em>, in our case a point in the conversation with the user, and a list of information that the user has already given to the bot. Based on this data, it transitions into the next state - the next point in the conversation. </p><p>Users seek to achieve a goal; the bot guides them through the conversation and asks for the information needed. We didn't forget to add a few simple small talk phrases as well, but not many users come to our bot for small talk. The core of the state machine deserves its own blog post, but here is a small example:</p><h2>What does the bot say?</h2><p>Let's say we have a user who needs to book a room for an important meeting next Monday from 9:30 to 10:30. In general, to make such a reservation, you will need a time and room specification. 
This is how the conversation could unfold:</p><p> <strong style="color:#381457;">User</strong> : <em>Book me room E.3.30 for next Monday from 9:30 to 10:30.</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>I have booked room E.3.30 on 1st March from 09:30 to 10:30 for you.</em> </p><p>Quite simple, right? But a conversation with the same goal could also look like this:</p><p> <strong style="color:#381457;">User</strong> : <em>I need a reservation.</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>Do you want me to book you a seat or a meeting room?</em><br> <strong style="color:#381457;">User</strong> : <em>Meeting room.</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>Alright! Would you like to book a specific meeting room or shall I just book one that is free?</em><br> <strong style="color:#381457;">User</strong> : <em>Specific meeting room.</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>What room would that be?</em><br> <strong style="color:#381457;">User</strong> : <em>R.23.</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>For which time shall I reserve the room?</em><br> <strong style="color:#381457;">User</strong> : <em>Next Monday.</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>Can you specify the time for me, please?</em><br> <strong style="color:#381457;">User</strong> : <em>9:30 to 10:30.</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>I have booked room R.23 for 1st March from 09:30 to 10:30 for you.</em> </p><p>In the first example, the user knew exactly what he wanted. In the second conversation, the bot guides the user. These examples are on opposite sides of the conversation spectrum, but we also cover everything in the middle. When the user states the date and time earlier in the conversation, the bot should not ask for it again. 
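</p><p>The core transition logic can be sketched in a few lines of Ruby. This is only an illustration with made-up state and slot names, not our production code: the bot looks at which pieces of information are still missing and picks the next question accordingly.</p><pre><code class="ruby hljs"># Illustrative sketch of the conversation state machine (hypothetical names).
# The bot asks for the first missing piece of information, or completes the
# reservation once everything is known.
REQUIRED_SLOTS = [:room, :date, :time_range]

QUESTIONS = {
  room:       "Do you want to book a specific meeting room or shall I just book one that is free?",
  date:       "For which day shall I reserve the room?",
  time_range: "Can you specify the time for me, please?"
}

def next_step(slots)
  missing = REQUIRED_SLOTS.find { |slot| slots[slot].nil? }
  missing ? QUESTIONS[missing] : "I have booked room #{slots[:room]} for you."
end
</code></pre><p>Each user answer fills one or more slots, and the same function is evaluated again - which is exactly why one flow can cover both extremes above. 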
The main point is that all of these conversations are processed with the same conversation flow (same code, same tests).</p><p>What is neat about this approach is that we can take a part of the conversation and re-use it for multiple intents. For example, time validation can be reused in any conversation where a time specification is needed.</p><p>There is one part of the example that I've excluded, and that's the access to the reservation system itself. Here we simply save the request and call it a day, but in everyday use, there are some limitations - the reservation may very well be refused. All of these possibilities have to be covered, and users have to be properly informed. Again, how to do that is a topic for a whole new blog post.</p><h2>Conclusion</h2><p>As you can see, there are a number of topics to consider when building a chatbot from scratch: from NLP to decision making, to actions in the reservation system, and finally to the answers.</p><p>Thanks to rigorous testing and a clear framework, we are not blocked by bloated training data sets, and multiple devs can develop independently of each other.</p><p>Currently, our application can process multiple base intents like <em>show</em>, <em>cancel</em>, <em>check</em> or <em>book</em> in English and German. Based on these intents, the bot can give the user up to 300 different conversations with multiple responses. More conversations and variations are still in development and we hope to reach 500 in the near future. Our system is currently used by more than 1400 users and on average 2000 interactions happen every week.</p> #chatbot;#ai;#neural-network
Questions to ask before choosing mobile app technology<p>Embarking on a new project is exciting. So many possibilities, so many choices! But you'd better get them right from the start; otherwise, your project might suffer in the long run.</p><p>Choosing a platform to build your mobile app can be a daunting task. For some apps, a simple responsive web or PWA will suffice, whereas for others only native solutions will do. And there’s of course a range of popular cross-platform or hybrid technologies like Xamarin, React Native, Flutter, or Kotlin Multiplatform, to name a few.</p><p>Evaluating all these alternatives is difficult. There are no universally right or wrong answers, but to make the decision easier, we offer you a list of questions that, when answered, will help you make the right choice.</p><h2>Lifespan</h2><ol><li><strong>What is the planned lifetime of your app?</strong> Short-lived marketing or event apps have different requirements than apps that need to live happily for years. </li><li><strong>What is more important: Time to market, or sustainable development over time?</strong> Sometimes quick’n’dirty solutions make perfect business sense, sometimes they are poison. </li><li><strong>Will the chosen technology still exist when your app approaches the end of its life?</strong> Obsolete or abandoned technology will severely hinder your ability to support and expand your app. </li><li><strong>Will the technology be supported by its authors? Will it be supported on target platforms?</strong> Open source technology can theoretically be maintained by anybody; however, in practice, the majority of work often rests on a surprisingly small number of individuals. 
</li><li><strong>How will the technology evolve over time?</strong> There is a significant difference between a technology that the authors primarily develop to serve their own needs (even if it’s open-sourced), and a technology that is truly meant as a general-purpose tool. </li><li><strong>Is there a risk of vendor lock-in?</strong> If the technology is currently free to use, will it still be free in the future? What is the cost of moving to an alternative solution? </li></ol><h2>Runtime</h2><ol start="7"><li><strong>What runtime environment does the app need?</strong> The app may be compiled to native code, it may need bridges, wrappers, interpreters, etc. Those can differ wildly in various regards, sometimes by an order of magnitude. </li><li><strong>How is the performance?</strong> Nobody wants sluggish, janky apps.</li><li><strong>Is it stable?</strong> Frequent crashes destroy an app's reputation quickly.</li><li><strong>How big are deployed artifacts? Do they need to be installed?</strong> A complicated or slow installation process lowers the chances that users will even <em>launch</em> your app, while every extra megabyte increases churn. </li></ol><h2>UI</h2><ol start="11"><li><strong>Does the technology use native components, or does it draw its own? Can the user tell the difference?</strong> Non-native components may look similar, but users are surprisingly sensitive to even small inconsistencies. </li><li><strong>Does it respect the look’n’feel of each platform?</strong> You don’t want your app to look unintentionally alien on the target platform. </li><li><strong>Are all platform-specific components available?</strong> Custom UI components often demand a lot of work and if many are not available, your app can get very expensive, very quickly. 
</li><li><strong>How difficult is it to create custom components?</strong> Even if all platform components are available, there will be times when you’ll need to create your own—and it needs to be reasonably effective to do so. </li><li><strong>How difficult is it to create animations?</strong> When done right, animations are a crucial part of the UX, but implementing animations can sometimes be exceedingly difficult. </li><li><strong>How are the components integrated with the target system?</strong> Appearances are not everything—you also need to consider things like gestures, accessibility, support for autocomplete, password managers, etc. </li></ol><h2>Compatibility and interoperability</h2><ol start="17"><li><strong>What level of abstraction does the technology bring?</strong> Some try to completely hide or unify the target platforms, some are very low-level. Both can be good, or bad. </li><li><strong>Which system functionalities does it support directly?</strong> UI is not everything—chances are your app will need to support at least some of the following things: biometry, cryptography, navigation, animations, camera, maps, access to user’s contacts or calendar, OCR, launcher widgets, mobile payment systems, AR/VR, 3D rendering, sensors, various displays, wearables, car, TV, … </li><li><strong>How difficult is it to access native APIs?</strong> Every abstraction is leaky. There will come a time when you’ll need to interact with the underlying platform directly. The difficulty to do so can vary greatly. </li><li><strong>Are cutting-edge platform features available right away?</strong> Especially when using bridges or wrappers, support for the latest features can be delayed. </li><li><strong>What other platforms does the technology support?</strong> The ability to run your app on other platforms can sometimes be very advantageous, just keep in mind that the extra investment required can vary. 
</li></ol><h2>Paradigm and architecture</h2><ol start="22"><li><strong>How steep is the learning curve?</strong> Your team needs to be up-and-running in a reasonable amount of time. </li><li><strong>How rigid is the technology?</strong> Some frameworks try to manage everything—painting by the numbers can be simple and effective, but at the same time, it may limit your ability to implement things for which the framework doesn’t have first-class support. On the other hand, libraries may be more difficult to wire together, but they grant you greater freedom. </li><li><strong>How distant is the given paradigm from the default way of doing things?</strong> Nonstandard or exotic approaches can steepen the learning curve significantly. </li><li><strong>Is the technology modular? On what levels?</strong> Usually, you need the ability to slice the app across various boundaries (e.g., features, layers), and at various levels (e.g., code, compilation, deployment, etc.). </li><li><strong>How does it scale?</strong> Nowadays, even mobile apps can easily grow to hundreds of screens, and the app mustn’t crumble under that weight for both its developers and users. </li></ol><h2>Tooling</h2><ol start="27"><li><strong>Is there an official IDE? What does it cost? Can it be extended with plugins?</strong> Developer productivity is paramount, and the best tools pay for themselves quickly. </li><li><strong>Which build system does the technology use?</strong> There are many of them, but they’re not all equally simple to use, fast, or extendable. </li><li><strong>How is the CI/CD support?</strong> It needs to integrate smoothly with your CI/CD system of choice. </li><li><strong>What about testing, debugging, instrumentation, or profiling?</strong> Your developers and QA people need to be able to quickly dissect your app to identify and fix potential problems. 
</li><li><strong>How mature and effective are the tools?</strong> Your developers should focus on your app, they shouldn’t be fighting the tools. </li><li><strong>Does the technology support hot reload, or dynamic feature modules?</strong> These features usually greatly enhance developer productivity. </li></ol><h2>Ecosystem</h2><ol start="33"><li><strong>Is the technology open source?</strong> There are countless advantages when it is. </li><li><strong>What is the availability, quality, and scope of 3rd party libraries?</strong> The ability to reuse existing, well-tested code can make or break projects. </li><li><strong>Is the official documentation up-to-date, complete, and comprehensive?</strong> While learning about particular technology by trial and error can be fun, it certainly isn’t effective. </li><li><strong>Do best practices exist?</strong> If there are many ways to do a thing, chances are some of them will end up with your developers shooting themselves in the foot. </li><li><strong>How accessible is community help? Are there blog posts, talks, or other learning materials?</strong> Search StackOverflow, or try to find newsletters, YouTube channels, podcasts, or conferences dedicated to the technology in question. </li><li><strong>Are consultants available if needed?</strong> Some of them are even helpful.</li><li><strong>What is the overall community sentiment towards the technology?</strong> Dedicated fans are a good sign, but be careful not to fall for marketing tricks. </li><li><strong>Do other similar organizations have experience with the technology?</strong> Learn from the successes and mistakes of others. </li></ol><h2>Human resources</h2><ol start="41"><li><strong>What primary programming language does the technology rely on?</strong> It isn’t enough that developers are able to <em>edit</em> source files to make the machine do something—they need to be able to write idiomatic and expressive code that can be read by human beings. 
</li><li><strong>Do you already have suitable developers?</strong> Why change a whole team, when you might already have a stable, well-coordinated one? </li><li><strong>Will mobile developers be effective using the language?</strong> There could be great friction when switching developers from one language to another, especially when the new language is significantly different (e.g., statically vs. dynamically typed, compiled vs. interpreted, etc.). </li><li><strong>Will non-mobile developers be effective on mobile platforms?</strong> For example, some technologies try to port web frameworks to mobile platforms, so it might look like a good idea to assign web developers to the project—but the reality is not that simple. </li><li><strong>What is the current market situation? What is the market profile of available developers?</strong> You usually need a suitable mix of junior and senior developers, but they might not be easy to find, or their cost might not be economically feasible. </li></ol><h2>Existing codebase</h2><ol start="46"><li><strong>Do you already have some existing code?</strong> Rewriting from scratch is tempting, but it isn’t always a good idea. </li><li><strong>What have you invested in it so far?</strong> It may be very cheap to throw away, or it may represent a major asset of your organization. </li><li><strong>What is its value to your organization?</strong> It may earn or save you a ton of money, or it may be a giant liability. </li><li><strong>How big is the technical debt?</strong> The value of unmaintainable code is not great, to put it mildly. </li><li><strong>Can it be maintained and evolved?</strong> The software must be, well, soft. If yours is rigid, again, its value is not that great. </li><li><strong>Can it be transformed piece-by-piece?</strong> Some technologies allow gradual migration, some are all-or-nothing propositions. </li></ol><h2>Final questions</h2><p>Each app has different needs, and there will always be tradeoffs. 
In the end, you’ll need to prioritize the various viewpoints implied by the aforementioned questions.</p><p>Which qualities are most important for your project? Which properties bring you opportunities? Which increase risk?</p><p>When you put the alternatives into the right perspective, you certainly have a much better chance at success. May your apps live long and prosper!</p>#project-management;#android;#iOS
Scrum smells, pt. 3: Panic-driven bug management<p>Bugs create a special atmosphere. They often cause a lot of unrest or outright panic. But does it have to be that way?</p><p>Nearly every developer out there has come across the following scenario: The development team is working on the sprint backlog when suddenly the users report an incident. The marketing manager comes in and puts pressure on the development team or their product owner to urgently fix the bug. The team feels guilty, so some of the developers stop working on whatever they've been doing and focus on fixing the bug. They eventually succeed, and now the testers shift their focus as well to verify the fix as soon as possible, so the developers can release a hotfix. The hotfix is deployed, the sprint passes by, and the originally planned sprint backlog is only half-done. Everyone is stressed out.</p><p>A similar situation is often created by a product owner: He finds a defect in functionality created two sprints ago but demands an immediate fix.</p><p>Is this all really necessary? Sure, some issues have a great impact on the product or service, and then this approach might be justifiable, but rather often this kind of urgent defect whacking is a process that is more emotional than rational. So how can we treat bugs systematically?</p><h2>What are bugs and bug fixes?</h2><p>A defect, incident, or simply a “bug” is effectively any deviation of the existing product from its backlog. Any behavior that is different from the one agreed upon between the dev team and a product owner can be called a bug. Bugs aren’t only defects in the conventional meaning (e.g., crashes or computational errors); a technically correct behavior in conflict with a boundary set by a user story can also be considered a defect.</p><p>Some bugs are related to the product increment being implemented in the current sprint. 
Other bugs are found retrospectively: They are related to the user stories developed in past sprints. These fall into two categories:</p><ol><li>Regressions: When subsequent development broke a formerly functional part of the code. </li><li>Overlooked bugs: They were always there, but no one had noticed.</li></ol><p>Conversely, a bug fix is something that adds value to the current product by lowering the above-mentioned deviation: it requires a certain amount of effort and raises the value of the present product. At the end of the day, a bug is just another unit of work, and we can evaluate its cost/benefit ratio. It is the same as any other backlog item.</p><h2>A bit of psychology</h2><p>Scrum teams and stakeholders tend to approach both defect categories differently. They also treat them differently than the “regular” backlog items.</p><p>In my experience, there are two important psychological factors influencing the irrational treatment of defects.</p><p>First of all, there's often a feeling of guilt when a developer is confronted with a bug. The natural response of most people is to try to fix the error as soon as possible so that they feel they are doing a good job. Developers naturally want to get rid of such debts.</p><p>Another factor is how people perceive gains and losses. People are evolutionarily averse to losses because the ability to obtain and preserve resources has always been key to survival. There have been studies concluding that on average, people perceive a loss four times as intensely as a gain of the same objective value: If you lose 5 dollars, the pain is four times as intense as the gratification of finding 5 dollars lying on the ground. You need to find 20 dollars to experience a feeling of comparable intensity to losing the aforementioned 5. The bug/defect/incident is perceived as a loss for the team's product, especially if it's a regression. 
A small bug can therefore be perceived as much more important than a newly delivered valuable feature.</p><p>Don't get me wrong—I am not saying that bugs are not worth fixing or that they don't require any attention. That is obviously not true. One of the key principles of scrum is to deliver a functional, <em>potentially releasable</em> product increment in every sprint. That means that high development quality is fundamental and teams should always aim at developing a debt-free product. Nonetheless, bugs will always have to be dealt with.</p><h2>Bugs caused by newly added code</h2><p>When working on a sprint backlog, the team needs to set up a system to validate the increment they’ve just developed. The goal is to make sure that at the end of the sprint, a feature is free of debt, and can be potentially released. Our experience shows that during sprint backlog development, the team should focus on removing any bugs related to the newly developed features quickly, keeping the feedback/verification loop as short as possible. This approach maximizes the probability that a newly developed user story is done by the end of the sprint and that it is potentially releasable.</p><p>Sometimes there are just too many bugs and it becomes clear that not everything planned in the sprint backlog can be realistically achieved. The daily scrum is the opportunity to point this out. The development team and the product owner together can then concentrate their efforts on a smaller number of in-progress user stories (and related bugs). It is always better to get one user story done by the end of the sprint than to have ten stories halfway finished. Of course, all bugs should be recorded transparently in the backlog.</p><p>Remember, a user story is an explanation of the user's need that the product tackles, together with a general boundary within which the developed solution must lie. 
A common pitfall is that the product owner decides on the exact way of developing a user story (e.g., defines the exact UI or technical workflow) and insists on it, even though it is just her personal preference. This approach not only reduces the development team's options to come up with the most effective solution but also inevitably increases the probability of a deviation, thus increasing the number of bugs as well.</p><h2>Regressions and bugs related to past development</h2><p>I think it's important to treat bugs (or rather their fixes) introduced before the current sprint as regular backlog items and prioritize them accordingly. Whenever an incident or regression is discovered, it must go into the backlog and decisions need to be made: What will be the benefit of that particular bug fix compared to other backlog items we can work on? Has the bug been introduced just now, or have the users already lived with it for some time without us knowing? Do we know the root cause and are we able to estimate the cost needed to fix it? If not, how much effort is it worth putting into that particular bug fix, so that the cost/benefit ratio is still on par with other items on the top of the backlog?</p><p>By following this approach, other backlog items will often be prioritized over the bug fix, which is perfectly fine. Or the impact of the bug might be so negligible that it's not worth keeping it in the backlog at all. One of the main scrum principles is to always invest the team's capacity in stuff that has the best return on invested time/costs. When the complexity of a fix is unknown, we have had good experience with putting a limit on the invested capacity. For instance, we would say that at that moment, a particular bug fix was worth investing 5 story points. If the developers managed to fix the issue, great. If not, it was abandoned and re-prioritized with this new knowledge. 
By doing this, we mitigated situations in which developers dwell on a single bug for weeks without being able to fix it.</p><p>I think keeping a separate bug log greatly hinders transparency, and it’s a sign that a product owner gives up on making decisions that really matter and refuses to admit the reality.</p><h2>Final words</h2><p>I believe all backlog items should be approached equally. A bug fix brings value in a similar way as new functionality does. By keeping bug fixes and new features in one common backlog and constantly questioning their cost/benefit ratio, we can keep the team going forward, and ensure that critical bugs don't fall through.</p>#scrum;#agile;#project-management;#release-management
Jetpack Compose: What you need to know, pt. 2<p>This is the second and final part of the Jetpack Compose series that combines curious excitement with a healthy dose of cautious skepticism. Let’s go!</p><h2>Ecosystem</h2><p><strong>Official documentation doesn’t cover enough.</strong></p><p>That’s understandable in this phase of development, but it absolutely needs to be significantly expanded before Compose hits 1.0.</p><p>On top of that, Google is once again getting into the bad habits of 1) mistaking developer marketing for advocacy and 2) scattering useful bits of information across <a href="">official docs</a>, KDoc, semi-official <a href="">blogs</a>, <a href="">code samples</a>, or other sources with unknown relevance. Although these can be useful, they’re difficult to find and are not usually kept up-to-date. </p><p><strong>Interoperability is good.</strong></p><p>We can use <a href="">legacy Views</a> in our Compose hierarchy and composables as <a href="">parts</a> of View-based UIs. It works, and we can migrate our UIs gradually. This feature is also important in the long term, as I wouldn’t expect a Compose version of WebView or MapView written from scratch any time soon, if ever.</p><p>Compose also plays nicely with other libraries—it integrates well with Jetpack <a href="">ViewModel</a>, <a href="">Navigation</a>, or <a href="">reactive streams</a> (LiveData, RxJava, or Kotlin Flow—<a href="">StateFlow</a> is especially well suited for the role of a stream of states coming from the view model to the root composable). Popular 3rd party libraries such as <a href="">Koin</a> also have support for Compose.</p><p>Compose also gives us additional options. Its simplicity opens the door to approaches that weren’t practical before. 
For example, it is entirely possible to completely get rid of fragments and/or Jetpack Navigation (although in this case, I think one vital piece of the puzzle is still missing—our DI frameworks need the ability to create scopes tied to composable functions), but of course you don’t have to. Choose what’s best for your app.</p><p>All in all, the future of the Compose ecosystem certainly looks bright.</p><p><strong>Tooling is a work in progress, but the fundamentals are already done.</strong></p><p>Compose alphas basically require <a href="">canary builds of Android Studio</a>, which are expected to be a little bit unstable and buggy. Nevertheless, specifically for Compose, the Android tooling team has already added custom syntax and error highlighting for composable functions, a bunch of live templates, editor intentions, inspections, file templates, and even color previews in the gutter (Compose has its own color type).</p><p>Compose also supports <a href="">layout previews</a> in the IDE, but these are more cumbersome than their XML counterparts. A true hot reload doesn’t seem to be possible at the moment.</p><p>The IDE also sometimes struggles when a larger file with lots of deeply nested composable functions is opened in the editor. That said, the tooling won’t hinder your progress in a significant way.</p><p><strong>UI testing is perhaps more complicated than it was with the legacy toolkit.</strong></p><p>In Compose, there are no objects with properties in the traditional sense, so to facilitate UI tests, Compose (mis)uses its accessibility framework to expose information to the tests. 
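To give a rough idea of what this looks like in practice, here is a minimal sketch of a Compose UI test. The API names come from the androidx.compose.ui.test artifacts as they exist in the current alphas (and may well change before 1.0); the Counter composable is a hypothetical example defined here just for illustration.

```kotlin
import androidx.compose.material.Button
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.ui.test.assertIsDisplayed
import androidx.compose.ui.test.junit4.createComposeRule
import androidx.compose.ui.test.onNodeWithText
import androidx.compose.ui.test.performClick
import org.junit.Rule
import org.junit.Test

// A hypothetical composable under test.
@Composable
fun Counter() {
    var count by remember { mutableStateOf(0) }
    Button(onClick = { count++ }) {
        Text("Clicked $count times")
    }
}

class CounterTest {
    @get:Rule
    val composeTestRule = createComposeRule()

    @Test
    fun clickIncrementsCounter() {
        composeTestRule.setContent { Counter() }
        // Nodes are matched through the semantics (accessibility) tree,
        // not through view IDs or object references.
        composeTestRule.onNodeWithText("Clicked 0 times").performClick()
        composeTestRule.onNodeWithText("Clicked 1 times").assertIsDisplayed()
    }
}
```

Note how the test finds nodes by their semantics (here, the displayed text) rather than by querying view objects directly.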
</p><p>To be honest, it all feels a little bit hacky, but at least we have support for running the tests on the JUnit 4 platform (with the help of a custom rule), <a href="">Espresso-like APIs</a> for selecting nodes and asserting things on them, and a helper function to print the UI tree to the console.</p><p>The situation is thus fairly similar to the legacy toolkit, and so is my advice: Mind the <a href="">test pyramid</a>, don’t rely too much on UI tests, and structure your app in such a way that the majority of the code can be tested by simple unit tests executed on the JVM.</p><h2>Performance and stability</h2><p><strong>Build speeds can be surprising.</strong></p><p>In a good way! One would think that adding an additional compiler to the build pipeline would slow things down (and on its own, it would), but Compose replaces the legacy XML layout system, which has its own performance penalties (parsing XMLs, compiling them as resources, etc.). </p><p>It turns out that even now, when Compose is still at a very early stage of development, the build time of a project written with Compose is at least comparable to the legacy UI toolkit version—and it might be even faster, as measured <a href="">here</a>. </p><p><strong>Runtime performance is a mixed bag.</strong></p><p>UIs made with Compose can be laggy sometimes, but this is totally expected since we are still in alpha. Further optimizations are promised down the line, and because Compose doesn’t come with the burden of <a href="">tens of thousands of LOC</a> full of compatibility hacks and workarounds in each component, I hope someday Compose will actually be faster than the legacy toolkit.</p><p><strong>It crashes (it’s an alpha, I know).</strong></p><p>In my experience, Compose crashes both at compile time (the compiler plugin) and at runtime (usually because of a corruption of Compose’s internal data structure called “slot table”, especially when animations are involved). 
When it does crash, it leaves behind a very, very long stack trace full of synthetic methods, and usually a totally unhelpful one at that. </p><p>We definitely need special debugging facilities for Compose (similar to what coroutines have), and yes, I know, the majority of these bugs will be ironed out before 1.0. The thing is, Compose simply must be reliable and trustworthy at runtime because we are not used to hard crashes from our UI toolkit—for many teams, that would be an adoption blocker. </p><h2>Expectations</h2><p><strong>Compose is meant to be the primary UI toolkit on Android.</strong></p><p>Several Googlers confirmed that if nothing catastrophic happens, this is the plan. Of course, it will take years, and as always, it won’t be smooth sailing all the way, but Google and JetBrains are investing heavily in Compose.</p><p><strong>Compose is no silver bullet.</strong></p><p>Yes, Compose in many ways simplifies UI implementation and alleviates a significant number of pain points of the legacy UI toolkit.</p><p>At the same time, it’s still possible to repeat some horrible old mistakes regarding Android’s lifecycle (after all, your root composable must still live in some activity, fragment, or view), make a huge untestable and unmaintainable mess eerily similar to the situation when the whole application is written in one single Activity, or even invent completely new and deadly mistakes.</p><p>Compose is <em>not</em> an architecture. Compose is just a UI framework and as such it must be isolated behind strict borders. </p><p><strong>Best practices need to emerge.</strong></p><p>Compose is architecture-agnostic. It is well suited to clean architecture with MVVM, but that certainly isn’t the only possible approach, as is evident from the <a href="">official samples repo</a>. 
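As a sketch of one possible MVVM wiring (not the only one, and not an official pattern): the view model exposes a StateFlow of immutable view states, and the root composable simply renders the current state. The names MainViewState, MainViewModel, and MainScreen are illustrative, not taken from any sample.

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.collectAsState
import androidx.compose.runtime.getValue
import androidx.lifecycle.ViewModel
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow

// An immutable view state: the single source of truth for the screen.
data class MainViewState(
    val loading: Boolean = true,
    val items: List<String> = emptyList(),
)

class MainViewModel : ViewModel() {
    private val _state = MutableStateFlow(MainViewState())
    val state: StateFlow<MainViewState> = _state
    // Business logic lives here and pushes new immutable states into _state.
}

@Composable
fun MainScreen(viewModel: MainViewModel) {
    // The composable is (ideally) a pure function of the view state.
    val state by viewModel.state.collectAsState()
    if (state.loading) {
        Text("Loading…")
    } else {
        Column {
            state.items.forEach { item -> Text(item) }
        }
    }
}
```

The appeal of this shape is that all the interesting logic stays in the view model, where it can be unit tested on the JVM, while the composable remains a thin rendering layer.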
However, in the past, certain ideas proved themselves better than others, and we should think very carefully about those lessons and our current choices.</p><p>Just because these are official samples by Google (or by anyone else for that matter) doesn’t mean you should copy them blindly. We are all new to this thing and as a community, we need to explore the possibilities before we arrive at a set of reasonable, reliable, and tried-and-proven best practices.</p><p>Just because we can do something doesn’t mean we should.</p><p><strong>There are a lot of open questions.</strong></p><p>The aforementioned official samples showcase a variety of approaches, but in my book, some are a little bit arguable or plainly wrong. For example, ask yourself: </p><p>How should the state be transformed while being passed through the tree, if at all? How should internal and external states be handled? How smart should the composable functions be? Should a view model be available to any composable function directly? And what about repositories? Should composable functions have their own DI mechanism? Should composable functions know about navigation? And data formatting, or localization? Should they handle process death themselves? The list goes on.</p><p><strong>Should you use it in production?</strong></p><p>Well, it entirely depends on your project. There are several important factors to consider:</p><ul><li>The APIs are still in alpha and will change, sometimes significantly. Can you afford to rewrite big parts of your UI, perhaps several times? </li><li>There are features missing. This situation will get better over time, but what you need now matters the most. </li><li>Runtime stability might be an issue. You can work around some things, but there’s no denying that Compose right now is less stable than the legacy toolkit. </li><li>What is the lifespan of your application? 
If you’re starting an app from scratch next week, with plans to release v1.0 in 2022 and support it for 5 years, then Compose might be a smart bet. Another good use might be for proof of concept apps or prototypes. But should you rewrite all your existing apps in Compose right now? Probably not. </li></ul><p>As always with new technology, all these questions lead us to these two: Are you an early adopter? Can you afford to be?</p><h2>Under the hood</h2><p><strong>Compose is very cutting edge (and in certain aspects quite similar to how coroutines work).</strong></p><p>In an ideal world, no matter how deeply composable functions were nested and how complex they were, we could call them all on each and every frame (that’s 16 milliseconds on 60 FPS displays, but faster displays are becoming more prevalent). However, hardware limitations of real-world devices make that infeasible, so Compose has to resort to some very intricate optimizations. At the same time, Compose needs to maintain an illusion of simple nested function calls for us developers.</p><p>Together, these two requirements result in a technical solution that’s as radical as it is powerful—changing language semantics with a custom Kotlin compiler plugin.</p><p><strong>The Compose compiler and runtime are actually very interesting, general-purpose tools.</strong></p><p>Kotlin functions annotated with @Composable behave very differently from normal ones (as is the case with suspending functions). 
This is possible thanks to the <a href="">IR code</a> being generated for them by the compiler (Compose uses the Kotlin IR compiler backend, which itself is in alpha).</p><p>The Compose compiler tracks input argument changes, inner states, and other bookkeeping in an internal data structure called the <em>slot table</em>, with the intention of executing only the necessary composable functions when the need arises (in fact, composable functions can be executed in any order, in parallel, or even not at all).</p><p>As it turns out, there are other use cases where this is very useful—composing and rendering UI trees is just one of them. The Compose compiler and runtime can be used for <a href="">any programming task</a> where working efficiently with tree data structures is important.</p><p><strong>Compose is the first big sneak peek at Kotlin’s exciting future regarding compiler plugins.</strong></p><p>Kotlin compiler plugins are still very experimental, with the API being unstable and mostly undocumented (if you’re interested in the details, read <a href="">this blog series</a> before it becomes obsolete), but eventually the technology will mature—and when it does, something very interesting will happen: Kotlin will become a language with more or less stable, fixed <em>syntax</em>, and vastly changeable, explicitly pluggable <em>behavior</em>.</p><p>Just look at what we have at our disposal even now, when the technology is in its infancy: There is Compose, of course (with a <a href="">desktop port</a> in the works), a plugin to <a href="">make classes open</a> to play nice with certain frameworks or tests, <a href="">Parcelable generator</a> for Android, or <a href="">exhaustive when for statements</a>, with <a href="">more plugins</a> coming in the future.</p><p>Last but not least, I think that the possibility of modifying the language with external, independent plugins will lower the pressure on language designers, reducing the risk of bloating the language—when part of the community 
demands some controversial feature, why not test-drive it in the form of a compiler plugin first?</p><h2>Final words</h2><p>Well, there you have it—I hope this series helped you to create an image of Compose in your head that is a little bit sharper than the one you had before. Compose is certainly going to be an exciting ride!</p>#android;#jetpack;#compose;#ui