


Scrum smells, pt. 4: Dreadful planning<p>In a few of our past projects, I encountered a situation that might sound familiar to you: Developers are getting towards the end of a sprint. The product owner seems to have sorted the product backlog a bit for the sprint planning meeting - he changed the backlog order somewhat and pulled some items towards the top because he currently believes they should be added to the product rather soon. He added some new things as well because the stakeholders demand them. In the meantime, the team works on the development of the sprint backlog. The sprint ends, the team does the end-of-sprint ceremonies, and off to planning we go.</p><p>At the planning meeting, the team sits down to what seems to be a groomed backlog. They go through the top backlog items with the product owner, who explains what he has prioritized. The team members try to grasp the idea and technical implications of the backlog items and try their best to plan them for development. But they find out that one particular story is very complex and can't fit within a sprint, so they negotiate with the product owner about how to meaningfully break it down into several smaller pieces. Another item has a technical dependency on something that has not been done yet. The third item has a functional dependency - meaning it won't work meaningfully unless a different story gets developed. The fourth item requires a technology that the developers haven’t had enough experience with. Therefore, they are unable to even remotely tell how complex it is. And so on it goes - the team members dig through the “prepared” backlog, try to wrap their heads around it, and finally find out that they can't work on every other story for some reason.</p><p>One possible outcome is that such items are skipped, and only the items that the team feels comfortable with are planned into the sprint backlog.
Another outcome is that they will want to please the product owner and “try” to do the stuff somehow. In any case, the planning meeting will take hours and will be a very painful experience.</p><p>In both cases, the reason is poor planning. If there ever was a planned approach by the product owner towards the backlog prior to the planning meeting, it was naive, and now it either gets changed vastly, or it gets worked on with many unknowns - making the outcome of the sprint a gamble.</p><h2>What went wrong?</h2><p>One might think all the planning occurs exclusively at the planning meeting. Why else would it be called a planning meeting? Well, that is only half true. The planning meeting serves for the team to agree on a realistic sprint goal, discuss with the product owner what can or cannot be achieved within the upcoming sprint, and create a plan of attack. Team members pull the items from the top of the backlog into the sprint backlog in a way that best serves that goal. It is a ceremony that actually starts the sprint, so the team sets off developing the stuff right away.</p><p>In order to create a realistic sprint plan that delivers a potentially releasable product increment with a reasonable amount of certainty, there has to be enough knowledge and/or experience with what you are planning. The opposite approach is called gambling.</p><h2>Definition of ready</h2><p>It is clear that the backlog items need to fulfill some criteria before the planning meeting occurs. These criteria are commonly referred to as a “definition of ready” (DoR). Basically, it is a set of requirements defined by the development team, which each backlog item needs to meet if the product owner expects it to be developed in upcoming sprints.
In other words, the goal of DoR is to make sure a backlog item is immediately actionable, the developers can start developing it, and it can realistically be finished within a sprint.</p><p>We had a good experience with creating DoR with our teams. However, we also found that this looks much easier at first glance than it is in practice. But I believe it is definitely worth the effort, as it will make predictions and the overall workflow so much smoother.</p><p>DoR is a simple set of rules which must be met before anyone from the scrum team can say “we put this one into the sprint backlog”. They may depend on the particular product or project, and they can be both technical and business-oriented in nature, but I believe there are several universal aspects to them as well. Here are some of our typical criteria for determining if a backlog item satisfies the DoR:</p><ul><li>Item has no technical or business dependencies.</li><li>Everyone from the team understands the item's meaning and purpose completely.</li><li>We have some idea about its complexity.</li><li>It has a very good cost/benefit ratio.</li><li>It is doable within one sprint.</li></ul><p>There are usually more factors (such as a well-written story definition, etc.), but I picked the ones that made us sweat the most to get right.</p><h2>Putting backlog refinement into practice</h2><p>This is a continuous and never-ending activity, whose sole goal, in my opinion, is getting the DoR fulfilled. As usual, the goal is simple to explain, but in practice not easy to achieve. Immature teams usually see refinement activities as a waste of time and a distraction from the “real work”. Nonetheless, our experience has proven many times that if we don't invest sufficient time into the refinement upfront, it will cost us dearly not much later in the development.</p><p>So, during a sprint, preparing the ground for future sprints is a must.
The development team must take this into account when planning the sprint backlog. Refinement activities will usually occupy a non-negligible portion of the team's capacity.</p><p>The product owner and the team should aim at having at least a sprint or two worth of stuff in the backlog which meets the DoR. That means there needs to be a continuous discussion about the top of the backlog. The rest of the scrum team should challenge the product owner to make sure nothing gets left there just “because”. Why is it there? What is its purpose and value in the long term?</p><p>Once everyone sees the value, it is necessary to evaluate the cost/benefit ratio. The devs need to think about roughly how complex it will be to develop such a user story. In order to do that, they will need to work out a general approach for the actual technical implementation and identify its prerequisites. If they are able to figure out roughly what the size is, even better.</p><p>However, from time to time, the devs won't be able to estimate the complexity, because the nature of the problem will be new to them. In such cases, our devs usually assigned someone to do research on the topic and roughly map the uncharted area. The knowledge gained was then used to size the item (and also later on, in the actual development). This research work is also tracked as a backlog item with its intended complexity, to roughly cap the amount of effort worth investing into it.</p><p>Now with the approximate complexity established, the team can determine whether the item is small enough to fit within a sprint. If it is not, then back to the drawing board. How can we reduce or split it into more items? In our experience, in most cases, a user story could be further simplified and made more atomic to solve the root of the user's problem. Maybe in a less comfortable way for him, but it is still a valuable solution - remember the Pareto principle.
The product owner needs the support of the devs to know how “small” a story needs to be, but he must be willing to reduce it, and not resist the splitting process. All of the pieces of the “broken down” stories are then treated as separate items with their own value and cost. But remember, there always needs to be user value, so do vertical slicing only!</p><p>Then follows the question: “Can't we do something with a better ratio between value and cost instead?” In a similar fashion, the team then checks the rest of the DoR. How are we going to test it? Do we need to figure something out in advance? Is there anything about the UI that we need to think about before we get to planning?</p><p>Have we taken all dependencies into account? <strong>Are we able to start developing it and get it done right away?</strong></p><h2>Let the planning begin!</h2><p>Once all the questions are answered, and both the devs and the product owner feel comfortable and familiar with the top of the backlog, the team can consider itself ready for the planning meeting.</p><p>It is not necessary (and in our case was also not common) for all devs to participate in the refinement process during a sprint. They usually agreed on who is going to be helping with the refinement to give the product owner enough support, but also to keep enough devs working on the sprint backlog. At the planning meeting, the devs just reassure themselves that they have all understood the top stories in the same way, recap the approach to the development, distribute the workload and outline a time plan for the sprint.</p><p>The sprint retrospective is also a good time to review the DoR from time to time, in case the team encounters problematic patterns in the refinement process itself.</p><p>Proper and timely backlog refinement will prevent most last-minute backlog changes from happening. In the long run, it will save money and nerves.
It is also a major contributor to the team's morale, as it makes backlog items easier to plan and achieve.</p>#scrum;#agile;#project-management;#release-management
Apple developer centre – organized and automated<p> Code signing goes hand in hand with iOS development, whether you wish to build and upload your app to your device, or you just want to upload it to the App Store. If you're new to iOS development and don't want to deal with it right from the start, you can enable automatically managed code signing, which is fine for the time being, but in a team of 50, it becomes rather ineffective. When someone removes their device and invalidates a wildcard development provisioning profile, or accidentally invalidates a distribution certificate, your pipeline will fail out of nowhere, and the robustness of continuous integration and/or deployment suffers as a consequence. </p><p> The right approach for getting rid of human error in any process is to remove humans from the equation. Don't worry, in this case, it just means removing their access to the developer centre. But how do you keep people able to develop their apps on real devices and distribute apps to the App Store? </p><h2> It's a Match! Fastlane Match </h2><p> Fastlane and its match don’t need much introduction in the iOS community. It's a handy tool that ensures everyone has access to all development and distribution certificates, as well as profiles, without having access to the dev centre, as match uses git as storage for encrypted files. It offers a <span class="pre-inline">read-only</span> switch that makes sure nothing gets generated and invalidated accidentally. There are two roles in this approach - one for the admin and one for the developer. The developer uses match to install whatever is needed at the time of development and sets up CI/CD. He only needs access to the match git repository, not the developer centre. That's where the admin comes in - he is the one responsible for setting up all the devices, provisioning profiles, certificates, and the git repository where all the match magic happens.
It's good to have at least two admins in case something goes awry while one of them is out of office. </p><h2> Match setup (admin perspective) </h2><p> The idea behind match is pretty simple: you don't have to deal with the developer centre as much, and you can instead focus on having a private git repository set up with all your certificates and provisioning profiles, all properly encrypted, of course. It supports developer and distribution certificates, and a single repository can even handle multiple accounts. Match expects a specific folder structure in order to automatically find the matching type of certificates and profiles, but it's pretty straightforward: </p><pre><code class="hljs">|-certs
|--development
|--distribution
|-profiles
|--appstore
|--development
</code></pre><p> The certs folder contains a private key and a public certificate, both encrypted. The profiles folder contains encrypted provisioning profiles. Match works with <span class="pre-inline">AES-256-CBC</span>, so to encrypt the provisioning profile you can use <span class="pre-inline">openssl</span>, which comes pre-installed on macOS. </p><h2> Certificate encryption </h2><p> First, you create a certificate in the dev centre. The certificate’s key is then exported from the keychain to a p12 container, and the certificate itself is exported to a cert file. Match expects the key and the certificate to be in separate files, so don't export them both from the keychain to a single p12 container. You need to pick a passphrase that is used to encrypt and later decrypt the certificates and profiles. It's recommended to distribute the passphrase to others in some independent way; storing it in the repository (even though private) would make the encryption useless.
</p><p> To encrypt the key, run: </p><pre><code class="hljs">openssl aes-256-cbc -k "my_secret_password" -in private_key.p12 -out encrypted_key.p12 -a
</code></pre><p> To encrypt the certificate: </p><pre><code class="hljs">openssl aes-256-cbc -k "my_secret_password" -in public_cert.cer -out encrypted_cert.cer -a
</code></pre><p> You can have multiple certificates of the same kind (developer or distribution) under one account. To assign a provisioning profile to its certificate, you need to use a unique identifier generated and linked to the certificate in the developer centre. The following Ruby script lists all the certificates with their generated identifiers. The identifier is used as the name for both the key and the certificate: </p><pre><code class="ruby hljs">require 'spaceship'

Spaceship.login('')
Spaceship.select_team

Spaceship.certificate.all.each do |cert|
  cert_type = Spaceship::Portal::Certificate::CERTIFICATE_TYPE_IDS[cert.type_display_id].to_s.split("::")[-1]
  puts "Cert id: #{cert.id}, name: #{cert.name}, expires: #{cert.expires.strftime("%Y-%m-%d")}, type: #{cert_type}"
end
</code></pre><h2> Provisioning profiles encryption </h2><p> Provisioning profiles are encrypted in the same way as the certificates: </p><pre><code class="hljs">openssl aes-256-cbc -k "my_secret_password" -in profile.mobileprovision -out encrypted_profile.mobileprovision -a
</code></pre><p> Naming is a bit easier: the bundle identifier is prefixed with the type of the provisioning profile, like this: </p><pre><code class="hljs">Development_com.example.app.mobileprovision
AppStore_com.example.app.mobileprovision
</code></pre><h2> Good orphans </h2><p> The typical git branching model doesn't make much sense in this scenario. The git repository is used as storage for provisioning profiles and certificates, rather than for its ability to merge one branch into another. It's not unusual to have access to multiple dev centres, for instance, one for the company account, one for the enterprise account, and multiple accounts for the companies you develop and deploy apps for.
You can use branches for each of those accounts. As those branches have no ambition of merging into each other, you can create orphan branches to keep them clearly separated. Then just use the <span class="pre-inline">git_branch</span> parameter to address them (for both development and distribution): </p><pre><code class="hljs">fastlane match --readonly --git_branch "company" ...
fastlane match --readonly --git_branch "enterprise" ...
fastlane match --readonly --git_branch "banking_company" ...
</code></pre><h2> With great power... </h2><p> As the admin of a team without access to the dev centre, you're going to get a lot of questions on how to install certificates and profiles. It's helpful to set up a README in your codesigning repository that describes which apps are stored under which branches, and even includes <a href="">match documentation</a> and fastlane's <a href="">code signing guides</a>. It's also super cool of you to set up an installation script for each project, and put it under version control of said project. Then when a new member joins the team and asks how to set stuff up, you just point them to run <span class="pre-inline">./</span>. </p><h2> Match usage (developer perspective) </h2><p> As a developer, you don't have access to the dev centre. You only need access to the git repository and a few commands to download profiles and install them on your machine. You also need to have your device registered in the account and assigned to the provisioning profile you'd like to use. But since you don't have access, you need to ask the admins to set it up for you, which is a price paid by the admins for the sake of order and clarity. After that, you're all set and can run the commands to install whatever is necessary. The developer is asked for the passphrase the first time the command is run. You can choose to store it in the keychain if you'd like to skip entering it next time.
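The installation script mentioned above doesn't need to be anything fancy - in essence, it just runs match for every type of profile the project needs. A Ruby sketch of the idea (the branch, bundle id, and profile types are illustrative, not from a real project):

```ruby
# install_certs.rb - sketch of a per-project setup script; all values are illustrative.

# Build the fastlane match commands for a given account branch and app.
def match_commands(branch, app_id, types = %w[development appstore])
  types.map do |type|
    "fastlane match --readonly --git_branch #{branch} " \
    "--app_identifier #{app_id} --type #{type}"
  end
end

# A real script would then execute them, e.g.:
#   match_commands("company", "com.example.app").each { |cmd| system(cmd) or abort(cmd) }
```

A new team member then runs one script instead of learning the match flags by heart.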
</p><h2> Development profiles </h2><p> There are but a few inputs to the match command: <span class="pre-inline">git_branch</span> reflects which account the app is registered in, <span class="pre-inline">app_identifier</span> is the bundle identifier of the app, and the others are also quite self-explanatory. If you're not sure which branch to use, you can go one by one and browse the profiles folder to see if the bundle identifier is listed there; it is unique across all accounts, so it should only be in one branch. </p><p> For instance, to install a development profile with a certificate for the bundle id <span class="pre-inline"></span> you'd run: </p><pre><code class="hljs">fastlane match --readonly --git_branch "company" --git_url "" --app_identifier "" --type development
</code></pre><p> You can also store a wildcard profile in the match repository, even if it does not have any real bundle identifier. In such a case, you can just choose any identifier and use that, for instance <span class="pre-inline">*</span>: </p><pre><code class="hljs">fastlane match --readonly --git_branch "company" --git_url "" --app_identifier "*" --type development
</code></pre><h2> Distribution profiles </h2><p> Distributing the app to the App Store is basically the same as installing developer profiles, just change the <span class="pre-inline">type</span> from <span class="pre-inline">development</span> to <span class="pre-inline">appstore</span>: </p><pre><code class="hljs">fastlane match --readonly --git_branch "company" --git_url "" --app_identifier "" --type appstore
</code></pre><p> Distribution to the App Store is usually scripted in a Fastfile script, which consists of many different actions in addition to match. That is outside the scope of this post and is well explained in other posts on the Internet.
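Still, just to show where match fits into such a script, here is a minimal sketch of a Fastfile lane (the lane name, branch, scheme, and bundle id are all illustrative assumptions, not a definitive setup):

```ruby
# Fastfile (Ruby DSL) - minimal release lane sketch; all names are illustrative
default_platform(:ios)

platform :ios do
  lane :release do
    # Install the distribution certificate and profile from the match repo
    match(type: "appstore", readonly: true,
          git_branch: "company", app_identifier: "com.example.app")
    # Archive and export the ipa (gym)
    build_app(scheme: "MyApp")
    # Upload the build to App Store Connect (deliver)
    upload_to_app_store(skip_screenshots: true, skip_metadata: true)
  end
end
```

The `readonly: true` flag is the important bit: even on CI, nothing in the dev centre can be generated or revoked by accident.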
</p><h2> Conclusion </h2><p> You can clean up your dev centre and avoid certificates/profiles being revoked accidentally by moving the responsibility to a git-versioned repository using match. You can trick match into thinking that a wildcard provisioning profile is just some made-up bundle id in order to store it in git. You can have multiple branches for multiple types of dev centre accounts for an extra level of tidiness. On top of all that, you save your development team a lot of time by distributing the scripts to install whatever they need, and you can make life a bit easier for newcomers as well. </p> #iOS;#code-signing
Building a chatbot, pt. 1: Let's chat<p>A few years ago, a client asked us to create an application that allows its users to create bookings for conference rooms and workspaces. That looks quite easy, right? A few database tables, a thin server, and thick web and mobile applications for a smooth user experience. Almost every company has a solution like that, so it should be fairly easy. But wait, there is a catch! The user interface has to be a chatbot!<br></p><p>That’s a completely different situation. How do we build something like that from scratch? We need to adjust our strategy a bit; we are going to need a thick server, and thin web and mobile applications. To limit the scope of this article, we will focus on the server side.</p><h2>So it begins</h2><p>After a few searches and a fair amount of experiments, we stumbled across <a href=""><em>NLP - Natural language processing</em></a>. These three words describe a key component of every modern chatbot platform. The chatbot takes ordinary sentences and transforms them into a data structure that can be easily processed further, without all the noise that surrounds the core information.</p><p>Let's look at this example:</p> <img alt="Example of analysis" src="/Blog/PublishingImages/Articles/chatbot-1-01.png" data-themekey="#" /> <p>A simple sentence like this is split into multiple items that can be named and searched for. In this case, the phrase <em>“I need place”</em> is identified as a general intent that can be interpreted as a request for booking. Other items add information to this request. These <em>attributes</em> can carry either simple or complex information. In this example, the word <em>“some”</em> gives us the freedom to select any room from a list of available rooms, and the word <em>“meeting”</em> is interpreted as a request for a meeting room. Those parts were the easiest to classify.
Time recognition attributes are more complex.</p><p>This is great for identifying atomic attributes in the sentence, but it's still text. It took us almost a year to put together a comprehensive training data set for our target languages (English and German), but our bot finally understands the vast majority of users' requests. But how do you connect a room number to a specific room entity, a username to a user, or a date description to an actual date?</p><p>For that, we had to build an additional layer. Some of the post-processors would need a whole blog post to describe, but in the end, we managed to get a nice set of domain objects that are used in the bot’s decision-making process. In general, it looks like this:</p> <img alt="Cognitive processor overview" src="/Blog/PublishingImages/Articles/chatbot-1-02.png" data-themekey="#" /> <p>Input sentences are processed by the NLP, and each <em>intent</em> or <em>attribute</em> is then passed to an <em>interpreter</em> that creates one or more objects that are used in the conversation flow.</p><p>The most difficult part - the recognition - was solved (or so we thought). NLP gave us a nice structure with multiple items that can be <em>interpreted</em> as simple data objects.</p><h2>Neurons or no neurons, that’s the question</h2><p>The logic for converting recognized data to actions on the database was quite simple at the beginning. We had a few separated, well-defined use cases that were easy to implement. But the complexity grew quite rapidly. A few <em>'if'</em>s were not sufficient anymore, so we had to look for a more robust solution.</p><p>After a little bit of research, we found that most of the solutions depend heavily on neural networks. That gives these solutions an edge with multiple short sentences and general conversations about weather, sport, local natural wonders, etc. This is a robust solution for general use, when the conversations flow naturally from beginning to end.
Decision-making is hidden in the neural network, which is trained with a sample data set. Neural networks are easy to start with, and adding new features is simple. Let's use it!</p><p>Well, not so fast... In testing, it worked wonders, but as soon as we put it into the hands of test users, we were bombarded with bugs. There was something we forgot: real people. Users were giving us only partial information, and we didn't cover every possible angle. We quickly lost control over the conversation flow, with multiple use cases and various responses from the database.</p><p>This was not ideal. If we were aiming for a small-talk bot, a neural network would be ideal, but we were building a single-purpose bot. Users know exactly why they open a conversation with our bot - they want a reservation. We had to regain control of the conversation flow in the code and get all the information that the app needed from the user. The solution had to be simple, maintainable, testable, and scalable.</p><p>And so we rebuilt the application into a state machine where the bot is in control of the conversation flow. Simply put, it gets a <em>state</em>, in our case a point in the conversation with the user, and a list of information that the user has already given to the bot. Based on this data, it transitions into the next state - the next point in the conversation. </p><p>Users seek to achieve a goal; the bot guides them through the conversation and asks for the information it needs. We didn't forget to add a few simple small-talk phrases as well, but not many users come to our bot for small talk. The core of the state machine deserves its own blog post, but here is a small example:</p><h2>What does the bot say?</h2><p>Let's say we have a user who needs to book a room for an important meeting next Monday from 9:30 to 10:30. In general, to make such a reservation, you will need a time and room specification.
This is how the conversation could unfold:</p><p> <strong style="color:#381457;">User</strong> : <em>Book me room E.3.30 for next Monday from 9:30 to 10:30.</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>I have booked room E.3.30 on 1st March from 09:30 to 10:30 for you.</em> </p><p>Quite simple, right? But the same conversation could also look like this:</p><p> <strong style="color:#381457;">User</strong> : <em>I need a reservation.</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>Do you want me to book you a seat or a meeting room?</em><br> <strong style="color:#381457;">User</strong> : <em>Meeting room.</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>Alright! Would you like to book a specific meeting room, or shall I just book one that is free?</em><br> <strong style="color:#381457;">User</strong> : <em>Specific meeting room.</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>What room would that be?</em><br> <strong style="color:#381457;">User</strong> : <em>R.23.</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>For which time shall I reserve the room?</em><br> <strong style="color:#381457;">User</strong> : <em>Next Monday.</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>Can you specify the time for me, please?</em><br> <strong style="color:#381457;">User</strong> : <em>9:30 to 10:30.</em><br> <strong style="color:#9d9d9d;">Bot</strong> : <em>I have booked room R.23 for 1st March from 09:30 to 10:30 for you.</em> </p><p>In the first example, the user knew exactly what he wanted. In the second conversation, the bot guides the user. These examples are on opposite sides of the conversation spectrum, but we also cover everything in the middle. When the user states the date and time earlier in the conversation, the bot should not ask for it again.
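A flow like this boils down to a small state machine: it keeps track of the slots filled so far and picks the next question. Here is a toy Ruby sketch of the idea (the class, states, and slots are invented for illustration and are far simpler than our production model):

```ruby
# Toy sketch of a conversation state machine (illustrative, heavily simplified)
class BookingConversation
  def initialize
    @slots = { room: nil, time: nil }   # information collected so far
  end

  # Merge whatever the NLP extracted from the last message,
  # then decide which state (i.e., which bot question) comes next.
  def step(attributes)
    @slots.merge!(attributes) { |_key, old, new| new || old }
    if @slots[:room].nil?
      :ask_room          # "What room would that be?"
    elsif @slots[:time].nil?
      :ask_time          # "For which time shall I reserve the room?"
    else
      :confirm_booking   # "I have booked room ... for you."
    end
  end
end
```

Each user message is run through the NLP, the extracted attributes are fed into `step`, and the returned state selects the bot's next reply; whether the user provides everything in one sentence or one piece at a time, the same code handles it.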
The main point is that all of these conversations are processed with the same conversation flow (same code, same tests).</p><p>What is neat about this approach is that we can take a part of the conversation and re-use it for multiple intents. For example, time validation can be reused in any conversation where a time specification is needed.</p><p>There is one part of the example that I've excluded, and that's the access to the reservation system itself. Here we simply save the request and call it a day, but in everyday use, there are some limitations - the reservation may very well be refused. All of these possibilities have to be covered, and users have to be properly informed. Again, how to do that is a topic for a whole new blog post.</p><h2>Conclusion</h2><p>As you can see, there are a number of topics to consider when building a chatbot from scratch: from NLP to decision making, to actions in the reservation system, and finally to the answers.</p><p>Thanks to rigorous testing and a clear framework, we are not blocked by bloated training data sets, and multiple devs can develop independently of each other.</p><p>Currently, our application can process multiple base intents like <em>show</em>, <em>cancel</em>, <em>check</em> or <em>book</em> in English and German. Based on these intents, the bot can give the user up to 300 different conversations with multiple responses. More conversations and variations are still in development and we hope to reach 500 in the near future. Our system is currently used by more than 1400 users and on average 2000 interactions happen every week.</p> #chatbot;#ai;#neural-network
Questions to ask before choosing mobile app technology<p>Embarking on a new project is exciting. So many possibilities, so many choices! But you'd better get them right from the start, otherwise your project might suffer in the long run.</p><p>Choosing a platform to build your mobile app on can be a daunting task. For some apps, a simple responsive web or PWA will suffice, whereas for others only native solutions will do. And there’s of course a range of popular cross-platform or hybrid technologies like Xamarin, React Native, Flutter, or Kotlin Multiplatform, to name a few.</p><p>Evaluating all these alternatives is difficult. There are no universally right or wrong answers, but to make the choice easier, we offer you a list of questions that, when answered, will help you make the right choice.</p><h2>Lifespan</h2><ol><li><strong>What is the planned lifetime period of your app?</strong> Short-lived marketing or event apps have different requirements than apps that need to live happily for years. </li><li><strong>What is more important: Time to market, or sustainable development over time?</strong> Sometimes quick’n’dirty solutions make perfect business sense, sometimes they are poison. </li><li><strong>Will the chosen technology still exist when your app approaches the end of its life?</strong> Obsolete or abandoned technology will severely hinder your ability to support and expand your app. </li><li><strong>Will the technology be supported by its authors? Will it be supported on target platforms?</strong> Open source technology can theoretically be maintained by anybody; however, in practice, the majority of the work often rests on a surprisingly small number of individuals.
</li><li><strong>How will the technology evolve over time?</strong> There is a significant difference between a technology that the authors primarily develop to serve their own needs (even if it’s open-sourced), and a technology that is truly meant as a general-purpose tool. </li><li><strong>Is there a risk of vendor lock-in?</strong> If the technology is currently free to use, will it still be free in the future? What is the cost of moving to an alternative solution? </li></ol><h2>Runtime</h2><ol start="7"><li><strong>What runtime environment does the app need?</strong> The app may be compiled to native code, it may need bridges, wrappers, interpreters, etc. Those can differ wildly in various regards, sometimes by an order of magnitude. </li><li><strong>How is the performance?</strong> Nobody wants sluggish, janky apps.</li><li><strong>Is it stable?</strong> Frequent crashes destroy an app's reputation quickly.</li><li><strong>How big are deployed artifacts? Do they need to be installed?</strong> A complicated or slow installation process lowers the chances that users will even <em>launch</em> your app, while every extra megabyte increases churn. </li></ol><h2>UI</h2><ol start="11"><li><strong>Does the technology use native components, or does it draw its own? Can the user tell the difference?</strong> Non-native components may look similar, but users are surprisingly sensitive to even small inconsistencies. </li><li><strong>Does it respect the look’n’feel of each platform?</strong> You don’t want your app to look unintentionally alien on the target platform. </li><li><strong>Are all platform-specific components available?</strong> Custom UI components often demand a lot of work and if many are not available, your app can get very expensive, very quickly. 
</li><li><strong>How difficult is it to create custom components?</strong> Even if all platform components are available, there will be times when you’ll need to create your own—and it needs to be reasonably effective to do so. </li><li><strong>How difficult is it to create animations?</strong> When done right, animations are a crucial part of the UX, but implementing animations can sometimes be exceedingly difficult. </li><li><strong>How are the components integrated with the target system?</strong> Appearances are not everything—you also need to consider things like gestures, accessibility, support for autocomplete, password managers, etc. </li></ol><h2>Compatibility and interoperability</h2><ol start="17"><li><strong>What level of abstraction does the technology bring?</strong> Some try to completely hide or unify the target platforms, some are very low-level. Both can be good, or bad. </li><li><strong>Which system functionalities does it support directly?</strong> UI is not everything—chances are your app will need to support at least some of the following things: biometry, cryptography, navigation, animations, camera, maps, access to user’s contacts or calendar, OCR, launcher widgets, mobile payment systems, AR/VR, 3D rendering, sensors, various displays, wearables, car, TV, … </li><li><strong>How difficult is it to access native APIs?</strong> Every abstraction is leaky. There will come a time when you’ll need to interact with the underlying platform directly. The difficulty to do so can vary greatly. </li><li><strong>Are cutting-edge platform features available right away?</strong> Especially when using bridges or wrappers, support for the latest features can be delayed. </li><li><strong>What other platforms does the technology support?</strong> The ability to run your app on other platforms can sometimes be very advantageous, just keep in mind that the extra investment required can vary. 
</li></ol><h2>Paradigm and architecture</h2><ol start="22"><li><strong>How steep is the learning curve?</strong> Your team needs to be up-and-running in a reasonable amount of time. </li><li><strong>How rigid is the technology?</strong> Some frameworks try to manage everything—painting by the numbers can be simple and effective, but at the same time, it may limit your ability to implement things for which the framework doesn’t have first-class support. On the other hand, libraries may be more difficult to wire together, but they grant you greater freedom. </li><li><strong>How distant is the given paradigm from the default way of doing things?</strong> Nonstandard or exotic approaches can steepen the learning curve significantly. </li><li><strong>Is the technology modular? On what levels?</strong> Usually, you need the ability to slice the app across various boundaries (e.g., features, layers), and at various levels (e.g., code, compilation, deployment, etc.). </li><li><strong>How does it scale?</strong> Nowadays, even mobile apps can easily grow to hundreds of screens, and the app mustn’t crumble under that weight for both its developers and users. </li></ol><h2>Tooling</h2><ol start="27"><li><strong>Is there an official IDE? What does it cost? Can it be extended with plugins?</strong> Developer productivity is paramount, and the best tools pay for themselves quickly. </li><li><strong>Which build system does the technology use?</strong> There are many of them, but they’re not all equally simple to use, fast, or extendable. </li><li><strong>How is the CI/CD support?</strong> It needs to integrate smoothly with your CI/CD system of choice. </li><li><strong>What about testing, debugging, instrumentation, or profiling?</strong> Your developers and QA people need to be able to quickly dissect your app to identify and fix potential problems. 
</li><li><strong>How mature and effective are the tools?</strong> Your developers should focus on your app, they shouldn’t be fighting the tools. </li><li><strong>Does the technology support hot reload, or dynamic feature modules?</strong> These features usually greatly enhance developer productivity. </li></ol><h2>Ecosystem</h2><ol start="33"><li><strong>Is the technology open source?</strong> There are countless advantages when it is. </li><li><strong>What is the availability, quality, and scope of 3rd party libraries?</strong> The ability to reuse existing, well-tested code can make or break projects. </li><li><strong>Is the official documentation up-to-date, complete, and comprehensive?</strong> While learning about particular technology by trial and error can be fun, it certainly isn’t effective. </li><li><strong>Do best practices exist?</strong> If there are many ways to do a thing, chances are some of them will end up with your developers shooting themselves in the foot. </li><li><strong>How accessible is community help? Are there blog posts, talks, or other learning materials?</strong> Search StackOverflow, or try to find newsletters, YouTube channels, podcasts, or conferences dedicated to the technology in question. </li><li><strong>Are consultants available if needed?</strong> Some of them are even helpful.</li><li><strong>What is the overall community sentiment towards the technology?</strong> Dedicated fans are a good sign, but be careful not to fall for marketing tricks. </li><li><strong>Do other similar organizations have experience with the technology?</strong> Learn from the successes and mistakes of others. </li></ol><h2>Human resources</h2><ol start="41"><li><strong>What primary programming language does the technology rely on?</strong> It isn’t enough that developers are able to <em>edit</em> source files to make the machine do something—they need to be able to write idiomatic and expressive code that can be read by human beings. 
</li><li><strong>Do you already have suitable developers?</strong> Why change a whole team, when you might already have a stable, well-coordinated one? </li><li><strong>Will mobile developers be effective using the language?</strong> There could be great friction when switching developers from one language to another, especially when the new language is significantly different (e.g., statically vs. dynamically typed, compiled vs. interpreted, etc.). </li><li><strong>Will non-mobile developers be effective on mobile platforms?</strong> For example, some technologies try to port web frameworks to mobile platforms, so it might look like a good idea to assign web developers to the project—but the reality is not that simple. </li><li><strong>What is the current market situation? What is the market profile of available developers?</strong> You usually need a suitable mix of junior and senior developers, but they might not be easy to find, or their cost might not be economically feasible. </li></ol><h2>Existing codebase</h2><ol start="46"><li><strong>Do you already have some existing code?</strong> Rewriting from scratch is tempting, but it isn’t always a good idea. </li><li><strong>What have you invested in it so far?</strong> It may be very cheap to throw away, or it may represent a major asset of your organization. </li><li><strong>What is its value to your organization?</strong> It may earn or save you a ton of money, or it may be a giant liability. </li><li><strong>How big is the technical debt?</strong> The value of unmaintainable code is not great, to put it mildly. </li><li><strong>Can it be maintained and evolved?</strong> The software must be, well, soft. If yours is rigid, again, its value is not that great. </li><li><strong>Can it be transformed piece-by-piece?</strong> Some technologies allow gradual migration, some are all-or-nothing propositions. </li></ol><h2>Final questions</h2><p>Each app has different needs, and there will always be tradeoffs. 
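</p>

<p>One pragmatic way to turn the answers into a comparison is a simple weighted scoring matrix. The sketch below is purely illustrative—the criteria, weights, and scores are made up—but it shows the principle:</p>

```kotlin
// Hypothetical decision matrix: score each candidate technology
// against weighted criteria and compare the totals.
data class Criterion(val name: String, val weight: Int)

// Total = sum of (criterion weight × score); missing scores count as 0.
fun weightedScore(criteria: List<Criterion>, scores: Map<String, Int>): Int =
    criteria.sumOf { it.weight * (scores[it.name] ?: 0) }

fun main() {
    val criteria = listOf(
        Criterion("lifespan", 3),
        Criterion("runtime", 2),
        Criterion("team fit", 3),
    )
    // Scores on a 1–5 scale (illustrative values only).
    val native = mapOf("lifespan" to 5, "runtime" to 5, "team fit" to 2)
    val crossPlatform = mapOf("lifespan" to 3, "runtime" to 4, "team fit" to 5)

    println(weightedScore(criteria, native))        // 31
    println(weightedScore(criteria, crossPlatform)) // 32
}
```

<p>The numbers themselves matter less than the discussion they force—making the weights explicit is what surfaces the real priorities.</p>

<p>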
In the end, you’ll need to prioritize the various viewpoints implied by the aforementioned questions.</p><p>Which qualities are most important for your project? Which properties bring you opportunities? Which increase risk?</p><p>When you put the alternatives into the right perspective, you certainly have a much better chance at success. May your apps live long and prosper!</p>#project-management;#android;#iOS
Scrum smells, pt. 3: Panic-driven bug management<p>Bugs create a special atmosphere. They often cause a lot of unrest or outright panic. But does it have to be that way?</p><p>Nearly every developer out there has come across the following scenario: The development team is working on the sprint backlog when suddenly the users report an incident. The marketing manager comes in and puts pressure on the development team or their product owner to urgently fix the bug. The team feels guilty, so some of the developers stop working on whatever they've been doing and focus on fixing the bug. They eventually succeed, and now the testers shift their focus as well to verify the fix as soon as possible, so the developers can release a hotfix. The hotfix is deployed, the sprint passes by, and the originally planned sprint backlog is only half-done. Everyone is stressed out.</p><p>A similar situation is often created by a product owner: He finds a defect in functionality created two sprints ago and demands an immediate fix.</p><p>Is this all really necessary? Sure, some issues have a great impact on the product or service, and then this approach might be justifiable, but quite often this kind of urgent defect whacking is driven more by emotion than by reason. So how can bugs be treated systematically?</p><h2>What are bugs and bug fixes?</h2><p>A defect, incident, or simply a “bug” is effectively any deviation of the existing product from its backlog. Any behavior that differs from the one agreed upon between the dev team and a product owner can be called a bug. Bugs aren’t only defects in the conventional meaning (e.g., crashes or computational errors); technically correct behavior that conflicts with a boundary set by a user story can also be considered a defect.</p><p>Some bugs are related to the product increment being implemented in the current sprint. 
Other bugs are found retrospectively: They are related to the user stories developed in past sprints. These fall into two categories:</p><ol><li>Regressions: When a subsequent development broke a formerly functional part of the code. </li><li>Overlooked bugs: They were always there, but no one had noticed.</li></ol><p>Conversely, a bug fix is something that adds value to the current product by lowering the above-mentioned deviation. It requires a certain amount of effort and it raises the value of the present product. At the end of the day, a bug is just another unit of work, and we can evaluate its cost/benefit ratio. It is the same as any other backlog item.</p><h2>A bit of psychology</h2><p>Scrum teams and stakeholders tend to approach both defect categories differently. They also treat them differently than the “regular” backlog items.</p><p>In my experience, there are two important psychological factors influencing the irrational treatment of defects.</p><p>First of all, there's often a feeling of guilt when a developer is confronted with a bug. The natural response of most people is to try to fix the error as soon as possible so that they feel they are doing a good job. Developers naturally want to get rid of such debts.</p><p>Another factor is how people perceive gains and losses. People are evolutionarily averse to losses because the ability to obtain and preserve resources has always been key to survival. There have been studies concluding that on average, people perceive a loss four times as intensely compared to a gain of the same objective value: If you lose 5 dollars, it is four times as painful compared to the gratification of finding 5 dollars lying on the ground. You need to find 20 dollars to have a comparable intensity of feeling as when you lose the mentioned 5. The bug/defect/incident is perceived as a loss for the team's product, especially if it's a regression. 
A small bug can therefore be perceived as much more important than a newly delivered valuable feature.</p><p>Don't get me wrong—I am not saying that bugs are not worth fixing or that they don't require any attention. That is obviously not true. One of the key principles of scrum is to deliver a functional, <em>potentially releasable</em> product increment in every sprint. That means that a high development quality is fundamental and teams should always aim at developing a debt-free product. Nonetheless, bugs will always have to be dealt with.</p><h2>Bugs caused by newly added code</h2><p>When working on a sprint backlog, the team needs to set up a system to validate the increment they’ve just developed. The goal is to make sure that at the end of the sprint, a feature is free of debt, and can be potentially released. Our experience shows that during a sprint backlog development, the team should focus on removing any bugs related to the newly developed features as quickly as possible in order to keep the feedback/verification loop as short as possible. This approach maximizes the probability that a newly developed user story is done by the end of the sprint and that it is potentially releasable.</p><p>Sometimes there are just too many bugs and it becomes clear that not everything planned in the sprint backlog can be realistically achieved. The daily scrum is the opportunity to point this out. The development team and the product owner together can then concentrate their efforts on a smaller amount of in-progress user stories (and related bugs). It is always better to make one user story done by the end of the sprint than to have ten stories halfway finished. Of course all bugs should be recorded transparently in the backlog.</p><p>Remember, a user story is an explanation of the user's need that the product tackles, together with a general boundary within which the developed solution must lie. 
A common pitfall is that the product owner decides on the exact way of developing a story (e.g., defines the exact UI or technical workflow) and insists on it, even though it is just her personal preference. This approach not only reduces the development team's options to come up with the most effective solution but also inevitably increases the probability of a deviation, thus increasing the number of bugs as well.</p><h2>Regressions and bugs related to past development</h2><p>I think it's important to treat bugs (or rather their fixes) introduced before the current sprint as regular backlog items and prioritize them accordingly. Whenever an incident or regression is discovered, it must go into the backlog and decisions need to be made: What will be the benefit of that particular bug fix compared to other backlog items we can work on? Has the bug been introduced just now, or have the users already lived with it for some time without us knowing? Do we know the root cause, and are we able to estimate the cost needed to fix it? If not, how much effort is it worth putting into that particular bug fix so that the cost/benefit ratio is still on par with other items at the top of the backlog?</p><p>By following this approach, other backlog items will often be prioritized over the bug fix, which is perfectly fine. Or the impact of the bug might be so negligible that it's not worth keeping it in the backlog at all. One of the main scrum principles is to always invest the team's capacity in work that has the best return on invested time and costs. When the complexity of a fix is unknown, we have had good experience with putting a cap on the invested capacity. For instance, we would say that at the present moment, this particular bug fix is worth an investment of 5 story points to us. If the developers managed to fix the issue within that budget, great. If not, the fix was abandoned and re-prioritized with this new knowledge. 
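</p>

<p>The cost/benefit questions above can be made concrete. Here is a toy sketch—the scoring model is entirely made up—of ordering a mixed backlog by value per story point, with a cap on what a bug fix of unknown complexity may consume:</p>

```kotlin
// Hypothetical backlog items: features and bug fixes alike carry
// an estimated value and an effort estimate in story points.
data class BacklogItem(val title: String, val value: Int, val effortPoints: Int)

// Order a mixed backlog by value per story point, dropping items
// whose estimated effort exceeds the agreed cap.
fun prioritize(backlog: List<BacklogItem>, effortCap: Int): List<BacklogItem> =
    backlog
        .filter { it.effortPoints <= effortCap }
        .sortedByDescending { it.value.toDouble() / it.effortPoints }

fun main() {
    val backlog = listOf(
        BacklogItem("New checkout flow", value = 8, effortPoints = 5),
        BacklogItem("Fix rare layout glitch", value = 2, effortPoints = 8),
        BacklogItem("Fix crash on startup", value = 9, effortPoints = 3),
    )
    // The crash fix comes first; the low-value glitch fix over the cap is dropped.
    prioritize(backlog, effortCap = 5).forEach { println(it.title) }
}
```

<p>The point is not the formula, but the habit: every fix competes for capacity on the same terms as any feature.</p>

<p>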
By doing this, we mitigated situations where developers dwell on a single bug for weeks, unable to fix it.</p><p>I think keeping a separate bug-log greatly hinders transparency, and it’s a sign that a product owner gives up on making decisions that really matter and refuses to face reality.</p><h2>Final words</h2><p>I believe all backlog items should be approached equally. A bug fix brings value in a similar way as a new functionality does. By keeping bug fixes and new features in one common backlog and constantly questioning their cost/benefit ratio, we can keep the team moving forward and ensure that critical bugs don't fall through the cracks.</p>#scrum;#agile;#project-management;#release-management
Jetpack Compose: What you need to know, pt. 2<p>This is the second and final part of the Jetpack Compose series that combines curious excitement with a healthy dose of cautious skepticism. Let’s go!</p><h2>Ecosystem</h2><p><strong>Official documentation doesn’t cover enough.</strong></p><p>That’s understandable in this phase of development, but it absolutely needs to be significantly expanded before Compose hits 1.0.</p><p>On top of that, Google is once again getting into the bad habits of 1) mistaking developer marketing for advocacy and 2) scattering useful bits of information across <a href="">official docs</a>, KDoc, semi-official <a href="">blogs</a>, <a href="">code samples</a>, and other sources of unknown relevance. Although these can be useful, they’re difficult to find and are not usually kept up-to-date. </p><p><strong>Interoperability is good.</strong></p><p>We can use <a href="">legacy Views</a> in our Compose hierarchy and composables as <a href="">parts</a> of View-based UIs. It works, so we can migrate our UIs gradually. This feature is also important in the long term, as I wouldn’t expect a Compose version of WebView or MapView written from scratch any time soon, if ever.</p><p>Compose also plays nicely with other libraries—it integrates well with Jetpack <a href="">ViewModel</a>, <a href="">Navigation</a>, or <a href="">reactive streams</a> (LiveData, RxJava, or Kotlin Flow—<a href="">StateFlow</a> is especially well suited for the role of a stream of states coming from the view model to the root composable). Popular 3rd party libraries such as <a href="">Koin</a> also have support for Compose.</p><p>Compose also gives us additional options. Its simplicity allows for much. 
For example, it is very well possible to completely get rid of fragments and/or Jetpack Navigation (although in this case, I think one vital piece of the puzzle is still missing—our DI frameworks need the ability to create scopes tied to composable functions), but of course you don’t have to. Choose what’s best for your app.</p><p>All in all, the future of the Compose ecosystem certainly looks bright.</p><p><strong>Tooling is a work in progress, but the fundamentals are already done.</strong></p><p>Compose alphas basically require <a href="">canary builds of Android studio</a>, which are expected to be a little bit unstable and buggy. Nevertheless, specifically for Compose, the Android tooling team has already added custom syntax and error highlighting for composable functions, a bunch of live templates, editor intentions, inspections, file templates, and even color previews in the gutter (Compose has its own color type).</p><p>Compose also supports <a href="">layout previews</a> in the IDE, but these are more cumbersome than their XML counterparts. A true hot reload doesn’t seem to be possible at the moment.</p><p>The IDE also sometimes struggles when a larger file with lots of deeply nested composable functions is opened in the editor. That said, the tooling won’t hinder your progress in a significant way.</p><p><strong>UI testing is perhaps more complicated than it was with the legacy toolkit.</strong></p><p>In Compose, there are no objects with properties in the traditional sense, so to facilitate UI tests, Compose (mis)uses its accessibility framework to expose information to the tests. 
</p><p>To be honest, it all feels a little bit hacky, but at least we have support for running the tests on JUnit 4 platform (with the help of a custom rule), <a href="">Espresso-like APIs</a> for selecting nodes and asserting things on them, and a helper function to print the UI tree to the console.</p><p>The situation is thus fairly similar to the legacy toolkit, and so is my advice: Mind the <a href="">test pyramid</a>, don’t rely too much on UI tests, and structure your app in such a way that the majority of the code can be tested by simple unit tests executed on the JVM.</p><h2>Performance and stability</h2><p><strong>Build speeds can be surprising.</strong></p><p>In a good way! One would think that adding an additional compiler to the build pipeline would slow things down (and on its own, it would), but Compose replaces the legacy XML layout system, which has its own performance penalties (parsing XMLs, compiling them as resources, etc.). </p><p>It turns out, even now when Compose is still in a very early stage of development, the build time of a project written with Compose is at least comparable to the legacy UI toolkit version—and it might be even faster, as measured <a href="">here</a>. </p><p><strong>Runtime performance is a mixed bag.</strong></p><p>UIs made with Compose can be laggy sometimes, but this is totally expected since we are still in alpha. Further optimizations are promised down the line, and because Compose doesn’t come with the burden of <a href="">tens of thousands of LOC</a> full of compatibility hacks and workarounds in each component, I hope someday Compose will actually be faster than the legacy toolkit.</p><p><strong>It crashes (it’s an alpha, I know).</strong></p><p>In my experience, Compose crashes both at compile time (the compiler plugin) and at runtime (usually because of a corruption of Compose’s internal data structure called “slot table”, especially when animations are involved). 
When it does crash, it leaves behind a very, very long stack trace that is full of synthetic methods, and which is usually also totally unhelpful. </p><p>We definitely need special debugging facilities for Compose (similar to what coroutines have), and yes, I know, the majority of these bugs will be ironed out before 1.0. The thing is, Compose simply must be reliable and trustworthy at runtime because we are not used to hard crashes from our UI toolkit—for many teams, that would be an adoption blocker. </p><h2>Expectations</h2><p><strong>Compose is meant to be the primary UI toolkit on Android.</strong></p><p>Several Googlers confirmed that if nothing catastrophic happens, this is the plan. Of course, it will take years, and as always, it won’t be smooth sailing all the way, but Google and JetBrains are investing heavily in Compose.</p><p><strong>Compose is no silver bullet.</strong></p><p>Yes, Compose in many ways simplifies UI implementation and alleviates a significant amount of painful points of the legacy UI toolkit.</p><p>At the same time, it’s still possible to repeat some horrible old mistakes regarding Android’s lifecycle (after all, your root composable must still live in some activity, fragment, or view), make a huge untestable and unmaintainable mess eerily similar to the situation when the whole application is written in one single Activity, or even invent completely new and deadly mistakes.</p><p>Compose is <em>not</em> an architecture. Compose is just a UI framework and as such it must be isolated behind strict borders. </p><p><strong>Best practices need to emerge.</strong></p><p>Compose is architecture-agnostic. It is well suited to clean architecture with MVVM, but that certainly isn’t the only possible approach, as it’s evident from the <a href="">official samples repo</a>. 
However, in the past, certain ideas proved themselves better than others, and we should think very carefully about those lessons and our current choices.</p><p>Just because these are official samples by Google (or by anyone else for that matter), that doesn’t mean you should copy them blindly. We are all new to this thing and as a community, we need to explore the possibilities before we arrive at a set of reasonable, reliable, and tried-and-proven best practices.</p><p>Just because we can do something doesn’t mean we should.</p><p><strong>There are a lot of open questions.</strong></p><p>The aforementioned official samples showcase a variety of approaches, but in my book, some are a little bit arguable or plainly wrong. For example, ask yourself: </p><p>How should the state be transformed while passed through the tree, if ever? How should internal and external states be handled? How smart should the composable functions be? Should a view model be available to any composable function directly? And what about repositories? Should composable functions have their own DI mechanism? Should composable functions know about navigation? And data formatting, or localization? Should they handle the process death themselves? The list goes on.</p><p><strong>Should you use it in production?</strong></p><p>Well, it entirely depends on your project. There are several important factors to consider:</p><ul><li>Being still in alpha, the APIs will change, sometimes significantly. Can you afford to rewrite big parts of your UI, perhaps several times? </li><li>There are features missing. This situation will get better over time, but what you need now matters the most. </li><li>Runtime stability might be an issue. You can work around some things, but there’s no denying that Compose right now is less stable than the legacy toolkit. </li><li>What is the lifespan of your application? 
If you’re starting an app from scratch next week, with plans to release v1.0 in 2022 and support it for 5 years, then Compose might be a smart bet. Another good use might be for proof of concept apps or prototypes. But should you rewrite all your existing apps in Compose right now? Probably not. </li></ul><p>As always with new technology, all these questions lead us to these: Are you an early adopter? Can you afford to be?</p><h2>Under the hood</h2><p><strong>Compose is very cutting edge (and in certain aspects quite similar to how coroutines work).</strong></p><p>In an ideal world, no matter how deeply composable functions were nested and how complex they were, we could call them all on each and every frame (that’s 16 milliseconds on 60 FPS displays, but faster displays are becoming more prevalent). However, hardware limitations of real world devices make that infeasible, so Compose has to resort to some very intricate optimizations. At the same time, Compose needs to maintain an illusion of simple nested function calls for us developers.</p><p>Together, these two requirements result in a technical solution that’s as radical as it’s powerful—changing language semantics with a custom Kotlin compiler plugin.</p><p><strong>Compose compiler and runtime are actually very interesting, general-purpose tools.</strong></p><p>Kotlin functions annotated with @Composable behave very differently to normal ones (as it’s the case with suspending functions). 
This is possible thanks to the <a href="">IR code</a> being generated for them by the compiler (Compose uses the Kotlin IR compiler backend, which itself is in alpha).</p><p>Compose compiler tracks input argument changes, inner states, and other stuff in an internal data structure called <em>slot table</em>, with the intention to execute only the necessary composable functions when the need arises (in fact, composable functions can be executed in any order, in parallel, or even not at all).</p><p>As it turns out, there are other use cases when this is very useful—composing and rendering UI trees is just one of them. Compose compiler and runtime can be used for <a href="">any programming task</a> where working efficiently with tree data structures is important.</p><p><strong>Compose is the first big sneak peek at Kotlin’s exciting future regarding compiler plugins.</strong></p><p>Kotlin compiler plugins are still very experimental, with the API being unstable and mostly undocumented (if you’re interested in the details, read <a href="">this blog series</a> before it becomes obsolete), but eventually the technology will mature—and when it does, something very interesting will happen: Kotlin will become a language with more or less stable, fixed <em>syntax</em>, and vastly changeable, explicitly pluggable <em>behavior</em>.</p><p>Just look at what we have at our disposal even now, when the technology is in its infancy: There is Compose, of course (with a <a href="">desktop port</a> in the works), a plugin to <a href="">make classes open</a> to play nice with certain frameworks or tests, <a href="">Parcelable generator</a> for Android, or <a href="">exhaustive when for statements</a>, with <a href="">more plugins</a> coming in the future.</p><p>Last but not least, I think that the possibility to modify the language with external, independent plugins will lower the pressure on language designers, reducing the risk of bloating the language—when part of the community 
demands some controversial feature, why not test-drive it in the form of a compiler plugin first?</p><h2>Final words</h2><p>Well, there you have it—I hope this series helped you to create an image of Compose in your head that is a little bit sharper than the one you had before. Compose is certainly going to be an exciting ride!</p>#android;#jetpack;#compose;#ui
Jetpack Compose: What you need to know, pt. 1<p><a href="">Jetpack Compose</a> is coming sometime this year. Although it is still under heavy development, given its significance, I think now is the right time to look at what it brings to the table.</p><p>This isn’t a technical tutorial or introduction to Compose (there are many of these floating around, but be careful, as many of them are already out of date), but rather a collection of more or less random points, notes, and findings. Let’s find out if the hype is justified!</p><h2>Executive summary, but for developers</h2><p><strong>Compose is going to be one of the biggest changes Android development has ever seen.</strong></p><p>Yes, perhaps even bigger than reactive programming, Kotlin, or coroutines. UI is a crucial part of any application, and a UI toolkit built on the mindset of the 2010s instead of the 1990s is indeed a very welcome upgrade.</p><p>Also, because it relies on Kotlin-exclusive features, Compose is another nail in Java’s coffin on Android.</p><p><strong>Making UIs is fun again!</strong></p><p>This is Compose’s equivalent of <span class="pre-inline">RecyclerView</span> <em>with different item types</em>:</p><pre><code class="kotlin hljs">LazyColumn {
    items(rows) { row ->
        when (row) {
            is Title -> TitleRow(row.title)
            is Item -> ItemRow(row.text)
        }
    }
}</code></pre><p>Of course, everything isn’t that simple, but Compose really excels at its main goal—creating sophisticated and highly reusable custom components and complex layouts in a simple, effective, and safe manner.</p><p><strong>The mental model is radically different from what we are used to in Android.</strong></p><p>Unidirectional UI toolkits were all the rage with web folks some time ago, and now they’ve finally arrived on mobile platforms.</p><p>The good news is that because we are late to the party, the paradigm has matured, and perhaps Compose won’t repeat at least some of the mistakes that caught up 
with early implementations on other platforms. The bad news is that the paradigm requires a significant mindset shift (say on a scale of reactive programming)—but it’s for the better, I promise.</p><p><strong>Compose has a huge multiplatform potential.</strong></p><p>Compose comprises <a href="">several artifacts</a>, and only some of them are Android-specific. JetBrains already work on <a href="">desktop port</a>, and covering other platforms is certainly not impossible.</p><p>Building on a combination of existing platform-specific UI toolkits and Kotlin Multiplatform features such as <span class="pre-inline">expect/actual</span> declarations, one can imagine a distant future where a single UI toolkit provides the holy grail of unified implementation, native performance, and platform-specific look’n’feel.</p><h2>Creating UI</h2><p><strong>There are no XML layouts, no inflaters and no objects representing the UI components.</strong></p><p>There are no setters to mutate the current UI state because there are no objects representing the UI views (<span class="pre-inline">@Composable</span> function calls only <em>look</em> like constructor calls, but don’t let that fool you), which means there cannot even be any internal UI state (well, the last point isn’t entirely true, but we’ll get to that later). Then you must think about states and events traveling through your UI tree in various directions and whatnot. </p><p>If you’ve never experienced a unidirectional toolkit, it will feel alien, strange, and maybe even ineffective, but the benefits are worth it.</p><p><strong>String, font, and drawable resources are staying.</strong></p><p>Compose doesn’t want to get rid of these and works with them just fine. However, only bitmap and vector drawables make sense with Compose. 
Other types such as layer list drawables, state list drawables, or shape drawables are superseded by more elegant solutions.</p><p><a href="">Colors</a> and <a href="">dimensions</a> should be defined entirely in Kotlin code if possible, but traditional resources still may be used if needed.</p><p><strong>There are no resource qualifiers.</strong></p><p>Compose has the power of Kotlin at its disposal. If you need to provide alternative values depending on the device configuration (or any other factor), simply add a condition to your composable function—it’s an explicit and unambiguous way to specify the value you want.</p><p>And of course remember to keep your code DRY—if you find yourself repeating the same bit of logic in many places, refactor.</p><p><strong>There are no themes and styles (sort of).</strong></p><p>Compose contains basic components that expose a number of parameters to change their look and behavior. Because everything is written in Kotlin, these parameters are rich, and most importantly, type-safe.</p><p>If you need to style a bunch of components in the same way, you simply <a href="">wrap</a> the original composable function call with your own composable, setting the parameters you need to change (or exposing new ones), and use this in your code.</p><p>Simple, efficient (because there is virtually no penalty for nested calls), and without hidden surprises.</p><p><strong>Compose comes with Material Design implementation out of the box.</strong></p><p>Although there are no themes or styles as such, there is a way to create and use application-wide themes.</p><p>Compose comes with <a href="">Material Design implementation</a>. Just wrap your root composable with <a href="">MaterialTheme</a>, customize colors, shapes, or typography to fit your brand, and you’re good to go. 
You can have different <span class="pre-inline">MaterialTheme</span> wrappers for different parts of your UI, effectively replacing theme overlays from the legacy system.</p><p>Often this is all you’ll ever need, but if your design system is more sophisticated or simply won’t fit the predefined Material Design attributes, you can <a href="">implement your own</a> from scratch. However, this is quite difficult and requires advanced knowledge of Compose to get it right.</p><p>See <a href="">this blog series</a> for valuable insights on custom design systems in Compose and <a href="">this post</a> for a comparison of different theming approaches.</p><p><strong>We can’t completely get rid of the legacy theme system (yet).</strong></p><p>Compose theming reaches only the parts of the UI that are managed by Compose. We might still need to set a legacy theme for our activities (to change the window’s background, status bar, and navigation bar colors, etc.), or to style View-based components that don't have Compose counterparts.</p><p><strong>Don’t expect component or feature parity with legacy View-based components or Material Design specs any time soon.</strong></p><p>It’s the old story all over again: Writing a new UI toolkit from scratch means that there is going to be a long period in which a number of components (or at least their features) won’t be officially available.</p><p>For example, Compose’s <a href="">TextField</a> doesn’t have the same features (and API) that <a href="">TextInputLayout</a> has, and neither of these implementations is 100% aligned with the <a href="">Material Design spec</a>.</p><p>This situation may be slightly annoying, but at least with Compose, it’s significantly easier to write custom components yourself.</p><p><strong>Finally, an animation system so simple that you’ll actually use it.</strong></p><p>Animating many things is as simple as wrapping the respective value in a <a href="">function</a> call, and for more complex animations, 
Compose superbly leverages the power of Kotlin.</p><p>Done right, animations are a great way to enhance user experience. With Compose animation APIs, their implementation is at last effective and fun.</p><h2>Internals</h2><p><strong>Composable functions are like a new language feature.</strong></p><p>Technically, <span class="pre-inline">@Composable</span> is an annotation, but you need to think about it more like a keyword (<span class="pre-inline">suspend</span> is a good analogy, more on that later). This “soft keyword” radically changes the generated code, and you need to have at least a basic idea of <a href="">what goes on under the hood</a>, otherwise it’s entirely possible to shoot yourself in the foot even with innocent-looking composable functions.</p><p><strong>Knowledge of the internals is important for creating performant UIs.</strong></p><p>The Compose compiler does a lot in this regard (like positional memoization and fine-grained recomposition), but there are situations when the developer has to provide optimization clues, or declare and then actually honor contracts that the compiler cannot infer on its own (such as marking data classes as truly immutable).</p><p>However, I expect the compiler to become smarter in the future, alleviating the need for many of these constructs.</p><h2>States</h2><p><strong>Compose UIs are declarative, but not truly stateless.</strong></p><p>UIs in Compose are declared by constructing deeply nested trees of composable functions where <a href="">state flows down and events flow up</a>. At the root of the tree, there is a comprehensive, “master” state coming from some external source (the best candidate for this task is the good old view model). When the state changes, parts of the UI affected by the change are re-rendered automatically.</p><p>In theory, we want the UI to be <em>perfectly</em> stateless. The root state should be completely externalized and should contain <em>everything</em> that must be set on the screen. 
That would mean not just obvious things like text field strings, checkbox states, and so on, but also, for example, <em>all</em> styling attributes for all the views, internal animation states including clock, current animated values, etc.</p><p>In practice, this would be too cumbersome (argument lists would grow unacceptably large and “interesting” arguments like user inputs would get mixed up with purely technical ones like animation states), so besides explicit state that is passed via composable function arguments, Compose has several other ways to provide data down the component tree.</p><p><strong>Composable functions can have their own internal state.</strong></p><p>Yes, a composable function can have state encapsulated in it that survives between its invocations. This is a pragmatic decision that simplifies function signatures and enables some optimizations, and it is especially handy for animations and other things that don’t need to be changed and/or observed from outside.</p><p><strong>Ambients are like service locators for data passed through the UI tree.</strong></p><p> An <a href="">Ambient</a> holds a kind of global variable defined in a tree node somewhere up in the hierarchy, statically accessible to nodes below it. If this rings an alarm bell in your head, you’re right—statically accessed global variables create invisible coupling and other problems.</p><p>However, this is a trade-off that is often worth it. Ambients are best suited for values that we need to set or change explicitly but don’t want to explicitly <em>pass</em> through the tree. Theme attributes and properties are a prime example of such things. </p><p><strong>State management is now more important than ever.</strong></p><p>So we have (at least) 3 ways to store and manipulate state in Compose, and they can even be combined along the way. The question of which method to use for which part of the state becomes essential. 
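Stripped of any Compose APIs, the core loop that all three methods feed into, state flowing down and events traveling up through a pure reducer, can be sketched in plain Kotlin (all names here are illustrative, not real Compose APIs):

```kotlin
// Master state living at the root of the tree (e.g. in a view model).
data class CounterState(val count: Int = 0, val label: String = "0")

// Events travel up from the UI...
sealed class CounterEvent {
    object Increment : CounterEvent()
    object Reset : CounterEvent()
}

// ...and a pure reducer produces the next state, which then flows back down
// the tree, re-rendering only the parts that read the changed values.
fun reduce(state: CounterState, event: CounterEvent): CounterState =
    when (event) {
        CounterEvent.Increment -> CounterState(state.count + 1, (state.count + 1).toString())
        CounterEvent.Reset -> CounterState()
    }
```

Deciding which fields belong in such a master state, and which can stay internal to a composable or live in an Ambient, is exactly the state-management question at hand.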
Sometimes, the answer can be difficult, and choosing the wrong one can lead to all kinds of messy problems.</p><p>Also, especially for larger screens, both the structure and the content of the state object are crucial.</p><h2>Until next time</h2><p>Well, that concludes part 1. In the second and final part of this series, we’ll look at the ecosystem, performance, stability, and even the magic that makes Compose possible. Take care and stay tuned!</p>#android;#jetpack;#compose;#ui
Truce with fragments<p>Once upon a time, one could write an entire Android app in a single humongous activity. Google provided us with a bunch of fairly low-level building blocks and basically no guidance on how to use them together in a maintainable and scalable way. (“So you want to build a house? Here is a shovel, a saxophone and a kitten, we thought they might be useful, but do absolutely what you want with them, go on! And please mind the lifecycles, thank you.”)</p><p>Fortunately, things have changed a bit and now the official docs mention stuff like view models, lifecycle observers, navigation graphs, repositories or single-activity applications. And there are even official <em>opinions</em> on how to combine them!</p><p>Nevertheless, activities and fragments, the remnants of those dark times, are apparently here to stay. When you look at the API surfaces of these... <em>things</em>, one question surely comes to mind: How do I work with that and stay sane at the same time?</p><h2>A match made in a place other than heaven</h2><p>Before we get to that, a little disclaimer is needed: This article is very opinionated. Your mileage and needs may vary. The following recommendations are best suited to “ordinary” form-based apps containing quite a lot of screens and user flows, with modest performance requirements. Games, specialized single-purpose apps, apps with dynamic plug-ins, high-performance, or UI-less apps might benefit from completely different approaches.</p><p>With that out of the way, the first question we should ask is: Do we even <em>need</em> activities and fragments?</p><p>With activities being the entry points to the app's UI, the answer is obviously yes. There’s no way around the fact that we need at least one activity with <span class="pre-inline">android.intent.action.MAIN</span> and <span class="pre-inline">android.intent.category.LAUNCHER</span> in its intent filter. But do we need <em>more</em> than one? 
The answer to that is a resounding no and we’ll see why in a future post.</p><p>Fragments are a different matter. First introduced in Android 3.0, when tablets were a thing, they were hastily put together as a kind of reusable mini-activity so that larger tablet layouts could display several of them simultaneously (think master-detail flows and such). Unfortunately, they inherited many design flaws of activities and even added some very interesting new ones. To say they are controversial would be an understatement.</p><p>On top of that, we don’t really need them in the way that we need that one launcher activity. Bigger, reusable pieces of UI can be served using good old views and there are 3rd party frameworks that do just that (and even several others that achieve the same thing in other ways, like using RecyclerViews to compose the UI from different “cells” etc.); and let’s not forget that Jetpack Compose is coming...<br></p><p>However! Fragments are still developed, supported, documented, advocated for, integrated with other highly useful libraries (like Jetpack’s Navigation component) and in some places, they’re quite irreplaceable (yes, this is more of a design flaw of such APIs, but we need to work with what we have). Love them or hate them, they are the standard, well-known official solution, so let’s just be pragmatic: We’ll give them a hand, but won’t let them take the whole arm.</p><p>And so, we’ve arrived at the second question: One activity and possibly many fragments—but what can we use them <em>for</em>? And if we can, <em>should</em> we?</p><h2>Less is more</h2><p>This is where the opinions begin, so brace yourself.</p><p>Architecture-wise, what is an activity (and to an extent, a fragment, since they share many similarities)? My answer is this: A textbook violation of the single responsibility principle. 
</p><p>The main problem with an activity/fragment is that it is:</p><ol><li>a giant callback for dozens of unrelated subsystems</li><li>which is created and destroyed outside of our control</li><li>and we cannot safely pass around or store references to it.</li></ol><p>Typical consequences of these issues (when not handled in a sensible way) include activity subclasses several thousand lines long, full of untestable spaghetti (1), UI glitches and crashes (2) and memory leaks (3).</p><p>Open any activity in your current project, type <span class="pre-inline">this.</span> and marvel at the endless list of methods. The humble activity handles UI lifecycle and components, view models, data and cache directories, action bars and menus, assets and resources, theming, permissions, windows and picture-in-picture, navigation and transitions, IPC, databases and much, much more.</p><p>How Android got to this point isn’t important right now, but your code doesn’t have to suffer the same bloated fate. We need to chip away at the responsibilities and one way to do that is this: Use fragments exclusively for UI duties and that single activity for system call(back)s (and absolutely no UI).</p><h2>Fragments of imagination</h2><p>Each fragment should represent one screen (or a significant part of one) of your application. A fragment should only be responsible for </p><ol><li>rendering the view model state to the UI and</li><li>listening for UI events and sending them to its view model.</li></ol><p>That’s all. Nothing more. Really.</p><p>View model states should be tailored to concrete UI, should be observable and idempotent. 
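As an illustration of such a UI-tailored, idempotent state, here is a plain-Kotlin sketch (the class, its fields, and the formatting rules are invented for this example, not taken from a real project):

```kotlin
// A state object tailored to one concrete screen: every field is pre-formatted
// by the view model, so the fragment only assigns values to views.
data class ItemDetailState(
    val isLoadingVisible: Boolean = false,
    val title: String = "",
    val price: String = "" // already a display string, e.g. "$4.99"
)

// The view model owns all formatting; the fragment never sees raw domain values.
fun toItemDetailState(rawTitle: String, priceCents: Int): ItemDetailState =
    ItemDetailState(
        isLoadingVisible = false,
        title = rawTitle.trim(),
        price = "\$${priceCents / 100}.${(priceCents % 100).toString().padStart(2, '0')}"
    )
```

Because the mapping is a pure function, it is trivially unit-testable, which is precisely why the formatting belongs here and not in the fragment.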
It’s alright for view models and fragments to be quite tightly coupled (but view models mustn’t know anything about the fragments).</p><p>Because fragments are much harder to test than view models, the view model should pre-format the displayed data as much as possible, so the fragment can be kept extremely simple and just directly assign state properties to its view properties. There shouldn’t be any traces of formatting or any other logic in the fragments.</p><p>The opposite way should be equally simple—the fragment just attaches listeners to its views (our current favorite is the <a href="">Corbind</a> library which transforms Android callbacks to handy and most importantly unified <span class="pre-inline">Flow</span>s) and sends these events directly to the view model.</p><p>That is what fragments should do. But what they shouldn’t do is perhaps even more important:</p><ul><li>Fragments shouldn’t handle navigation between screens, permissions, or any other system stuff, even if the APIs are conveniently accessible from right inside the fragment. </li><li>Fragments shouldn’t know about each other and shouldn’t depend on or interact with their parent activity. </li><li>Fragments shouldn’t know about how they are instantiated and how they are injected (if your DI framework allows this); they also shouldn’t know about fragment transactions, if possible. </li><li>Data should be passed to fragments <em>exclusively</em> through their view models and that should be just the data to be displayed in the UI—forget about fragment arguments and rich domain models in them. </li><li>This almost goes without saying, but fragments shouldn’t do any file, database or network IO (I know, inside the fragment, the almighty Context is sooooo close… Just a small peek into SharedPrefs, please? No, never!). </li><li>Since Android view models got <span class="pre-inline">SavedStateHandle</span>, fragments shouldn’t even persist their state to handle process death. 
</li><li>And for heaven’s sake, never ever use abominations such as headless or retained fragments. </li></ul><p>Some other tips include:</p><ul><li>Fragments should handle only the very basic lifecycle callbacks like <span class="pre-inline">onCreate</span>/<span class="pre-inline">onDestroy</span>, <span class="pre-inline">onViewCreated</span>/<span class="pre-inline">onDestroyView</span>, <span class="pre-inline">onStart</span>/<span class="pre-inline">onStop</span> and <span class="pre-inline">onPause</span>/<span class="pre-inline">onResume</span>. If you need the more mysterious ones, you’re probably going to shoot yourself in the foot in the near future. </li><li>If possible, don’t use the original <span class="pre-inline">ViewPager</span> with fragments—that road leads to madness and memory leaks. There's a safer and more convenient <span class="pre-inline">ViewPager2</span> which works much like <span class="pre-inline">RecyclerView</span>. </li><li>Make dialogs with <span class="pre-inline">DialogFragments</span> <em>integrated with Jetpack Navigation component</em>. It’s much easier to handle their lifecycle (those dismissed dialogs popping on the screen again after device rotation, anyone?) and they can have their own view models. This way, there’s almost no difference between part of your UI being a dialog or a whole screen. </li><li>Sometimes it’s OK for fragments to include other fragments (e.g., a screen containing a <span class="pre-inline">MapFragment</span>), but keep them separate—no direct dependencies and communication between them, no shared view models etc. </li><li>To make your life easier, your project probably has some sort of <span class="pre-inline">BaseFragment</span> which simplifies plumbing, sets up scopes, and what have you. That’s fine, but resist the temptation to pollute it with some “handy” little methods for random things like toasts, snackbars, toolbar handling etc. YAGNI! 
Don’t misuse inheritance as a means to share implementation—that’s what composition is for. </li><li>Our favorite way to access views from fragments is the relatively new and lovely ViewBinding library. It’s simple to integrate, straightforward to use, convenient, type-safe, and greatly reduces boilerplate. No other solution (findViewById, Butter Knife, kotlin-android-extensions plugin or Data Binding library) possesses all these qualities. </li><li>Speaking of Data Binding, even when it isn’t throwing a wrench into your build, we don’t think that making our XMLs smarter than they need to be is a good idea to begin with. And don’t get me started on the testability of such implementations. </li><li>Use <a href="">LeakCanary</a>! The recent versions require practically no setup and automatically watch Activity, Fragment, View and ViewModel instances out of the box. </li></ul><p>After following all this advice (and a little bit of coding), your <em>complete</em> fragment could look like this (the implementation details aren’t important, just look at the amount and <em>intention</em> of the code):</p><pre> <code class="kotlin hljs">// take advantage of reduced visibility if possible so you don’t pollute your project’s global scope
internal class ItemDetailFragment :
    BaseFragment<ItemDetailViewModel, ItemDetailViewModel.State, ItemDetailFragmentBinding>() {

    // required by the DI framework
    override val viewModelClass = ItemDetailViewModel::class

    // layout inflation with the lovely ViewBinding library
    override fun onCreateViewBinding(inflater: LayoutInflater) =
        ItemDetailFragmentBinding.inflate(inflater)

    // initialization of view properties that cannot be set in XML
    override fun ItemDetailFragmentBinding.onInitializeViews() {
        detailContainer.layoutTransition?.disableTransitionType(DISAPPEARING)
    }

    // render the view model state in the UI; kept as simple as possible
    // state properties should preferably be primitive or primitive-like types
    // no DataBinding :)
    // notice the receiver - we don’t have to reference the binding on every single line
    override fun ItemDetailFragmentBinding.onBindState(state: ItemDetailViewModel.State) {
        loading.isVisible = state.isLoadingVisible
        itemTitle.text = state.item.title
        itemCategory.textResId = state.item.categoryResId
        itemFavorite.isChecked = state.item.isFavorite
        itemPrice.text = state.item.price // price is a String and is already properly formatted
        /* ... */
    }

    // the other way around: catch UI events and send them to the view model
    override fun ItemDetailFragmentBinding.onBindViews() {
        toolbar.navigationClicks().collect { viewModel.onBack() }
        checkout.clicks().collect { viewModel.onCheckout() }
        favorite.checkedChanges().collect { isChecked -> viewModel.setFavorite(isChecked) }
        addToWishList.clicks().collect { viewModel.onAddToWishList() }
        addToCart.clicks().collect { viewModel.onAddToCart() }
        /* ... */
    }
}</code></pre><p>That’s not that bad, is it?</p><h2>If you can’t beat them, join them</h2><p>Although hardly an elegant or easy-to-use API, fragments are here to stay. Let’s make the best of this situation: Pragmatically utilize them for their useful integrations and focus on the single real responsibility they have—handling the UI. Ignore the rest and KISS—this principle is extremely important when working with fragments. That way, you’re going to have small, simple, focused fragments—and more importantly, a lot fewer headaches.</p>#architecture;#android;#jetpack
Android jumps on the Java release train<p>For many years, Android was stuck with Java 8. Finally, we got a <a href="">big update</a>. The gap between Java 8 and Java 9 in terms of build compatibility has been overcome and more modern Java versions (up to Java 11) are officially supported on Android. On top of that, Android Gradle Plugin 7.0.0 now requires JDK 11 for running Gradle builds.</p><p>In this post, I’ll describe the technical background of this change and how it might affect your project, even if it’s written exclusively in Kotlin.</p><h2>What is the release train and why have Java 9 and 10 been skipped?</h2><p>Historically, a new major Java version was released “every once in a while”. This led to an irregular release schedule and the language not evolving rapidly.</p><p>Beginning with Java 9, it was decided that a new major Java version would be released every 6 months and LTS (long-term support) releases would arrive every 3 years.</p><table cellspacing="0" width="90%" class="ms-rteTable-default" style="margin-left:auto;margin-right:auto;border:1px solid black;"><tbody><tr class="ms-rteTableHeaderRow-default"><th class="ms-rteTableHeaderEvenCol-default" style="width:18%;"> <strong>Java version</strong> </th><th class="ms-rteTableHeaderOddCol-default" style="width:18%;"> <strong>Release date</strong> </th><th class="ms-rteTableHeaderEvenCol-default" style="width:64%;"> <strong>Selected language features</strong> </th></tr><tr class="ms-rteTableOddRow-default" style="background-color:#f4cccc;"><td class="ms-rteTableEvenCol-default">Java 6</td><td class="ms-rteTableOddCol-default">December 2006</td><td class="ms-rteTableEvenCol-default">No language changes</td></tr><tr class="ms-rteTableEvenRow-default" style="background-color:#f4cccc;"><td class="ms-rteTableEvenCol-default">Java 7</td><td class="ms-rteTableOddCol-default">July 2011</td><td class="ms-rteTableEvenCol-default"> <a href="">Project Coin</a>: 
Diamond operator, Strings in switch, etc. </td></tr><tr class="ms-rteTableOddRow-default" style="background-color:#fff2cc;"><td class="ms-rteTableEvenCol-default">​Java 8 LTS</td><td class="ms-rteTableOddCol-default">March 2014</td><td class="ms-rteTableEvenCol-default"> <a href="">​Lambdas</a><br> <a href="">Type Annotations</a><br> <a href="">Default methods in interfaces</a> </td></tr><tr class="ms-rteTableEvenRow-default" style="background-color:#fff2cc;"><td class="ms-rteTableEvenCol-default">​Java 9</td><td class="ms-rteTableOddCol-default">September 2017</td><td class="ms-rteTableEvenCol-default"> <a href="">Private methods in interfaces</a> </td></tr><tr class="ms-rteTableOddRow-default" style="background-color:#fff2cc;"><td class="ms-rteTableEvenCol-default">​Java 10</td><td class="ms-rteTableOddCol-default">March 2018</td><td class="ms-rteTableEvenCol-default"> <a href="">Local-Variable Type Inference</a> </td></tr><tr class="ms-rteTableEvenRow-default" style="background-color:#d9ead3;"><td class="ms-rteTableEvenCol-default">​Java 11 LTS</td><td class="ms-rteTableOddCol-default">September 2018</td><td class="ms-rteTableEvenCol-default"> <a href="">Local-Variable Syntax for Lambda Parameters</a> </td></tr><tr class="ms-rteTableOddRow-default" style="background-color:#fff2cc;"><td class="ms-rteTableEvenCol-default">​Java 12</td><td class="ms-rteTableOddCol-default">March 2019</td><td class="ms-rteTableEvenCol-default">No stable language features</td></tr><tr class="ms-rteTableEvenRow-default" style="background-color:#fff2cc;"><td class="ms-rteTableEvenCol-default">​Java 13</td><td class="ms-rteTableOddCol-default">September 2019</td><td class="ms-rteTableEvenCol-default">No stable language features</td></tr><tr class="ms-rteTableOddRow-default" style="background-color:#fff2cc;"><td class="ms-rteTableEvenCol-default">​Java 14</td><td class="ms-rteTableOddCol-default">March 2020</td><td class="ms-rteTableEvenCol-default"> <a href="">Switch Expressions</a> 
</td></tr><tr class="ms-rteTableEvenRow-default" style="background-color:#d9ead3;"><td class="ms-rteTableEvenCol-default">​Java 15</td><td class="ms-rteTableOddCol-default">September 2020</td><td class="ms-rteTableEvenCol-default"> <a href="">Text Blocks</a> </td></tr><tr class="ms-rteTableOddRow-default" style="background-color:#cfe2f3;"><td class="ms-rteTableEvenCol-default">​Java 16</td><td class="ms-rteTableOddCol-default">March 2021</td><td class="ms-rteTableEvenCol-default"> <a href="">Pattern Matching for instanceof</a><br> <a href="">Records</a> </td></tr><tr class="ms-rteTableEvenRow-default" style="background-color:#cfe2f3;"><td class="ms-rteTableEvenCol-default">​Java 17 LTS</td><td class="ms-rteTableOddCol-default">September 2021</td><td class="ms-rteTableEvenCol-default">Nothing announced yet</td></tr></tbody></table> <br> <p>Standard releases have quite a short support period and receive just 2 minor updates, exactly 1 and 4 months after their initial release. The LTS releases are guaranteed to be supported until the next LTS version is released, i.e. within a 3-year timeframe (for details about Java release trains, I recommend reading <a href="">Stephen Colebourne's posts</a>).</p><p>Many projects have decided to follow the LTS releases only and now it seems that Google has the same plans for Android. Even though Java 15 is the latest released version, it is a non-LTS version, so Android maintains the latest LTS release, Java 11, as the required minimum.</p><h2>What complicates the update from Java 8 to Java 9 and onwards?</h2><p>Java 9 was the first version released in the new era and brought a lot of new features the community desired. The most significant of these is probably the new module system known by the codename <a href="">“Project Jigsaw”</a>. It has a concept of dependencies that can define a public API and keep the implementation private at the same time.</p><p>This feature is first and foremost meant to be used by libraries. 
As the JDK is a library itself, it has also been modularized. The main advantage is that it is possible to create a smaller runtime with only a subset of necessary modules.</p><p>During this journey, some Java APIs have been made private and others were moved to different packages. This causes trouble for some well-known annotation processors like Dagger. The generated code is usually annotated with the <span class="pre-inline">@Generated</span> annotation, which has been moved to a different package in JDK 9. In the case of an Android project written in Kotlin (which has to use kapt to enable Dagger), the build fails on JDK 9 or newer due to a <a href="">missing @Generated annotation class</a>. Dagger itself has a check for the target Java level and uses the <span class="pre-inline">@Generated</span> annotation from the correct package. However, there was a <a href="">bug in kapt</a> - it didn’t report the configured target Java level to the Java compiler, failing the build and leaving poor developers scratching their heads for hours.</p><p>The restructuring of the JDK was actually broader than just moving Java classes around. The Java compiler needed to be changed as well in order to understand the module system and know how to handle its classpaths appropriately.</p><p>As a result, the <span class="pre-inline">-bootclasspath</span> compiler option (that was used to include <span class="pre-inline">android.jar</span> in the build) was removed, effectively making all Android classes unavailable to the build. Projects that are written 100% in Kotlin are not affected by this until Android view binding (or a similar feature) is enabled. The view binding build step generates Java classes that need to be compiled by <span class="pre-inline">javac</span>. As the generated classes have dependencies on <span class="pre-inline">android.jar</span> classes, the compilation fails when the project is configured to target Java 9 or newer. 
This limitation has been <a href="">known and tracked</a> for quite a long time and now it has finally been resolved as part of AGP 7.0.0.</p><p>Other tools that also needed an update were D8 and R8, as they work directly with class files of new Java versions.</p><h2>What to do to upgrade to Android Gradle Plugin 7.0?</h2><p>When a build of an AGP 7.0 project is executed on JDK 8, it fails immediately with the following error:</p><pre><code class="hljs">An exception occurred applying plugin request [id: '']
Failed to apply plugin ''.
Android Gradle plugin requires Java 11 to run. You are currently using Java 1.8.
You can try some of the following options:
  - changing the IDE settings.
  - changing the JAVA_HOME environment variable.
  - changing `` in ``.</code></pre><p>The only requirement is to use at least JDK 11 for Gradle when building the project. This can be set through the <strong>JAVA_HOME</strong> environment variable, the <strong></strong> Gradle property, or in the <strong>Project Structure</strong> dialog in Android Studio:</p> <img alt="Project Structure dialog in Android Studio" src="/Blog/PublishingImages/Articles/android-java-release-train-01.png" data-themekey="#" /> <h2>Can we use new Java language features?</h2><p>Since Java 9, new language features are usually implemented in the Java compiler without any impact on the bytecode or the JVM. These can be easily handled by Android’s D8 tool and can be used in Android projects without a problem.</p><p>Example:</p><pre><code class="java hljs">public void sayIt() {
    var message = "I am a Java 10 inferred type running on Android";
    System.out.println(message);
}</code></pre><p>You just need to tell Gradle that the project is targeting new Java versions. 
This can be configured in <span class="pre-inline">build.gradle.kts</span>:</p><pre><code class="kotlin hljs">android {
    compileOptions {
        sourceCompatibility = JavaVersion.VERSION_11
        targetCompatibility = JavaVersion.VERSION_11
    }
}</code></pre><p>When <span class="pre-inline">compileOptions</span> are not defined, the defaults come into play. Up until AGP 4.1, the default Java compatibility level was set to a very ancient Java 6. Since AGP 4.2, it has been bumped to (only slightly less ancient) Java 8.</p><h2>Can we use new Java APIs?</h2><p>Regrettably, Java library APIs are a completely different thing.</p><p>Java 8 APIs are available starting with Android 8 (API level 26). Some Java 9 APIs (like <a href="">List.of()</a>) are available starting with Android 11 (API level 30). These APIs might also be available on older Android versions through <a href="">Java APIs desugaring</a>.</p><p>Hopefully, every future Android version will adopt more of these new Java APIs and make them available for use on older Android versions via desugaring.</p><h2>Can we use the latest version - Java 15?</h2><p>We can use JDK 15 for running Gradle builds as it supports the latest Java version <a href="">since Gradle 6.7</a>. </p><p>Unfortunately, we cannot use Java 15 language features in our code. In fact, we cannot use Java 14, 13 and 12 language features either, as the highest supported <span class="pre-inline">sourceCompatibility</span> level is still Java 11. However, the limitation of R8 not being able to parse the latest Java version class files <a href="">was resolved</a> at the beginning of December 2020, so we can hope for Java 15 support arriving soon.</p><h2>How does this affect Kotlin projects?</h2><p>Not much. 
The Kotlin compiler and toolchain are not affected by the JDK used for the Gradle build, nor by the Java compatibility level set for a project.</p><p>However, when you use JDK 11 for the build and Java 11 as the source compatibility level for the Java compiler, it is reasonable to use the same level as the Kotlin JVM target. This allows Kotlin to generate code optimized for newer Java language versions:</p><pre><code class="kotlin hljs">kotlinOptions {
    jvmTarget = JavaVersion.VERSION_11.toString()
}</code></pre><p>When <span class="pre-inline">kotlinOptions</span> are not defined, the default <span class="pre-inline">jvmTarget</span> is again set to a very ancient Java 6. Please define your <span class="pre-inline">kotlinOptions</span>!</p><h2>The bottom line</h2><p>Better late than never, Java 11 has just arrived in the Android world. It won’t change much about your day-to-day work of writing Java code. It may not change your work of writing Kotlin code at all. Nevertheless, it <em>is</em> a big thing for the ecosystem, which promises easier upgrades in the future, which will in turn allow for the deprecation of lots of outdated stuff. Both the Android team at Google and the Kotlin team at JetBrains may finally drop support for Java 6 and Java 8 and focus on more contemporary language versions. That is something we would all profit from.</p><p> <br> </p><p>Pavel Švéda<br></p><p>Twitter: <a href="">@xsveda</a><br><br></p> <br>#android;#java;#kotlin;#gradle