EECS 4443
Winter 2023
Assignment
Automated UI Testing, Profiling, Benchmarking
The purpose of this assignment is to let you practice the tools and processes necessary to complete the project. The assignment is done individually, but you are expected to contribute the knowledge and experience you gain to the project. You will use the project Demo_Quotation to practice the tools. The assignment consists of three parts. The three parts are independent in theory, but you can use the products of each part to carry out the others.
Part 1a: Automated UI Testing
You will use the Espresso testing framework from Android Studio (https://developer.android.com/training/testing/espresso) to test the UI of the project.
1. Identify the main use cases of the application (i.e., what users can do and how they can do it). For each use case and its interactions, define test cases using the following template:
You may need to define more than one test case per use case to cover exceptional paths (e.g., what if a user performs an unexpected action, like hitting the back button).
2. Develop in code the necessary test cases. You can use the Espresso Test Recorder (https://developer.android.com/studio/test/other-testing-tools/espresso-test-recorder) following the “Procedure” part of your test cases. Reminder: You may have to go back and adjust the code generated by Espresso to better fit your testing purposes. You can also extend the generated tests to cover assertions and other cases not covered by your replay.
3. Make sure that your tests pass when you execute them.
4. The deliverable for this part is the test code you produced for your project, as well as a document with the test cases. You also need to provide the configuration of your testing environment (devices, OS versions, etc.).
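A test built following the steps above might look like the sketch below. This is an instrumented test, so it runs on a device or emulator, not on the JVM alone. The activity name and view IDs (MainActivity, R.id.quote_button, R.id.quote_text) are assumptions for illustration; adjust them to match Demo_Quotation's actual layout.

```java
import static androidx.test.espresso.Espresso.onView;
import static androidx.test.espresso.action.ViewActions.click;
import static androidx.test.espresso.assertion.ViewAssertions.matches;
import static androidx.test.espresso.matcher.ViewMatchers.isDisplayed;
import static androidx.test.espresso.matcher.ViewMatchers.withId;

import androidx.test.ext.junit.rules.ActivityScenarioRule;
import androidx.test.ext.junit.runners.AndroidJUnit4;

import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(AndroidJUnit4.class)
public class QuotationUiTest {

    // Launches the activity under test before each test method.
    @Rule
    public ActivityScenarioRule<MainActivity> activityRule =
            new ActivityScenarioRule<>(MainActivity.class);

    @Test
    public void tappingButtonShowsAQuote() {
        // Procedure: perform the user action from the test case.
        onView(withId(R.id.quote_button)).perform(click());
        // Expected result: assert on the visible outcome.
        onView(withId(R.id.quote_text)).check(matches(isDisplayed()));
    }
}
```

The Espresso Test Recorder will generate similar code from your replay; the `check(matches(...))` assertions are the part you will usually need to add or tighten by hand.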
Part 1b: Add new search functionality and test it
1. Add functionality to search for famous people and quotes in your app. To implement search, you can follow this guide (https://developer.android.com/develop/ui/views/search). You can add a search text field at the top of your application or a search menu item in the application’s context menu.
2. You can use any and all features that come with search, for example autocomplete, filtering, and so on. You can allow users to search for a famous person by name or by the content of their quotes.
3. Create new test cases for the new functionality and implement the corresponding test classes using Espresso.
4. The deliverable for this part is the extension of the previous deliverable, including the test documentation and the code.
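One way to keep the search feature easy to test is to put the matching logic in a plain Java class, callable both from a SearchView's OnQueryTextListener and from unit tests. Below is a minimal sketch of such a filter; the quote representation ({author, text} string pairs) and the class name QuoteFilter are assumptions for illustration, not part of Demo_Quotation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// Case-insensitive search over quotes, where each quote is an
// {author, text} pair. A quote matches if the query appears in
// the author's name or in the quote's content.
public class QuoteFilter {

    public static List<String[]> filter(List<String[]> quotes, String query) {
        String q = query.toLowerCase(Locale.ROOT);
        List<String[]> matches = new ArrayList<>();
        for (String[] quote : quotes) {
            String author = quote[0].toLowerCase(Locale.ROOT);
            String text = quote[1].toLowerCase(Locale.ROOT);
            if (author.contains(q) || text.contains(q)) {
                matches.add(quote);
            }
        }
        return matches;
    }
}
```

Because the logic is UI-independent, your Espresso tests for this part only need to type a query and assert on the displayed results, while edge cases (empty query, no matches, mixed case) can be covered cheaply in local unit tests.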
Part 2: Profile your application
1. Use Android Profiler (https://developer.android.com/studio/profile/android-profiler) to profile your application. Follow the necessary steps to configure your build and run configurations for the profiling.
2. We are interested in the CPU, memory and energy profiles of your application, since there is very little network activity.
3. You are supposed to generate traces and visualizations for the three profiles using the Profiler’s tools.
4. You can use your Espresso tests to create “load” for your application. You may want to adjust some actions to create as much load as possible so that you see some “spikes” in utilization. For example, you can use long quizzes or search for something that returns a lot of results.
5. The deliverable for this part is the profiling configuration you used (including the code) and a document where you present the visualizations with a few comments on the performance of your app.
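Debug builds are profileable by default, so the Profiler works out of the box for this assignment. If you also want to profile a release (non-debuggable) build, one option on API level 29 and above is to mark the app as profileable in the manifest, as sketched below; this is an optional configuration, not a requirement of the assignment.

```xml
<!-- Inside <application> in AndroidManifest.xml: allows the Android
     Profiler (and shell tools) to attach to a non-debuggable build. -->
<profileable android:shell="true" />
```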
Part 3: Benchmark your application
1. Based on the profiling you did before, identify the two use cases with the highest utilization in CPU, memory or energy. Do not worry about the absolute numbers. Pick the two most expensive use cases, even if their absolute utilization is low.
2. Further test these use cases by writing benchmarks for them (https://developer.android.com/topic/performance/benchmarking/microbenchmark-write).
3. The deliverable for this part is the configuration for the benchmark (and the code), and a document where you report the process and the results of the benchmarking as returned by the tool.
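A microbenchmark for one of your expensive use cases might be sketched as below, using the androidx.benchmark Microbenchmark library from the linked guide (it runs as an instrumented test in a separate benchmark module). The measured call, a hypothetical QuoteFilter.filter, stands in for whichever operation your profiling identified; the fixture data is also an assumption and should be replaced with your app's real data set.

```java
import androidx.benchmark.BenchmarkState;
import androidx.benchmark.junit4.BenchmarkRule;
import androidx.test.ext.junit.runners.AndroidJUnit4;

import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

import java.util.Arrays;
import java.util.List;

@RunWith(AndroidJUnit4.class)
public class SearchBenchmark {

    @Rule
    public BenchmarkRule benchmarkRule = new BenchmarkRule();

    // Hypothetical fixture; load your real quote set here so the
    // benchmark exercises a realistic amount of work.
    private final List<String[]> quotes = Arrays.asList(
            new String[]{"Albert Einstein", "Imagination is more important than knowledge."},
            new String[]{"Mark Twain", "The secret of getting ahead is getting started."});

    @Test
    public void benchmarkQuoteSearch() {
        BenchmarkState state = benchmarkRule.getState();
        while (state.keepRunning()) {
            // Only the body of this loop is measured and reported.
            QuoteFilter.filter(quotes, "getting");
        }
    }
}
```

The tool reports timing statistics (e.g., median and minimum time per iteration) in the test output and in a JSON file; those numbers are what you include in the benchmarking document.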
NOTE: You do not have to provide three separate code bases. One that includes all parts is enough as long as you have properly documented the different interventions (using Javadoc and code comments).