# Development Self-Test Framework User Guide


## Overview

OpenHarmony provides a comprehensive development self-test framework, **developer_test**. As part of the OpenHarmony test toolset, the framework enables you to test your development work by yourself. You can develop test cases based on your test requirements to discover defects at the development phase, greatly improving code quality.

This document describes how to use the development self-test framework of OpenHarmony.


### Overview

After adding or modifying code, OpenHarmony developers want to quickly verify that the modified code works properly, and the system already provides a large number of automated test cases for existing functions, such as TDD cases and XTS cases. The development self-test framework aims to improve your self-test efficiency so that you can quickly execute the specified automated test cases and conduct development testing at the development phase.


### Constraints

Before executing test cases with the framework, you must connect the OpenHarmony device.


## Environment Setup

The development self-test framework depends on the Python environment, and the Python version must be 3.7.5 or later. Before using the framework, refer to the following instructions for configuration.

Click [here](https://gitee.com/openharmony/docs/blob/master/en/device-dev/get-code/sourcecode-acquire.md) to obtain the source code.

### Basic Self-Test Framework Environment

| Environment Dependency| Version| Description|
| ----------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| Operating system| Ubuntu 20.04 or later| Code compilation environment.|
| Linux extended component| libreadline-dev| Plugin used to read commands.|
| Python| 3.7.5 or later| Language used by the test framework.|
| Python plugins| pyserial 3.3 or later, paramiko 2.7.1 or later, setuptools 40.8.0 or later, and rsa 4.0 or later| - **pyserial**: supports Python serial port communication.<br>- **paramiko**: allows Python to use SSH.<br>- **setuptools**: allows Python packages to be created and distributed easily.<br>- **rsa**: implements RSA encryption in Python.|
| NFS server| haneWIN NFS Server 1.2.50 or later, or NFS v4 or later| Used when devices are connected through serial ports. Applies to mini- and small-system devices.|
| HDC| 1.2.0a| Tool that enables devices to be connected through the OpenHarmony Device Connector (HDC).|



1. Install Ubuntu.

    As Ubuntu 20 has built-in Python 3.8.5, you do not need to install Python separately.

2. Run the following command to install the Linux extended component readline:

    ```bash
    sudo apt-get install libreadline-dev
    ```
    The installation is successful if the following information is displayed:
    ```
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    libreadline-dev is already the newest version (7.0-3).
    0 upgraded, 0 newly installed, 0 to remove and 11 not upgraded.
    ```

3. Run the following command to install the **setuptools** plugin:
    ```bash
    pip3 install setuptools
    ```
    The installation is successful if the following information is displayed:
    ```
    Requirement already satisfied: setuptools in d:\programs\python37\lib\site-packages (41.2.0)
    ```

4. Run the following command to install the **paramiko** plugin:
    ```bash
    pip3 install paramiko
    ```
    The installation is successful if the following information is displayed:
    ```
    Installing collected packages: pycparser, cffi, pynacl, bcrypt, cryptography, paramiko
    Successfully installed bcrypt-3.2.0 cffi-1.14.4 cryptography-3.3.1 paramiko-2.7.2 pycparser-2.20 pynacl-1.4.0
    ```

5. Run the following command to install the **rsa** plugin:
    ```bash
    pip3 install rsa
    ```
    The installation is successful if the following information is displayed:
    ```
    Installing collected packages: pyasn1, rsa
    Successfully installed pyasn1-0.4.8 rsa-4.7
    ```

6. Run the following command to install the **pyserial** plugin:
    ```bash
    pip3 install pyserial
    ```
    The installation is successful if the following information is displayed:
    ```
    Requirement already satisfied: pyserial in d:\programs\python37\lib\site-packages\pyserial-3.4-py3.7.egg (3.4)
    ```

7. Install the NFS server if the device outputs results only through the serial port.

    > **NOTE**
    >
    > This operation applies to small-system or mini-system devices, not standard-system devices.

    - Windows OS: Install the **haneWIN NFS Server 1.2.50** package.
    - Linux OS: Run the following command to install the NFS server:
    ```bash
    sudo apt install nfs-kernel-server
    ```
    The installation is successful if the following information is displayed:
    ```
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    nfs-kernel-server is already the newest version (1:1.3.4-2.1ubuntu5.3).
    0 upgraded, 0 newly installed, 0 to remove and 11 not upgraded.
    ```

8. If the device supports HDC connection, install the HDC tool. For details about the installation process, see [HDC-OpenHarmony Device Connector](https://gitee.com/openharmony/developtools_hdc/blob/master/README.md).


### Environment Dependency Check

| Check Item| Action| Requirement|
| -------------------------------------------------- | --------------------------------------------------- | ------------------------- |
| Check whether Python is installed successfully.| Run the **python --version** command.| The Python version is 3.7.5 or later.|
| Check whether the Python plugins are installed successfully.| Go to the **test/developertest** directory and run **start.bat** or **start.sh**.| The **>>>** prompt is displayed.|
| Check the NFS server status (for devices that support only serial port output).| Log in to the development board through the serial port and run the **mount** command to mount the NFS.| The file directory can be mounted.|
| Check whether the HDC is installed successfully.| Run the **hdc -v** command.| The HDC version is 1.2.0a or later.|


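The interpreter-side checks in the table can also be scripted. The following is a minimal sketch, assuming only the standard library; the helper names are illustrative and not part of developer_test:

```python
import importlib.util
import sys

MIN_PYTHON = (3, 7, 5)
# Import names of the required plugins (pyserial is imported as "serial").
REQUIRED_PLUGINS = ["serial", "paramiko", "setuptools", "rsa"]

def python_version_ok(version=sys.version_info):
    """Return True if the interpreter meets the 3.7.5-or-later requirement."""
    return tuple(version[:3]) >= MIN_PYTHON

def missing_plugins(plugins=REQUIRED_PLUGINS):
    """Return the plugins that cannot be located by the import system."""
    return [name for name in plugins if importlib.util.find_spec(name) is None]

if __name__ == "__main__":
    print("Python OK:", python_version_ok())
    print("Missing plugins:", missing_plugins())
```

If `python_version_ok()` prints `True` and the missing-plugin list is empty, the basic Python environment is ready; device-side checks (NFS mount, `hdc -v`) still need to be done manually.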
## Test Case Preparation

The test framework supports multiple types of tests and provides different test case templates for them.

**TDD Test (C++)**

- Naming rules for source files

    The source file name of a test case must be the same as the test suite name. File names must use lowercase letters and be in the [Function]\_[Sub-function]\_**test** format. More specific sub-functions can be added as required.


- The following uses **calculator_sub_test.cpp** as an example to describe how to compile a single-thread test case:
    ```cpp
    /*
     * Copyright (c) 2023 XXXX Device Co., Ltd.
     * Licensed under the Apache License, Version 2.0 (the "License");
     * you may not use this file except in compliance with the License.
     * You may obtain a copy of the License at
     *
     *     http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     */

    #include "calculator.h"
    #include <gtest/gtest.h>

    using namespace testing::ext;

    class CalculatorSubTest : public testing::Test {
    public:
        static void SetUpTestCase(void);
        static void TearDownTestCase(void);
        void SetUp();
        void TearDown();
    };

    void CalculatorSubTest::SetUpTestCase(void)
    {
        // Set a setup function, which will be called before all test cases.
    }

    void CalculatorSubTest::TearDownTestCase(void)
    {
        // Set a teardown function, which will be called after all test cases.
    }

    void CalculatorSubTest::SetUp(void)
    {
        // Set a setup function, which will be called before each test case.
    }

    void CalculatorSubTest::TearDown(void)
    {
        // Set a teardown function, which will be called after each test case.
    }

    /**
     * @tc.name: integer_sub_001
     * @tc.desc: Verify the sub function.
     * @tc.type: FUNC
     * @tc.require: issueNumber
     */
    HWTEST_F(CalculatorSubTest, integer_sub_001, TestSize.Level1)
    {
        // Step 1 Call the function to obtain the test result.
        int actual = Sub(4, 0);

        // Step 2 Use an assertion to compare the obtained result with the expected result.
        EXPECT_EQ(4, actual);
    }
    ```
    The procedure is as follows:
    1. Add comment information to the test case file header.
        ```cpp
        /*
         * Copyright (c) 2023 XXXX Device Co., Ltd.
         * Licensed under the Apache License, Version 2.0 (the "License");
         * you may not use this file except in compliance with the License.
         * You may obtain a copy of the License at
         *
         *     http://www.apache.org/licenses/LICENSE-2.0
         *
         * Unless required by applicable law or agreed to in writing, software
         * distributed under the License is distributed on an "AS IS" BASIS,
         * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
         * See the License for the specific language governing permissions and
         * limitations under the License.
         */
        ```
    2. Add the test framework header file and namespace.
        ```cpp
        #include <gtest/gtest.h>

        using namespace testing::ext;
        ```
    3. Add the header file of the test class.
        ```cpp
        #include "calculator.h"
        ```
    4. Define the test suite (test class).
        > When defining a test suite, ensure that the test suite name is the same as the target to build and uses the upper camel case style.
        ```cpp
        class CalculatorSubTest : public testing::Test {
        public:
            static void SetUpTestCase(void);
            static void TearDownTestCase(void);
            void SetUp();
            void TearDown();
        };

        void CalculatorSubTest::SetUpTestCase(void)
        {
            // Set a setup function, which will be called before all test cases.
        }

        void CalculatorSubTest::TearDownTestCase(void)
        {
            // Set a teardown function, which will be called after all test cases.
        }

        void CalculatorSubTest::SetUp(void)
        {
            // Set a setup function, which will be called before each test case.
        }

        void CalculatorSubTest::TearDown(void)
        {
            // Set a teardown function, which will be called after each test case.
        }
        ```

    5. Add implementation of the test cases, including test case comments and logic.
        ```cpp
        /**
         * @tc.name: integer_sub_001
         * @tc.desc: Verify the sub function.
         * @tc.type: FUNC
         * @tc.require: issueNumber
         */
        HWTEST_F(CalculatorSubTest, integer_sub_001, TestSize.Level1)
        {
            // Step 1 Call the function to obtain the test result.
            int actual = Sub(4, 0);

            // Step 2 Use an assertion to compare the obtained result with the expected result.
            EXPECT_EQ(4, actual);
        }
        ```
        > **NOTE**
        >
        > The value of **@tc.require** must start with AR/SR or issue, for example, **issueI56WJ7**.

- The following uses **base_object_test.cpp** as an example to describe how to compile a multi-thread test case:
    ```cpp
    // The test case file header comment and test case comment are the same as those in the single-thread test case example.

    #include "base_object.h"
    #include <gtest/gtest.h>
    #include <gtest/hwext/gtest-multithread.h>
    #include <unistd.h>

    using namespace testing::ext;
    using namespace testing::mt;

    namespace OHOS {
    namespace AAFwk {
    class AAFwkBaseObjectTest : public testing::Test {......}

    // Step 1 Set the function to be tested to return the factorial result.
    int factorial(int n)
    {
        int result = 1;
        for (int i = 1; i <= n; i++) {
            result *= i;
        }
        printf("Factorial Function Result : %d! = %d\n", n, result);
        return result;
    }

    // Step 2 Use an assertion to compare the obtained result with the expected result.
    void factorial_test()
    {
        int ret = factorial(3); // Call the function to obtain the result.
        std::thread::id this_id = std::this_thread::get_id();
        std::ostringstream oss;
        oss << this_id;
        std::string this_id_str = oss.str();
        long int thread_id = atol(this_id_str.c_str());
        printf("running thread...: %ld\n", thread_id); // Output the ID of the running thread.
        EXPECT_EQ(ret, 6);
    }

    HWTEST_F(AAFwkBaseObjectTest, Factorial_test_001, TestSize.Level1)
    {
        SET_THREAD_NUM(4);
        printf("Factorial_test_001 BEGIN\n");
        GTEST_RUN_TASK(factorial_test);
        printf("Factorial_test_001 END\n");
    }

    HWMTEST_F(AAFwkBaseObjectTest, Factorial_test_002, TestSize.Level1, 6)
    {
        printf("Factorial_test_002 BEGIN\n");
        factorial_test();
        printf("Factorial_test_002 END\n");
    }

    }  // namespace AAFwk
    }  // namespace OHOS
    ```
    The procedure is as follows:
    1. Add comment information to the test case file header.

        > **NOTE**
        >
        > The standard is the same as that of the single-thread test case.

    2. Add the test framework header files and namespaces.
        ```cpp
        #include <gtest/gtest.h>
        #include <gtest/hwext/gtest-multithread.h>
        #include <unistd.h>

        using namespace testing::ext;
        using namespace testing::mt;
        ```
    3. Add the header file of the test class.
        ```cpp
        #include "base_object.h"
        ```
    4. Define the test suite (test class).
        ```cpp
        class AAFwkBaseObjectTest : public testing::Test {......}
        ```

        > **NOTE**
        >
        > The standard is the same as that of the single-thread test case.

    5. Add implementation of the test cases, including test case comments and logic.
        ```cpp
        // Step 1 Set the function to be tested to return the factorial result.
        int factorial(int n)
        {
            int result = 1;
            for (int i = 1; i <= n; i++) {
                result *= i;
            }
            printf("Factorial Function Result : %d! = %d\n", n, result);
            return result;
        }

        // Step 2 Use an assertion to compare the obtained result with the expected result.
        void factorial_test()
        {
            int ret = factorial(3); // Call the function to obtain the result.
            std::thread::id this_id = std::this_thread::get_id();
            std::ostringstream oss;
            oss << this_id;
            std::string this_id_str = oss.str();
            long int thread_id = atol(this_id_str.c_str());
            printf("running thread...: %ld\n", thread_id); // Output the ID of the running thread.
            EXPECT_EQ(ret, 6);
        }

        // GTEST_RUN_TASK(TestFunction) is a multi-thread startup function. The parameter is a custom function.
        // If SET_THREAD_NUM() is not called, the default value 10 will be used.
        HWTEST_F(AAFwkBaseObjectTest, Factorial_test_001, TestSize.Level1)
        {
            SET_THREAD_NUM(4); // Set the number of threads. It can be dynamically set in the same test suite.
            printf("Factorial_test_001 BEGIN\n");
            GTEST_RUN_TASK(factorial_test); // Start the multi-thread execution of the factorial_test task.
            printf("Factorial_test_001 END\n");
        }

        // HWMTEST_F(TEST_SUITE, TEST_TC, TEST_LEVEL, THREAD_NUM)
        // THREAD_NUM sets the number of threads for executing a test case.
        // HWMTEST_F creates the specified number of threads and executes the tested function.
        HWMTEST_F(AAFwkBaseObjectTest, Factorial_test_002, TestSize.Level1, 6)
        {
            printf("Factorial_test_002 BEGIN\n");
            factorial_test();
            printf("Factorial_test_002 END\n");
        }

        // MTEST_ADD_TASK(THREAD_ID, ThreadTestFunc) registers multiple threads without executing them in the current test case. The registered threads are executed later in a unified manner. This API is applicable to multi-thread tests that combine multiple test cases.
        // THREAD_ID distinguishes threads and starts from 0. You can also pass in RANDOM_THREAD_ID to use a random thread ID. In this scenario, each thread ID is unique.
        // MTEST_POST_RUN() executes the previously registered threads in a unified manner.
        ```
        > **NOTE**
        >
        > The comments for multi-thread test cases are the same as those of single-thread test cases.

- About C++ test case templates:

    The following test case templates are provided for your reference.

    | Type| Description|
    | ------------| ------------|
    | HWTEST(A,B,C)| Use this template if the test case execution does not depend on setup or teardown.|
    | HWTEST_F(A,B,C)| Use this template if the test case execution (excluding parameters) depends on setup and teardown.|
    | HWMTEST_F(A,B,C,D)| Use this template if the multi-thread test case execution depends on setup and teardown.|
    | HWTEST_P(A,B,C)| Use this template if the test case execution (including parameters) depends on setup and teardown.|

    In the template names:

    - **A** indicates the test suite name.

    - **B** indicates the test case name, which is in the *Function*_*No.* format. The *No.* is a three-digit number starting from **001**.

    - **C** indicates the test case level. There are five test case levels: guard-control level 0 and non-guard-control levels 1 to 4. Of levels 1 to 4, a smaller value indicates a more important function verified by the test case.

    - **D** indicates the number of threads for executing the multi-thread test case.

    **Note**:

    - The expected result of each test case must have an assertion.

    - The test case level must be specified.

    - It is recommended that the test be implemented step by step according to the template.

    - The test case description is in the standard @tc.*xxx* *value* format. The comment must contain the test case name, description, type, and requirement number. The test case type **@tc.type** can be any of the following:

        | Test Case Type| Code|
        | ------------ | -------- |
        | Function test| FUNC|
        | Performance test| PERF|
        | Reliability test| RELI|
        | Security test| SECU|
        | Fuzz test| FUZZ|

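As an illustration of the comment rules above, the following sketch checks a test case header comment for the four mandatory @tc tags, a valid **@tc.type** code, and an **@tc.require** value that starts with AR, SR, or issue. The helper name and regular expression are assumptions for illustration, not part of developer_test:

```python
import re

VALID_TYPES = {"FUNC", "PERF", "RELI", "SECU", "FUZZ"}

def check_tc_comment(comment):
    """Return a list of problems found in an @tc test case header comment."""
    problems = []
    # Capture each "@tc.tag: value" pair (only the first word of the value is kept).
    tags = dict(re.findall(r"@tc\.(\w+)\s*:\s*(\S+)", comment))
    for required in ("name", "desc", "type", "require"):
        if required not in tags:
            problems.append("missing @tc." + required)
    if tags.get("type") and tags["type"] not in VALID_TYPES:
        problems.append("invalid @tc.type: " + tags["type"])
    if tags.get("require") and not re.match(r"(AR|SR|issue)", tags["require"]):
        problems.append("@tc.require must start with AR/SR/issue")
    return problems
```

A header such as the `integer_sub_001` comment shown earlier passes the check with an empty problem list.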
**TDD Test (JavaScript)**

- Naming rules for source files

    The source file name of a test case must be in the [Function][Sub-function]**Test** format, and each part must use the upper camel case style. More specific sub-functions can be added as required.
    Example:
    ```
    AppInfoTest.js
    ```

- Test case example

    ```js
    /*
    * Copyright (C) 2023 XXXX Device Co., Ltd.
    * Licensed under the Apache License, Version 2.0 (the "License");
    * you may not use this file except in compliance with the License.
    * You may obtain a copy of the License at
    *
    *     http://www.apache.org/licenses/LICENSE-2.0
    *
    * Unless required by applicable law or agreed to in writing, software
    * distributed under the License is distributed on an "AS IS" BASIS,
    * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    * See the License for the specific language governing permissions and
    * limitations under the License.
    */
    import app from '@system.app'

    import {describe, beforeAll, beforeEach, afterEach, afterAll, it, expect} from 'deccjsunit/index'

    describe("AppInfoTest", function () {
        beforeAll(function() {
            // Set a setup function, which will be called before all test cases.
            console.info('beforeAll called')
        })

        afterAll(function() {
            // Set a teardown function, which will be called after all test cases.
            console.info('afterAll called')
        })

        beforeEach(function() {
            // Set a setup function, which will be called before each test case.
            console.info('beforeEach called')
        })

        afterEach(function() {
            // Set a teardown function, which will be called after each test case.
            console.info('afterEach called')
        })

        /*
        * @tc.name:appInfoTest001
        * @tc.desc:verify app info is not null
        * @tc.type: FUNC
        * @tc.require: issueNumber
        */
        it("appInfoTest001", 0, function () {
            // Step 1 Call the function to obtain the test result.
            var info = app.getInfo()

            // Step 2 Use an assertion to compare the obtained result with the expected result.
            expect(info != null).assertEqual(true)
        })
    })
    ```
    The procedure is as follows:
    1. Add comment information to the test case file header.
        ```js
        /*
        * Copyright (C) 2023 XXXX Device Co., Ltd.
        * Licensed under the Apache License, Version 2.0 (the "License");
        * you may not use this file except in compliance with the License.
        * You may obtain a copy of the License at
        *
        *     http://www.apache.org/licenses/LICENSE-2.0
        *
        * Unless required by applicable law or agreed to in writing, software
        * distributed under the License is distributed on an "AS IS" BASIS,
        * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
        * See the License for the specific language governing permissions and
        * limitations under the License.
        */
        ```
    2. Import the APIs to test and the JSUnit test library.
        ```js
        import app from '@system.app'

        import {describe, beforeAll, beforeEach, afterEach, afterAll, it, expect} from 'deccjsunit/index'
        ```
    3. Define the test suite (test class).
        ```js
        describe("AppInfoTest", function () {
            beforeAll(function() {
                // Set a setup function, which will be called before all test cases.
                console.info('beforeAll called')
            })

            afterAll(function() {
                // Set a teardown function, which will be called after all test cases.
                console.info('afterAll called')
            })

            beforeEach(function() {
                // Set a setup function, which will be called before each test case.
                console.info('beforeEach called')
            })

            afterEach(function() {
                // Set a teardown function, which will be called after each test case.
                console.info('afterEach called')
            })
        ```
    4. Write the test cases.
        ```js
        /*
        * @tc.name:appInfoTest001
        * @tc.desc:verify app info is not null
        * @tc.type: FUNC
        * @tc.require: issueNumber
        */
        it("appInfoTest001", 0, function () {
            // Step 1 Call the function to obtain the test result.
            var info = app.getInfo()

            // Step 2 Use an assertion to compare the obtained result with the expected result.
            expect(info != null).assertEqual(true)
        })
        ```
        > **NOTE**
        >
        > The value of **@tc.require** must start with AR/SR or issue, for example, **issueI56WJ7**.

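The C++ and JavaScript naming rules described above can both be expressed as regular expressions. The following sketch checks a source file name against them; the patterns and the helper name are assumptions derived from the rules in this guide, not part of developer_test:

```python
import re

# Lowercase words joined by underscores, ending in "_test", e.g. calculator_sub_test.cpp.
CPP_RULE = re.compile(r"^[a-z0-9]+(_[a-z0-9]+)*_test\.cpp$")
# Upper camel case words ending in "Test", e.g. AppInfoTest.js.
JS_RULE = re.compile(r"^([A-Z][a-z0-9]*)+Test\.js$")

def is_valid_test_source(filename):
    """Return True if a TDD test source file name follows the naming rules."""
    return bool(CPP_RULE.match(filename) or JS_RULE.match(filename))
```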
**Fuzzing Test**

[Fuzzing case specifications](https://gitee.com/openharmony/testfwk_developer_test/blob/master/libs/fuzzlib/README_zh.md)


**Benchmark Test**

[Benchmark case specifications](https://gitee.com/openharmony/testfwk_developer_test/blob/master/libs/benchmark/README_zh.md)

## Test Case Building

When a test case is executed, the test framework searches the test case directory for the build file of the test case and builds the test cases it finds. The following describes how to write build files (GN files) in different programming languages.

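The search step can be pictured as a directory walk that collects BUILD.gn files defining test targets. The following is a minimal sketch of that idea; the function name, the template list, and the matching logic are illustrative assumptions, not the framework's actual implementation:

```python
import os

# A few of the test templates listed later in this section.
TEST_TEMPLATES = ("ohos_unittest", "ohos_js_unittest", "ohos_moduletest")

def find_test_build_files(root):
    """Return paths of BUILD.gn files under root that define a test target."""
    found = []
    for dirpath, _, filenames in os.walk(root):
        if "BUILD.gn" not in filenames:
            continue
        path = os.path.join(dirpath, "BUILD.gn")
        with open(path, encoding="utf-8") as f:
            content = f.read()
        # A file counts as a test build file if it invokes any test template.
        if any(template + "(" in content for template in TEST_TEMPLATES):
            found.append(path)
    return sorted(found)
```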
**TDD Test**

The following provides templates for different languages for your reference.

- **Test case build file example (C++)**

    ```
    # Copyright (c) 2023 XXXX Device Co., Ltd.

    import("//build/test.gni")

    module_output_path = "developertest/calculator"

    config("module_private_config") {
      visibility = [ ":*" ]

      include_dirs = [ "../../../include" ]
    }

    ohos_unittest("CalculatorSubTest") {
      module_out_path = module_output_path

      sources = [
        "../../../include/calculator.h",
        "../../../src/calculator.cpp",
      ]

      sources += [ "calculator_sub_test.cpp" ]

      configs = [ ":module_private_config" ]

      deps = [ "//third_party/googletest:gtest_main" ]
    }

    group("unittest") {
      testonly = true
      deps = [ ":CalculatorSubTest" ]
    }
    ```
    The procedure is as follows:

    1. Add comment information for the file header.
        ```
        # Copyright (c) 2023 XXXX Device Co., Ltd.
        ```
    2. Import the build template.
        ```
        import("//build/test.gni")
        ```
    3. Specify the file output path.
        ```
        module_output_path = "developertest/calculator"
        ```
        > **NOTE**
        >
        > The output path is the *Part name*/*Module name*.

    4. Configure the directories for dependencies.

        ```
        config("module_private_config") {
          visibility = [ ":*" ]

          include_dirs = [ "../../../include" ]
        }
        ```
        > **NOTE**
        >
        > Generally, the dependency directories are configured here and directly referenced in the build script of the test case.

    5. Set the output build file for the test cases.

        ```
        ohos_unittest("CalculatorSubTest") {
        }
        ```
    6. Write the build script (add the source files, configurations, and dependencies) for the test cases.
        ```
        ohos_unittest("CalculatorSubTest") {
          module_out_path = module_output_path
          sources = [
            "../../../include/calculator.h",
            "../../../src/calculator.cpp",
          ]
          sources += [ "calculator_sub_test.cpp" ]
          configs = [ ":module_private_config" ]
          deps = [ "//third_party/googletest:gtest_main" ]
        }
        ```

        > **NOTE**
        >
        > Set the test type based on actual requirements. The following test types are available:
        > - **ohos_unittest**: unit test
        > - **ohos_js_unittest**: FA model JS unit test
        > - **ohos_js_stage_unittest**: stage model ArkTS unit test
        > - **ohos_moduletest**: module test
        > - **ohos_systemtest**: system test
        > - **ohos_performancetest**: performance test
        > - **ohos_securitytest**: security test
        > - **ohos_reliabilitytest**: reliability test
        > - **ohos_distributedtest**: distributed test

    7. Group the test case files by test type.

        ```
        group("unittest") {
          testonly = true
          deps = [ ":CalculatorSubTest" ]
        }
        ```
        > **NOTE**
        >
        > Grouping test cases by test type allows you to execute a specific type of test cases when required.

- **Test case build file example (JavaScript)**

    ```
    # Copyright (C) 2023 XXXX Device Co., Ltd.

    import("//build/test.gni")

    module_output_path = "developertest/app_info"

    ohos_js_unittest("GetAppInfoJsTest") {
      module_out_path = module_output_path

      hap_profile = "./config.json"
      certificate_profile = "//test/developertest/signature/openharmony_sx.p7b"
    }

    group("unittest") {
      testonly = true
      deps = [ ":GetAppInfoJsTest" ]
    }
    ```

    The procedure is as follows:

    1. Add comment information for the file header.

        ```
        # Copyright (C) 2023 XXXX Device Co., Ltd.
        ```

    2. Import the build template.

        ```
        import("//build/test.gni")
        ```

    3. Specify the file output path.

        ```
        module_output_path = "developertest/app_info"
        ```
        > **NOTE**
        >
        > The output path is the *Part name*/*Module name*.

    4. Set the output build file for the test cases.

        ```
        ohos_js_unittest("GetAppInfoJsTest") {
        }
        ```
        > **NOTE**
        >
        > - Use the **ohos_js_unittest** template to define the JavaScript test suite. Pay attention to the difference between JavaScript and C++.
        > - The file generated for the JavaScript test suite must be in .hap format and named after the test suite name defined here. The test suite name must end with **JsTest**.

    5. Configure the **config.json** file and signature file, which are mandatory.

        ```
        ohos_js_unittest("GetAppInfoJsTest") {
          module_out_path = module_output_path

          hap_profile = "./config.json"
          certificate_profile = "//test/developertest/signature/openharmony_sx.p7b"
        }
        ```
        **config.json** is the configuration file required for HAP build. You need to set **target** based on the tested SDK version. Default values can be retained for other items. The following is an example:

        ```json
        {
          "app": {
            "bundleName": "com.example.myapplication",
            "vendor": "example",
            "version": {
              "code": 1,
              "name": "1.0"
            },
            "apiVersion": {
              "compatible": 4,
              "target": 5     // Set it based on the tested SDK version. In this example, SDK 5 is used.
            }
          },
          "deviceConfig": {},
          "module": {
            "package": "com.example.myapplication",
            "name": ".MyApplication",
            "deviceType": [
              "phone"
            ],
            "distro": {
              "deliveryWithInstall": true,
              "moduleName": "entry",
              "moduleType": "entry"
            },
            "abilities": [
              {
                "skills": [
                  {
                    "entities": [
                      "entity.system.home"
                    ],
                    "actions": [
                      "action.system.home"
                    ]
                  }
                ],
                "name": "com.example.myapplication.MainAbility",
                "icon": "$media:icon",
                "description": "$string:mainability_description",
                "label": "MyApplication",
                "type": "page",
                "launchType": "standard"
              }
            ],
            "js": [
              {
                "pages": [
                  "pages/index/index"
                ],
                "name": "default",
                "window": {
                  "designWidth": 720,
                  "autoDesignWidth": false
                }
              }
            ]
          }
        }
        ```

    6. Group the test case files by test type.

        ```
        group("unittest") {
          testonly = true
          deps = [ ":GetAppInfoJsTest" ]
        }
        ```
        > **NOTE**
        >
        > Grouping test cases by test type allows you to execute a specific type of test cases when required.

873- **Example of ArkTS case compilation configuration for the stage model**
874
875    ```
876    # Copyright (C) 2023 XXXX Device Co., Ltd.
877
878    import("//build/test.gni")
879
880    want_output_path = "developertest/stage_test"
881
882    ohos_js_stage_unittest("ActsBundleMgrStageEtsTest") {
883    hap_profile = "entry/src/main/module.json"
884    deps = [
885        ":actbmsstageetstest_js_assets",
886        ":actbmsstageetstest_resources",
887    ]
888    ets2abc = true
889    certificate_profile = "signature/openharmony_sx.p7b"
890    hap_name = "ActsBundleMgrStageEtsTest"
891    subsystem_name = "developertest"
892    part_name = "stage_test"
893    module_out_path = want_output_path
894    }
895    ohos_app_scope("actbmsstageetstest_app_profile") {
896    app_profile = "AppScope/app.json"
897    sources = [ "AppScope/resources" ]
898    }
899    ohos_js_assets("actbmsstageetstest_js_assets") {
900    source_dir = "entry/src/main/ets"
901    }
902    ohos_resources("actbmsstageetstest_resources") {
903    sources = [ "entry/src/main/resources" ]
904    deps = [ ":actbmsstageetstest_app_profile" ]
905    hap_profile = "entry/src/main/module.json"
906    }
907    group("unittest") {
908    testonly = true
909    deps = []
910    deps += [ ":ActsBundleMgrStageEtsTest" ]
911    }
912    ```
913	The procedure is as follows:
914
915	1. Add comment information for the file header.
916
917		```
918		# Copyright (C) 2023 XXXX Device Co., Ltd.
919		```
920
921	2. Import the build template.
922
923		```
924		import("//build/test.gni")
925		```
926
927	3. Specify the file output path.
928
929		```
930		want_output_path = "developertest/stage_test"
931		```
932		> **NOTE**
933		>
934		> The output path is the *Part name*/*Module name*.
935
936	4. Set the output build file for the test cases.
937
938		```
939		ohos_js_stage_unittest("ActsBundleMgrStageEtsTest") {
940		}
941		```
942		>  **NOTE**
943		>
944		> Use the **ohos_js_stage_unittest** template to define the ArkTS test suite for the stage model.
945
946	5. Specify the configuration file **module.json**, signature file, part name, and compilation output path, which are all mandatory.
947
948		```
949		ohos_js_stage_unittest("ActsBundleMgrStageEtsTest") {
950        hap_profile = "entry/src/main/module.json"
951        certificate_profile = "signature/openharmony_sx.p7b"
952        subsystem_name = "developertest"
953        part_name = "stage_test" // Part name
954		}
955		```
956
957	6. Specify the configuration resource file (add the source files, configurations, and dependencies).
958		```
959		# Declare an AppScope module for the HAP. Those specified by app_profile and sources will be combined to a specific entry file for compilation.
960		ohos_app_scope("actbmsstageetstest_app_profile") {
961        app_profile = "AppScope/app.json"
962        sources = [ "AppScope/resources" ]
963		}
964
965		# Place the test case code for the stage model in the ets directory.
966		ohos_js_assets("actbmsstageetstest_js_assets") {
967        source_dir = "entry/src/main/ets"
968		}
969
970		# Source files are stored in the resources directory after compilation in the stage model.
971		ohos_resources("actbmsstageetstest_resources") {
972        sources = [ "entry/src/main/resources" ]
973        deps = [ ":actbmsstageetstest_app_profile" ]
974        hap_profile = "entry/src/main/module.json"
975		}
976
977		```
978
979   7. Group the test case files by test type.
980
981       ```
982       group("unittest") {
983       testonly = true
984       deps = [ ":GetAppInfoJsTest" ]
985       }
986       ```
987		> **NOTE**
988		>
989		> Grouping test cases by test type allows you to execute a specific type of test cases when required.
990
991**Configuring bundle.json**
992
993Configure the part build file to associate with specific test cases.
994```
995"build": {
996    "sub_component": [
997		"//test/testfwk/developer_test/examples/app_info:app_info",
998		"//test/testfwk/developer_test/examples/detector:detector",
999		"//test/testfwk/developer_test/examples/calculator:calculator"
1000    ],
1001    "inner_list": [
1002		{
1003			"header": {
1004				"header_base": "////test/testfwk/developer_test/examples/detector/include",
1005				"header_files": [
1006					"detector.h"
1007				]
1008		},
1009		"name": "//test/testfwk/developer_test/examples/detector:detector"
1010	  }
1011    ],
1012    "test": [ // Test under configuration module calculator.
1013      "//test/testfwk/developer_test/examples/app_info/test:unittest",
1014      "//test/testfwk/developer_test/examples/calculator/test:unittest",
1015      "//test/testfwk/developer_test/examples/calculator/test:fuzztest"
1016 }
1017```
1018> **NOTE**
1019>
> **test** contains the test cases of the corresponding module.
1021
1022## Configuring Test Resources
1023
1024The test resources mainly include external file resources such as image files, video files, and third-party libraries required during test case execution. Currently, only static resources can be configured.
1025
1026Perform the following steps:
1027
10281. Create the **resource** directory in the **test** directory of the part, and create a directory for the module in the **resource** directory to store resource files of the module.
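
	For example, the layout for a hypothetical module named **calculator** could be created as follows (the part and file names are illustrative, not from the source tree):

	```shell
	cd "$(mktemp -d)"                  # stand-in for the part's root directory
	mkdir -p test/resource/calculator  # module directory under resource
	touch test/resource/calculator/ohos_test.xml
	touch test/resource/calculator/test.jpg
	find test -type d                  # prints test, test/resource, test/resource/calculator
	```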
1029
10302. In the module directory under **resource**, create the **ohos_test.xml** file in the following format:
1031
1032	```xml
1033	<?xml version="1.0" encoding="UTF-8"?>
1034	<configuration ver="2.0">
1035		<target name="CalculatorSubTest">
1036			<preparer>
1037				<option name="push" value="test.jpg -> /data/test/resource" src="res"/>
1038				<option name="push" value="libc++.z.so -> /data/test/resource" src="out"/>
1039			</preparer>
1040		</target>
1041	</configuration>
1042	```
1043
1044 3. In the build file of the test cases, configure **resource_config_file** to point to the resource file **ohos_test.xml**.
1045
1046	```
1047	ohos_unittest("CalculatorSubTest") {
1048	resource_config_file = "//system/subsystem/partA/test/resource/calculator/ohos_test.xml"
1049	}
1050	```
1051	> **NOTE**
1052	>
1053	>- **target_name** indicates the test suite name defined in the **BUILD.gn** file in the **test** directory. **preparer** indicates the action to perform before the test suite is executed.
1054	>- **src="res"** indicates that the test resources are in the **resource** directory under the **test** directory. **src="out"** indicates that the test resources are in the **out/release/$(*part*)** directory.
1055
1056## Test Case Execution
1057
1058### Configuration File user_config.xml
1059
1060Before executing test cases, you need to modify the configuration in **developer_test\config\user_config.xml** based on the device used.
1061
1062```xml
1063<user_config>
1064  <build>
1065    <!-- Whether to build a demo case. The default value is false. If a demo case is required, change the value to true. -->
1066    <example>false</example>
1067    <!-- Whether to build the version. The default value is false. -->
1068    <version>false</version>
1069    <!-- Whether to build the test cases. The default value is true. If the build is already complete, change the value to false before executing the test cases.-->
1070    <testcase>true</testcase>
	<!-- Specify whether the target CPU for building test cases is 32-bit or 64-bit. The default value is null (32-bit). To build 64-bit cases, set arm64. -->
1072    <parameter>
1073       <target_cpu></target_cpu>
1074    </parameter>
1075  </build>
1076  <environment>
1077    <!-- Configure the IP address and port number of the remote server to support connection to the device through the OpenHarmony Device Connector (HDC).-->
1078    <device type="usb-hdc">
1079      <ip></ip>
1080      <port></port>
1081      <sn></sn>
1082    </device>
1083    <!-- Configure the serial port information of the device to enable connection through the serial port.-->
1084    <device type="com" label="ipcamera">
1085      <serial>
1086        <com></com>
1087        <type>cmd</type>
1088        <baud_rate>115200</baud_rate>
1089        <data_bits>8</data_bits>
1090        <stop_bits>1</stop_bits>
1091        <timeout>1</timeout>
1092      </serial>
1093    </device>
1094  </environment>
1095  <!-- Configure the test case path. If the test cases have not been built (<testcase> is true), leave this parameter blank. If the build is complete, enter the path of the test cases.-->
1096  <test_cases>
1097    <dir></dir>
1098  </test_cases>
1099  <!-- Configure the coverage output path.-->
1100  <coverage>
1101    <outpath></outpath>
1102  </coverage>
1103  <!-- Configure the NFS mount information when the tested device supports only the serial port connection. Specify the NFS mapping path. host_dir indicates the NFS directory on the PC, and board_dir indicates the directory created on the board. -->
1104  <NFS>
1105    <host_dir></host_dir>
1106    <mnt_cmd></mnt_cmd>
1107    <board_dir></board_dir>
1108  </NFS>
1109</user_config>
1110```
1111> **NOTE**
1112>
1113> If HDC is connected to the device before the test cases are executed, you only need to configure the device IP address and port number, and retain the default settings for other parameters.
1114
1115### Executing Test Cases on Windows
1116#### **Test Case Building**
1117
1118Test cases cannot be built on Windows. You need to run the following command to build test cases on Linux:
1119```
1120./build.sh --product-name {product_name} --build-target make_test
1121```
1122
1123> **NOTE**
1124>
1125>- **product-name**: specifies the name of the product to be compiled.
1126>- **build-target**: specifies the test case to build. **make_test** indicates all test cases. You can specify the test cases based on requirements.
1127
1128When the build is complete, the test cases are automatically saved in **out/ohos-arm-release/packages/phone/tests**.
1129
1130#### Setting Up the Execution Environment
1. On Windows, create the **Test** directory for the test framework, and then create the **testcase** directory inside it.
1132
11332. Copy **developertest** and **xdevice** from the Linux environment to the **Test** directory on Windows, and copy the test cases to the **testcase** directory.
1134
1135	> **NOTE**
1136	>
1137	> Port the test framework and test cases from the Linux environment to the Windows environment for subsequent execution.
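
	The resulting directory layout on Windows then looks like this (the drive letter is illustrative):

	```
	D:\Test
	├── developertest
	├── xdevice
	└── testcase
	    └── tests
	```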
1138
11393. Modify the **user_config.xml** file.
1140	```xml
1141	<build>
1142	  <!-- Because the test cases have been built, change the value to false. -->
1143	  <testcase>false</testcase>
1144	</build>
1145	<test_cases>
1146	  <!-- The test cases are copied to the Windows environment. Change the test case output path to the path of the test cases in the Windows environment.-->
1147	  <dir>D:\Test\testcase\tests</dir>
1148	</test_cases>
1149	```
1150	> **NOTE**
1151	>
1152	> **\<testcase>** indicates whether to build test cases. **\<dir>** indicates the path for searching for test cases.
1153
1154#### Executing Test Cases
1155
11561. Start the test framework.
1157	```
1158	start.bat
1159	```
11602. Select the product.
1161
1162    After the test framework starts, you are asked to select a product. Select the development board to test.
1163
	If you need to manually add a product, add it within the **\<productform\>** tag in **config/framework_config.xml**.
1165
11663. Execute the test cases.
1167
1168    Run the following commands to execute test cases:
1169	```
1170	run -t UT
1171	run -t UT -tp PartName
1172	run -t UT -tp PartName -tm TestModuleName
1173	run -t UT -tp ability_base -ts base_object_test
1174	run -t UT -tp PartName -tm TestModuleName -ts CalculatorSubTest
1175	run -t UT -ts base_object_test
1176	run -t UT -ts base_object_test -tc AAFwkBaseObjectTest.BaseObject_test_001
	run -t UT -ts CalculatorSubTest -tc CalculatorSubTest.integer_sub_001
1178	run -t UT -cov coverage
1179	run -t UT -ra random
1180	run -t UT -ts base_object_test --repeat 5
1181	run -hl
1182	run -rh 3
1183	run --retry
1184	```
1185
1186
1187	In the command:
1188	```
1189	-**t [TESTTYPE]**: specifies the test type, which can be **UT**, **MST**, **ST**, **PERF**, **FUZZ**, **BENCHMARK**, **ACTS**, **HATS**, and more. This parameter is mandatory.
1190	-**tp [TESTPART]**: specifies the part to test. This parameter can be used independently.
1191	-**tm [TESTMODULE]**: specifies the module to test. This parameter must be specified together with **-tp**.
1192	-**ts [TESTSUITE]**: specifies the test suite. This parameter can be used independently.
1193	-**tc [TESTCASE]**: specifies the test case. This parameter must be specified together with **-ts** to indicate the test suite.
1194	-**cov [COVERAGE]**: specifies the coverage.
1195	-**h**: displays help information.
1196	-**ra [random]**: specifies the out-of-order execution for C++ cases.
1197	--**repeat**: specifies the number of case execution times.
1198	-**hl [HISTORYLIST]**: enables the display of the latest 10 test cases. If there are more than 10 test cases, only the latest 10 test cases are displayed.
1199	-**rh [RUNHISTORY]**: specifies the sequence number of the historical record to execute.
1200	--**retry**: checks the last execution result and re-runs the failed test cases, if any.
1201	```
1202
1203
1204### Executing Test Cases on Linux
1205
1206
1207#### Configuring Remote Port Mapping and Modifying Configuration File
1208To enable test cases to be executed on a remote Linux server or a Linux VM, map the port to enable communication between the device and the remote server or VM. Configure port mapping as follows:
12091. On the HDC server, run the following commands:
1210	```
1211	hdc_std kill
1212	hdc_std -m -s 0.0.0.0:8710
1213	```
1214	> **NOTE**
1215	>
1216	> The IP address and port number are default values.
1217
12182. On the HDC client, run the following command:
1219	```
1220	hdc_std -s xx.xx.xx.xx:8710 list targets
1221	```
1222	> **NOTE**
1223	>
1224	> Enter the IP address of the device to test.
1225
12263. Modify the **user_config.xml** file.
1227	```xml
1228	<build>
	  <!-- If test cases need to be built, set this parameter to true. Otherwise, set it to false. -->
1230	  <testcase>true</testcase>
1231	</build>
1232	<environment>
1233    <!-- Configure the IP address, port number, and SN of the remote server to support connection to the device through HDC. -->
1234    <device type="usb-hdc">
1235      <ip></ip>
1236      <port></port>
1237      <sn></sn>
1238    </device>
	</environment>
1240	```
1241
1242
1243#### Executing Test Cases
12441. Start the test framework.
1245	```
1246	./start.sh
1247	```
12482. Select the product.
1249
1250    After the test framework starts, you are asked to select a product. Select the development board to test.
1251
1252	If the displayed product list does not contain the target one, you can add it in the **\<productform\>** tag in **config/framework_config.xml**.
1253
1254	```
1255	<framework_config>
1256	 <productform>
1257	  <option name="ipcamera_hispark_aries" />
1258	  <option name="ipcamera_hispark_taurus" />
1259	  <option name="wifiiot_hispark_pegasus" />
1260	  <option name="" />
1261	 </productform>
1262	</framework_config>
1263
1264	```
1265
12663. Execute the test cases.
1267
1268    1. TDD commands
1269
1270    The test framework locates the test cases based on the command, and automatically builds and executes the test cases.
1271	```
1272	run -t UT
1273	run -t UT -tp PartName
1274	run -t UT -tp PartName -tm TestModuleName
1275	run -t UT -tp ability_base -ts base_object_test
1276	run -t UT -tp PartName -tm TestModuleName -ts CalculatorSubTest
1277	run -t UT -ts base_object_test
1278	run -t UT -ts base_object_test -tc AAFwkBaseObjectTest.BaseObject_test_001
	run -t UT -ts CalculatorSubTest -tc CalculatorSubTest.integer_sub_001
	run -t UT -cov coverage
1281	run -t UT -ra random
1282	run -t UT -tp PartName -pd partdeps
1283	run -t UT -ts base_object_test --repeat 5
1284	run -hl
1285	run -rh 3
1286	run --retry
1287	```
1288	In the command:
1289	```
1290	-**t [TESTTYPE]**: specifies the test type, which can be **UT**, **MST**, **ST**, **PERF**, **FUZZ**, and **BENCHMARK**. This parameter is mandatory.
1291	-**tp [TESTPART]**: specifies the part to test. This parameter can be used independently.
1292	-**tm [TESTMODULE]**: specifies the module to test. This parameter must be specified together with **-tp**.
1293	-**ts [TESTSUITE]**: specifies the test suite. This parameter can be used independently.
1294	-**tc [TESTCASE]**: specifies the test case. This parameter must be specified together with **-ts** to indicate the test suite.
1295	-**cov [COVERAGE]**: specifies the coverage.
1296	-**h**: displays help information.
1297	-**ra [random]**: specifies the out-of-order execution for C++ cases.
1298	-**pd [partdeps]**: specifies execution parameter of the level-2 part dependencies.
1299	--**repeat**: specifies the number of case execution times.
1300	-**hl [HISTORYLIST]**: enables the display of the latest 10 test cases. If there are more than 10 test cases, only the latest 10 test cases are displayed.
1301	-**rh [RUNHISTORY]**: specifies the sequence number of the historical record to execute.
1302	--**retry**: checks the last execution result and re-runs the failed test cases, if any.
1303	```
1304
1305	In Linux, you can run the following commands to view the supported product forms, test types, subsystems, and parts.
1306	```
1307	To view the help information, run **help**.
1308	To view the **show** command, run **help show**.
1309	To view the supported product forms, run **show productlist**.
1310	To view the supported test types, run **show typelist**.
	To view the supported test subsystems, run **show subsystemlist**.
1312	To view the supported test parts, run **show partlist**.
1313	```
1314	2. ACTS/HATS commands
1315
1316	After selecting the product, you can refer to the following to execute the ACTS or HATS test cases.
1317	```
1318	run -t ACTS
1319	run -t HATS
1320	run -t ACTS -ss arkui
	run -t ACTS -ss arkui,modulemanager
1322	run -t ACTS -ss arkui -ts ActsAceEtsTest
1323	run -t HATS -ss telephony -ts HatsHdfV1RilServiceTest
1324	run -t ACTS -ss arkui -tp ActsPartName
1325	run -t ACTS -ss arkui -ts ActsAceEtsTest,ActsAceEtsResultTest
1326	run -t HATS -ss powermgr -ts HatsPowermgrBatteryTest,HatsPowermgrThermalTest
1327	run -t ACTS -ss arkui -ts ActsAceEtsTest -ta class:alphabetIndexerTest#alphabetIndexerTest001
1328	run -t ACTS -ss arkui -ts ActsAceEtsTest -ta class:alphabetIndexerTest#alphabetIndexerTest001 --repeat 2
1329	run -hl
1330	run -rh 1
1331	run --retry
1332	```
1333	The parameters in the ACTS and HATS commands are the same, but are different from those in TDD commands.
1334	```
1335	-**t [TESTTYPE]**: specifies the test case type, which can be **ACTS** or **HATS**. This parameter is mandatory.
1336	-**ss [SUBSYSTEM]**: specifies the subsystem to test. This parameter can be used independently. To specify multiple subsystems, separate them with commas (,).
1337	-**tp [TESTPART]**: specifies the part to test. This parameter can be used independently.
1338	-**ts [TESTSUITE]**: specifies the test suite. This parameter can be used independently. To specify multiple test suites, separate them with commas (,).
1339	-**ta [TESTARGS]**: specifies the test method. This parameter must be used together with **-ts**.
1340	--**repeat**: specifies the number of case execution times.
1341	-**hl [HISTORYLIST]**: enables the display of the latest 10 test cases. If there are more than 10 test cases, only the latest 10 test cases are displayed.
1342	-**rh [RUNHISTORY]**: specifies the sequence number of the historical record to execute.
1343	--**retry**: checks the last execution result and re-runs the failed test cases, if any.
1344	```
1345
1346## Viewing the Test Result
1347
1348
1349After the test is executed, the console automatically generates the test result.
1350
1351You can obtain the test result in the following directory:
1352```
1353test/developertest/reports/xxxx_xx_xx_xx_xx_xx
1354```
1355> **NOTE**
1356>
1357> The test report folder is automatically generated.
1358
1359The folder contains the following files:
1360| Type                                | Description              |
1361| ------------------------------------ | ------------------ |
| result/                              | Test results in standard format.|
1363| log/plan_log_xxxx_xx_xx_xx_xx_xx.log | Test case logs.               |
1364| summary_report.html                  | Test report summary.          |
1365| details_report.html                  | Detailed test report.         |
1366
1367
1368
1369## Executing Coverage Cases
1370When GCDA data is available, you can execute the test cases as follows for subsystems to generate a coverage report:
1371
13721. (Optional) To block redundant branch data generated by non-core code, run the following command in the **/test/testfwk/developer_test/localCoverage/restore_comment** directory before source code compilation:
1373
1374       python3 build_before_generate.py
1375
1376   Run the following command to select the parts to be blocked during compilation:
1377
1378       run -tp partname
1379       run -tp partname1 partname2
1380
2. Before compiling the version, modify the compilation options: add **--coverage** to the **cflags**, **cflags_cc**, and **ldflags** options in the **build.gn** file of the involved subsystem.
1382
1383       ldflags = [ "--coverage" ]
1384       C:   cflags = [ "--coverage" ]
1385       C++: cflags_cc = [ "--coverage" ]
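
   For example, in the BUILD.gn of the code under test, the options might be placed as follows (the target and source names are illustrative, not from the source tree):

       ohos_shared_library("calculator") {
         sources = [ "src/calculator.cpp" ]
         cflags = [ "--coverage" ]       # C
         cflags_cc = [ "--coverage" ]    # C++
         ldflags = [ "--coverage" ]
       }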
1386
1387   **Recommended**: You can also refer to the mode for the window subsystem. For details, see the files in this [pull request](https://gitee.com/openharmony/window_window_manager/pulls/1274/files).
1388
13893. To execute coverage test cases, perform the following to install the dependencies:
1390
1391       1. Run the **sudo apt install lcov** command to install lcov.
1392       2. Run the **apt install dos2unix** command to install dos2unix.
1393       3. Run the **pip install lxml** command to install lxml.
1394       4. Run the **pip install selectolax** command to install selectolax.
1395       5. Run the **pip install CppHeaderParser** command to install CppHeaderParser.
1396
4. To map a remote device, set its IP address in the **user_config.xml** file. For details about device mapping, see [Configuring Remote Port Mapping and Modifying Configuration File](#configuring-remote-port-mapping-and-modifying-configuration-file).
1398
1399       <!-- Set the IP address of the remote host to map (IP address of the PC to which the device is mounted).-->
1400       <device type="usb-hdc">
1401         <ip></ip>
1402         <port></port>
1403         <sn></sn>
1404       </device>
1405
14065. Run the **./start.sh** command. Below are examples:
1407
1408       run -t UT -tp *Part name* -cov coverage
1409       run -t UT -ss *Subsystem name* -cov coverage
       run -t UT -ss *Subsystem name* -tp *Part name* -cov coverage
1411       run -t UT MST ST -tp *Part name* -cov coverage
1412
1413   > **NOTE**
1414   >
1415   > The **-cov coverage** parameter must be added to the preceding commands.
1416
14176. Obtain the coverage report from the following paths:
1418
1419   Code coverage report: **/test/testfwk/developer_test/localCoverage/codeCoverage/results/coverage/reports/cxx/html**
1420
1421   API coverage report: **/test/testfwk/developer_test/localCoverage/interfaceCoverage/results/coverage/interface_kits/html**
1422