Testing plugins: minor

Yann Cébron 2021-09-08 18:01:06 +02:00
parent 0e840bc3fe
commit 1ff05cc7f8
4 changed files with 8 additions and 2 deletions

View File

@ -12,6 +12,8 @@ The test project files exist either in a temporary directory or in an in-memory
> If you get an unexpected error after a series of successful runs, **try rerunning the test**, and if that doesn't help, **delete the "system" subdirectory** in your [sandbox directory](ide_development_instance.md#the-development-instance-sandbox-directory).
>
{type="warning"}
## Testdata Files
In your plugin, you usually store the test data for your tests (such as files on which plugin features will be executed and expected output files) in the <path>testdata</path> directory.
This is just a directory under your plugin's content root, but not under a source root.
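
As a minimal sketch, a test class points the fixture at this directory by overriding `getTestDataPath()`; the `src/test/testData` layout and class name are assumptions for this example, not prescribed values:

```java
import com.intellij.testFramework.fixtures.BasePlatformTestCase;

public class MyPluginTest extends BasePlatformTestCase {

  // Tell the fixture where the test data lives; the path is relative to the
  // working directory of the test run. The "src/test/testData" layout is an
  // assumption for this sketch.
  @Override
  protected String getTestDataPath() {
    return "src/test/testData";
  }
}
```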

View File

@ -8,6 +8,8 @@ The IntelliJ Platform provides a dedicated utility and markup format for this ta
To test the highlighting for the file currently loaded into the in-memory editor, you invoke the `checkHighlighting()` method.
The parameters to the method specify which severities should be taken into account when comparing the results with the expected results: errors are always taken into account, whereas warnings, weak warnings, and infos are optional.
Alternatively, you can use the `testHighlighting()` method, which loads a <path>testdata</path> file into the in-memory editor and highlights it as a single operation.
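
A sketch of both styles, assuming the fixture's test data path is configured and using an illustrative file name:

```java
import com.intellij.testFramework.fixtures.BasePlatformTestCase;

public class MyHighlightingTest extends BasePlatformTestCase {

  public void testHighlightingSingleOperation() {
    // Loads "Highlighting.xml" from testdata, runs the highlighting pass, and
    // compares against the expected-markup tags embedded in the file.
    // Arguments: checkWarnings, checkInfos, checkWeakWarnings.
    myFixture.testHighlighting(true, false, false, "Highlighting.xml");
  }

  public void testHighlightingInTwoSteps() {
    // Equivalent, but loading and checking are separate steps.
    myFixture.configureByFile("Highlighting.xml");
    myFixture.checkHighlighting(true, false, false);
  }
}
```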
## Inspections
If you need to test inspections (rather than generic highlighting provided by a highlighting lexer or annotator), you need to enable inspections that you're testing.
This is done by calling `CodeInsightTestFixture.enableInspections()` in the setup method of your test or directly in a test method, before the call to `checkHighlighting()`.
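
For example (a sketch; `MyInspection` stands in for the inspection you are testing, and the file name is illustrative):

```java
import com.intellij.testFramework.fixtures.BasePlatformTestCase;

public class MyInspectionTest extends BasePlatformTestCase {

  public void testInspection() {
    // Enable the inspection under test before running the highlighting pass.
    // MyInspection is a placeholder for your own inspection class.
    myFixture.enableInspections(new MyInspection());
    // The file contains <warning>...</warning> markup for the expected problems.
    myFixture.configureByFile("InspectionTestData.java");
    myFixture.checkHighlighting(true, false, false);
  }
}
```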

View File

@ -27,7 +27,7 @@ We recommend working with real components instead.
Please see the dedicated [intellij-ui-test-robot](https://github.com/JetBrains/intellij-ui-test-robot) library.
It is fully integrated with the Gradle-based setup via the `runIdeForUiTests` task.
Please do not use <path>platform/testGuiFramework</path>; it is reserved for internal use.
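
For illustration, a minimal sketch of connecting to an IDE instance started with `runIdeForUiTests`; the default port and the XPath locator follow the library's documentation and are assumptions here:

```java
import com.intellij.remoterobot.RemoteRobot;
import com.intellij.remoterobot.fixtures.ComponentFixture;

import static com.intellij.remoterobot.search.locators.Locators.byXpath;

public class WelcomeFrameTest {

  public void testWelcomeFrameIsShown() {
    // The robot-server plugin inside the running IDE listens on this port by default.
    RemoteRobot robot = new RemoteRobot("http://127.0.0.1:8082");

    // Locate a component by XPath; this locator targets the Welcome screen.
    ComponentFixture welcomeFrame =
        robot.find(ComponentFixture.class, byXpath("//div[@class='FlatWelcomeFrame']"));
  }
}
```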
## Topics

View File

@ -14,4 +14,6 @@ However, for many common cases, the framework provides helper methods that can m
To compare the results of executing the action with the expected results, you can use the `checkResultByFile()` method.
The file with the expected results can also contain [markup](test_project_and_testdata_directories.md#special-markup) to specify the expected caret position or selected text range.
Suppose you're testing an action that modifies multiple files (a project-wide refactoring, for example). In that case, you can compare an entire directory under the test project with the expected output using `PlatformTestUtil.assertDirectoriesEqual()`.
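
A sketch of a single-file action test (the action ID and file names are illustrative):

```java
import com.intellij.openapi.actionSystem.IdeActions;
import com.intellij.testFramework.fixtures.BasePlatformTestCase;

public class MyActionTest extends BasePlatformTestCase {

  public void testCommentLine() {
    // Load the input file (it may contain <caret> markup) into the in-memory editor.
    myFixture.configureByFile("before.java");
    // Execute a platform action by its ID.
    myFixture.performEditorAction(IdeActions.ACTION_COMMENT_LINE);
    // Compare the editor contents with the expected-output file.
    myFixture.checkResultByFile("after.java");
  }
}
```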
See [Useful Classes](testing_faq.md#useful-classes) for other common testing functionality.