Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
1. Introduction
1.1. About the CLI Guide
This guide is for engineers, consultants, and others who want to use the Migration Toolkit for Applications ({ProductShortName}) to migrate Java applications or other components. It describes how to install and run the CLI, review the generated reports, and take advantage of additional features.
1.2. About the Migration Toolkit for Applications
What is the Migration Toolkit for Applications?
The Migration Toolkit for Applications ({ProductShortName}) is an extensible and customizable rule-based tool that simplifies the migration and modernization of Java applications.
{ProductShortName} examines application artifacts, including project source directories and application archives, and then produces an HTML report highlighting areas needing changes. {ProductShortName} supports many migration paths including the following examples:
- Upgrading to the latest release of Red Hat JBoss Enterprise Application Platform
- Migrating from Oracle WebLogic or IBM WebSphere Application Server to Red Hat JBoss Enterprise Application Platform
- Containerizing applications and making them cloud-ready
- Migrating from Java Spring Boot to Quarkus
- Updating from Oracle JDK to OpenJDK
For more information about use cases and migration paths, see the {ProductShortName} for developers web page.
How does the Migration Toolkit for Applications simplify migration?
The Migration Toolkit for Applications looks for common resources and known trouble spots when migrating applications. It provides a high-level view of the technologies used by the application.
{ProductShortName} generates a detailed report evaluating a migration or modernization path. This report can help you to estimate the effort required for large-scale projects and to reduce the work involved.
How do I learn more?
See the Introduction to the Migration Toolkit for Applications to learn more about the features, supported configurations, system requirements, and available tools in the Migration Toolkit for Applications.
1.3. About the CLI
The CLI is a command-line tool in the Migration Toolkit for Applications that allows users to assess and prioritize migration and modernization efforts for applications. It provides numerous reports that highlight the analysis without the overhead of the other tools. The CLI includes a wide array of customization options, and allows you to finely tune {ProductShortName} analysis options or integrate with external automation tools.
2. Installing and Running the CLI
2.1. Installing the CLI
You can install the CLI on Linux, Windows, or macOS operating systems.
Prerequisites
- Java Development Kit (JDK) installed. {ProductShortName} supports the following JDKs:
  - OpenJDK 1.8
  - OpenJDK 11
  - Oracle JDK 1.8
  - Oracle JDK 11
- 8 GB RAM
- If you are installing on macOS, the value of maxproc must be 2048 or greater (see the verification commands after this list).
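A quick way to check these prerequisites from a terminal is sketched below; this is a convenience check rather than part of the official procedure, and it assumes java and, on macOS, sysctl are available on your PATH:

$ java -version          # confirm a supported JDK (OpenJDK or Oracle JDK 1.8 or 11) is reported
$ sysctl kern.maxproc    # macOS only: confirm the reported value is 2048 or greater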
- Navigate to the {ProductShortName} Download page and download the Migration Toolkit CLI file.
- Extract the .zip file to a directory of your choice (an example is shown after these steps).

  Note: If you are installing on a Windows operating system:

  - Extract the .zip file to a folder named mta to avoid a Path too long error.
  - If a Confirm file replace window is displayed during extraction, click Yes to all.

  The installation directory is referred to as <{ProductShortName}_HOME> in this guide.
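For example, on Linux or macOS the extraction might look like the following; the archive file name is a placeholder for the file you actually downloaded, and the directory containing the extracted bin/ folder is what this guide refers to as <{ProductShortName}_HOME>:

$ unzip migration-toolkit-cli.zip -d mta    # the archive name here is illustrative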
2.2. Running the CLI
You can run {ProductShortName} against your application.
- Open a terminal and navigate to the <{ProductShortName}_HOME>/bin/ directory.
- Execute the mta-cli script, or mta-cli.bat for Windows, and specify the appropriate arguments:

  $ ./mta-cli --input /path/to/jee-example-app-1.0.0.ear \
    --output /path/to/output --source weblogic --target eap:6 \
    --packages com.acme org.apache

  - --input: The application to be evaluated.
  - --output: The output directory for the generated reports.
  - --source: The source technology for the application migration.
  - --target: The target technology for the application migration.
  - --packages: The packages to be evaluated. This argument is highly recommended to improve performance.
- Access the report.
2.2.1. {ProductShortName} command examples
Running {ProductShortName} on an application archive
The following command analyzes the com.acme and org.apache packages of the jee-example-app-1.0.0.ear example EAR archive for migrating from JBoss EAP 5 to JBoss EAP 7:
$ <{ProductShortName}_HOME>/bin/mta-cli \
--input /path/to/jee-example-app-1.0.0.ear \
--output /path/to/report-output/ --source eap:5 --target eap:7 \
--packages com.acme org.apache
Running {ProductShortName} on source code
The following command analyzes the org.jboss.seam packages of the seam-booking-5.2 example source code for migrating to JBoss EAP 6:
$ <{ProductShortName}_HOME>/bin/mta-cli --sourceMode --input /path/to/seam-booking-5.2/ \
--output /path/to/report-output/ --target eap:6 --packages org.jboss.seam
Running cloud-readiness rules
The following command analyzes the com.acme and org.apache packages of the jee-example-app-1.0.0.ear example EAR archive for migrating to JBoss EAP 7. It also evaluates for cloud readiness:
$ <{ProductShortName}_HOME>/bin/mta-cli --input /path/to/jee-example-app-1.0.0.ear \
--output /path/to/report-output/ \
--target eap:7 --target cloud-readiness --packages com.acme org.apache
Overriding {ProductShortName} properties
To override the default Fernflower decompiler, pass the -Dwindup.decompiler argument on the command line. For example, to use the Procyon decompiler, use the following syntax:
$ <{ProductShortName}_HOME>/bin/mta-cli -Dwindup.decompiler=procyon --input \
<INPUT_ARCHIVE_OR_DIRECTORY> --output <OUTPUT_REPORT_DIRECTORY> \
--target <TARGET_TECHNOLOGY> --packages <PACKAGE_1> <PACKAGE_2>
2.2.2. Refactoring source code using OpenRewrite
OpenRewrite uses recipes to automate large-scale, distributed source code refactoring. It can be used with {ProductShortName}.

The first OpenRewrite recipe, which is shipped with {ProductShortName} 5.2.1, renames imported javax packages to their jakarta equivalents.

You can use OpenRewrite in the {ProductShortName} CLI to prepare Java applications for migration.
- Add the --openrewrite argument when you execute the mta-cli script and specify the other arguments appropriately:

  ./mta-cli --openrewrite "-DactiveRecipes=<recipe name>" --input /path/to/source/project --goal dryRun

  - --openrewrite: Flag specifying to run an OpenRewrite migration instead of an {ProductShortName} analysis.
  - "-DactiveRecipes=org.<recipe name>": The OpenRewrite recipe to apply to the input project. JavaxtoJakarta is the default shipped recipe, but you can add your own recipe to the shipped rewrite.yml file and run that recipe instead. The rewrite.yml file is located in the rules/openrewrite/ folder in the uncompressed {ProductShortName} distribution.
  - --input: The application to be evaluated.
  - --goal: Optional. The OpenRewrite Maven goal to run. Parameters:
    - dryRun: The script returns a list of proposed changes, creates a patch file that can be applied later, and returns the following message: Run 'mvn rewrite:run' to apply the recipes.
    - run: The script makes all changes automatically without listing them first.

    If you do not enter a parameter for --goal, the script executes it as dryRun.

- To apply the recipe, run the command using the run parameter for the --goal argument:

  ./mta-cli --openrewrite "-DactiveRecipes=<recipe name>" --input /path/to/source/project --goal run
Important

OpenRewrite recipe support is provided as Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview features support scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features.
2.2.3. {ProductShortName} CLI Bash completion
The {ProductShortName} CLI provides an option to enable Bash completion for Linux systems, allowing the {ProductShortName} command-line arguments to be autocompleted by pressing the Tab key when entering commands. For instance, when Bash completion is enabled, entering the following displays a list of available arguments:
$ <{ProductShortName}_HOME>/bin/mta-cli [TAB]
Enabling Bash completion
To enable Bash completion for the current shell, execute the following command:
$ source <{ProductShortName}_HOME>/bash-completion/mta-cli
Enabling persistent Bash completion
The following commands allow Bash completion to persist across restarts:
- To enable Bash completion for a specific user across system restarts, include the following line in that user’s ~/.bashrc file:

  source <{ProductShortName}_HOME>/bash-completion/mta-cli

- To enable Bash completion for all users across system restarts, copy the Migration Toolkit for Applications CLI Bash completion file to the /etc/bash_completion.d/ directory as the root user:

  # cp <{ProductShortName}_HOME>/bash-completion/mta-cli /etc/bash_completion.d/
2.2.4. Accessing {ProductShortName} help
To see the complete list of available arguments for the mta-cli command, open a terminal, navigate to the <{ProductShortName}_HOME> directory, and execute the following command:
$ <{ProductShortName}_HOME>/bin/mta-cli --help
2.3. Accessing reports
When you run the Migration Toolkit for Applications, a report is generated in the <OUTPUT_REPORT_DIRECTORY> that you specify using the --output argument in the command line.

The output directory contains the following files and subdirectories:

<OUTPUT_REPORT_DIRECTORY>/
├── index.html          // Landing page for the report
├── <EXPORT_FILE>.csv   // Optional export of data in CSV format
├── archives/           // Archives extracted from the application
├── mavenized/          // Optional Maven project structure
├── reports/            // Generated HTML reports
├── stats/              // Performance statistics
- Obtain the path of the index.html file of your report from the output that appears after you run {ProductShortName}:

  Report created: <OUTPUT_REPORT_DIRECTORY>/index.html
  Access it at this URL: file:///<OUTPUT_REPORT_DIRECTORY>/index.html

- Open the index.html file by using a browser (terminal alternatives are shown after these steps).

  The generated report is displayed.
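If you prefer to open the report from a terminal, the standard desktop openers can be used; this is a convenience, not part of the documented procedure:

$ xdg-open <OUTPUT_REPORT_DIRECTORY>/index.html    # Linux
$ open <OUTPUT_REPORT_DIRECTORY>/index.html        # macOS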
3. Reviewing the reports
The report examples shown in the following sections are a result of analyzing the com.acme and org.apache packages in the jee-example-app-1.0.0.ear example application, which is located in the {ProductShortName} GitHub source repository.
The report was generated using the following command.
$ <{ProductShortName}_HOME>/bin/mta-cli --input /home/username/mta-cli-source/test-files/jee-example-app-1.0.0.ear/ --output /home/username/mta-cli-reports/jee-example-app-1.0.0.ear-report --target eap:6 --packages com.acme org.apache
Use a browser to open the index.html file located in the report output directory. This opens a landing page that lists the applications that were processed. Each row contains a high-level overview of the story points, number of incidents, and technologies encountered in that application.
Note

The incidents and estimated story points change as new rules are added to {ProductShortName}. The values here may not match what you see when you test this application.
The following table lists all of the reports and pages that can be accessed from this main {ProductShortName} landing page. Click the name of the application, jee-example-app-1.0.0.ear, to view the application report.
Page | How to Access
---|---
Application | Click the name of the application.
Technologies report | Click the Technologies link at the top of the page.
Dependencies graph report | Click the Dependencies Graph link at the top of the page.
Archives shared by multiple applications | Click the Archives shared by multiple applications link. Note that this link is only available when there are shared archives across multiple applications.
Rule providers execution overview | Click the Rule providers execution overview link at the bottom of the page.
Used FreeMarker functions and directives | Click the FreeMarker methods link at the bottom of the page.
Send feedback form | Click the Send Feedback link in the top navigation bar to open a form that allows you to submit feedback to the {ProductShortName} team.
Note that if an application shares archives with other analyzed applications, you will see a breakdown of how many story points are from shared archives and how many are unique to this application.
Information about the archives that are shared among applications can be found in the Archives Shared by Multiple Applications reports.
3.1. Application report
3.1.1. Dashboard
Access this report from the report landing page by clicking on the application name in the Application List.
The dashboard gives an overview of the entire application migration effort. It summarizes:
- The incidents and story points by category
- The incidents and story points by level of effort of the suggested changes
- The incidents by package
The top navigation bar lists the various reports that contain additional details about the migration of this application. Note that only those reports that are applicable to the current application will be available.
Report | Description
---|---
Issues | Provides a concise summary of all issues that require attention.
Application details | Provides a detailed overview of all resources found within the application that may need attention during the migration.
Technologies | Displays all embedded libraries grouped by functionality, allowing you to quickly view the technologies used in each application.
Dependencies graph | Displays a graph of all Java-packaged dependencies found within the analyzed applications. This graph also demonstrates the relations of each dependency, allowing you to view nested and multiple dependencies.
Dependencies | Displays all Java-packaged dependencies found within the application.
Unparsable | Shows all files that {ProductShortName} could not parse in the expected format. For instance, a file with a
Remote services | Displays all remote services references that were found within the application.
EJBs | Contains a list of EJBs found within the application.
JBPM | Contains all of the JBPM-related resources that were discovered during analysis.
JPA | Contains details on all JPA-related resources that were found in the application.
Hibernate | Contains details on all Hibernate-related resources that were found in the application.
Server resources | Displays all server resources (for example, JNDI resources) in the input application.
Spring Beans | Contains a list of Spring Beans found during the analysis.
Hard-coded IP addresses | Provides a list of all hard-coded IP addresses that were found in the application.
Ignored files | Lists the files found in the application that, based on certain rules and {ProductShortName} configuration, were not processed. See the
About | Describes the current version of {ProductShortName} and provides helpful links for further assistance.
3.1.2. Issues report
Access this report from the dashboard by clicking the Issues link.
This report includes details about every issue that was raised by the selected migration paths. The following information is provided for each issue encountered:
- A title to summarize the issue.
- The total number of incidents, or times the issue was encountered.
- The rule story points to resolve a single instance of the issue.
- The estimated level of effort to resolve the issue.
- The total story points to resolve every instance encountered. This is calculated by multiplying the number of incidents found by the story points per incident.
Each reported issue may be expanded, by clicking on the title, to obtain additional details. The following information is provided.
- A list of files where the incidents occurred, along with the number of incidents within each file. If the file is a Java source file, then clicking the filename will direct you to the corresponding Source report.
- A detailed description of the issue. This description outlines the problem, provides any known solutions, and references supporting documentation regarding either the issue or resolution.
- A direct link, entitled Show Rule, to the rule that generated the issue.
Issues are sorted into four categories by default. Information on these categories is available in Task category.
3.1.3. Application details report
Access this report from the dashboard by clicking the Application Details link.
The report lists the story points, the Java incidents by package, and a count of the occurrences of the technologies found in the application. Next is a display of application messages generated during the migration process. Finally, there is a breakdown of this information for each archive analyzed during the process.
Expand the jee-example-app-1.0.0.ear/jee-example-services.jar to review the story points, Java incidents by package, and a count of the occurrences of the technologies found in this archive. This summary begins with a total of the story points assigned to its migration, followed by a table detailing the changes required for each file in the archive. The report contains the following columns.
Column Name | Description
---|---
Name | The name of the file being analyzed.
Technology | The type of file being analyzed, for example, Decompiled Java File or Properties.
Issues | Warnings about areas of code that need review or changes.
Story Points | Level of effort required to migrate the file.
Note that if an archive is duplicated several times in an application, it will be listed just once in the report and will be tagged with [Included multiple times].
The story points for archives that are duplicated within an application will be counted only once in the total story point count for that application.
3.1.4. Technologies report
Access this report from the dashboard by clicking the Technologies link.
The report lists the occurrences of technologies, grouped by function, in the analyzed application. It is an overview of the technologies found in the application, and is designed to assist users in quickly understanding each application’s purpose.
The image below shows the technologies used in the jee-example-app.
3.1.5. Application dependencies graph report
The analyzed applications' dependencies are shown in this report, which is accessible from the dashboard by clicking the Dependencies Graph link.
It includes a list of all WARs and JARs, including third-party JARs, and graphs the relations between each of the included files. Each circle in the graph represents a unique dependency defined in the application.
The below image shows the dependencies used in the jee-example-app, with the selected application in the center of the graph.
The dependencies graph may be adjusted by using any of the following.
- Clicking a dependency will display the name of the application in the upper-left corner. While selected, the dependency will have a shaded circle identifying it, as seen in the center of the above image.
- Clicking and dragging a circle will reposition it. Releasing the mouse will fix the dependency to the cursor’s location.
- Clicking on a fixed dependency will release it, returning the dependency to its default distance from the application.
- Double-clicking anywhere will return the entire graph to the default state.
- Clicking on any item in the legend will enable or disable all items of the selected type. For instance, selecting the embedded WARs icon will disable all embedded WARs if these are enabled, and will enable these dependencies if they are disabled.
3.1.6. Source report
The analysis of the jee-example-services.jar lists the files in the JAR and the warnings and story points assigned to each one. Notice that the com.acme.anvil.listener.AnvilWebLifecycleListener file, at the time of this test, has 22 warnings and is assigned 16 story points. Click the file link to see the detail.
- The Information section provides a summary of the story points.
- This is followed by the file source code. Warnings appear in the file at the point where migration is required.
In this example, warnings appear at various import statements, declarations, and method calls. Each warning describes the issue and the action that should be taken.
3.2. Technologies report
Access this report from the report landing page by clicking the Technologies link.
This report provides an aggregate listing of the technologies used, grouped by function, for the analyzed applications. It shows how the technologies are distributed, and is typically reviewed after analyzing a large number of applications to group the applications and identify patterns. It also shows the size, number of libraries, and story point totals of each application.
Clicking any of the headers, such as Markup, sorts the results in descending order. Selecting the same header again re-sorts the results in ascending order. The currently selected header is identified in bold, next to a directional arrow indicating the direction of the sort.
3.3. Dependencies graph report
Access this report from the report landing page by clicking the Dependencies Graph link.
It includes a list of all WARs and JARs, and graphs the relations between each of the included files. Each circle in the graph represents a unique dependency defined in the application. If a file is included as a dependency in multiple applications, these are linked in the graph.
In the below image we can see two distinct groups. On the left half we see a single WAR that defines several JARs as dependencies. On the right half we see the same dependencies used by multiple WARs, one of which is the selected overlord-commons-auth-2.0.11.Final.jar.
The dependencies graph may be adjusted by using any of the following.
- Clicking a dependency will display the name of the application in the upper-left corner. While selected, the dependency will have a shaded circle identifying it, as seen in the center of the above image.
- Clicking and dragging a circle will reposition it. Releasing the mouse will fix the dependency to the cursor’s location.
- Clicking on a fixed dependency will release it, returning the dependency to its default distance from the application.
- Double-clicking anywhere will return the entire graph to the default state.
- Clicking on any item in the legend will enable or disable all items of the selected type. For instance, selecting the embedded WARs icon will disable all embedded WARs if these are enabled, and will enable these dependencies if they are disabled.
3.4. Archives shared by multiple applications
Access these reports from the report landing page by clicking the Archives shared by multiple applications link. Note that this link is only available if there are applicable shared archives.
This allows you to view the detailed reports for all archives that are shared across multiple applications.
3.5. Rule providers execution overview
Access this report from the report landing page by clicking the Rule providers execution overview link.
This report provides the list of rules that executed when running the {ProductShortName} migration command against the application.
3.6. Used FreeMarker functions and directives
Access this report from the report landing page by clicking the FreeMarker methods link.
This report lists all the registered functions and directives that were used to build the report. It is useful for debugging purposes or if you plan to build your own custom report.
3.7. Send feedback form
Access this feedback form from the report landing page by clicking the Send feedback link.
This form allows you to rate the product, talk about what you like, and make suggestions for improvements.
4. Exporting the report in CSV format
{ProductShortName} provides the ability to export the report data, including the classifications and hints, to a flat file on your local file system. The export function currently supports the CSV file format, which presents the report data as fields separated by commas (,).
The CSV file can be imported and manipulated by spreadsheet software such as Microsoft Excel, OpenOffice Calc, or LibreOffice Calc. Spreadsheet software provides the ability to sort, analyze, evaluate, and manage the result data from an {ProductShortName} report.
4.1. Exporting the report
To export the report as a CSV file, run {ProductShortName} with the --exportCSV argument. A CSV file is created in the directory specified by the --output argument for each application analyzed.
All discovered issues, spanning all the analyzed applications, are included in the AllIssues.csv file that is exported to the root directory of the report.
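For example, the analysis command used earlier in this guide can be extended with --exportCSV so that the CSV files are written alongside the HTML report; the paths are illustrative:

$ <{ProductShortName}_HOME>/bin/mta-cli --input /path/to/jee-example-app-1.0.0.ear \
 --output /path/to/report-output/ --target eap:7 \
 --packages com.acme org.apache --exportCSV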
Accessing the report from the application report
If you have exported the CSV report, you can download all of the CSV issues in the Issues Report. To download these issues, click Download All Issues CSV in the Issues Report.
4.2. Importing the CSV file into a spreadsheet program
- Launch the spreadsheet software, for example, Microsoft Excel.
- Choose File → Open.
- Browse to the exported CSV file and select it.
- The data is now ready to analyze in the spreadsheet software.
4.3. About the CSV data structure
The CSV formatted output file contains the following data fields:
- Rule Id: The ID of the rule that generated the given item.
- Problem type: hint or classification.
- Title: The title of the classification or hint. This field summarizes the issue for the given item.
- Description: The detailed description of the issue for the given item.
- Links: URLs that provide additional information about the issue. A link consists of two attributes: the link and a description of the link.
- Application: The name of the application for which this item was generated.
- File Name: The name of the file for the given item.
- File Path: The file path for the given item.
- Line: The line number of the file for the given item.
- Story points: The number of story points, which represent the level of effort, assigned to the given item.
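As an illustration only, the first line of an exported CSV file is a header row naming these fields; the ordering shown below simply follows the list above and is an assumption, not a guaranteed format:

Rule Id,Problem type,Title,Description,Links,Application,File Name,File Path,Line,Story points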
5. Mavenizing your application
{ProductShortName} provides the ability to generate an Apache Maven project structure based on the application provided. This will create a directory structure with the necessary Maven Project Object Model (POM) files that specify the appropriate dependencies.
Note that this feature is not intended to create a final solution for your project. It is meant to give you a starting point and identify the necessary dependencies and APIs for your application. Your project may require further customization.
5.1. Generating the Maven project structure
You can generate a Maven project structure for the provided application by passing in the --mavenize flag when executing {ProductShortName}.

The following example runs {ProductShortName} using the jee-example-app-1.0.0.ear test application:

$ <{ProductShortName}_HOME>/bin/mta-cli --input /path/to/jee-example-app-1.0.0.ear --output /path/to/output --target eap:6 --packages com.acme org.apache --mavenize

This generates the Maven project structure in the /path/to/output/mavenized directory.
Note

You can only use the --mavenize option when providing a compiled application for the --input argument. This feature is not available when running {ProductShortName} against source code.
You can also use the --mavenizeGroupId option to specify the <groupId> to be used for the POM files. If unspecified, {ProductShortName} will attempt to identify an appropriate <groupId> for the application, or will default to com.mycompany.mavenized.
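For example, the command above can be extended to set the group ID explicitly; the com.acme value is illustrative:

$ <{ProductShortName}_HOME>/bin/mta-cli --input /path/to/jee-example-app-1.0.0.ear \
 --output /path/to/output --target eap:6 --packages com.acme org.apache \
 --mavenize --mavenizeGroupId com.acme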
5.2. Reviewing the Maven project structure
The /path/to/output/mavenized/<APPLICATION_NAME>/ directory contains the following items:
- A root POM file. This is the pom.xml file at the top-level directory.
- A BOM file. This is the POM file in the directory ending with -bom.
- One or more application POM files. Each module has its POM file in a directory named after the archive.
The example jee-example-app-1.0.0.ear application is an EAR archive that contains a WAR and several JARs. There is a separate directory created for each of these artifacts. Below is the Maven project structure created for this application.
/path/to/output/mavenized/jee-example-app/
jee-example-app-bom/pom.xml
jee-example-app-ear/pom.xml
jee-example-services2-jar/pom.xml
jee-example-services-jar/pom.xml
jee-example-web-war/pom.xml
pom.xml
Review each of the generated files and customize as appropriate for your project. To learn more about Maven POM files, see the Introduction to the POM section of the Apache Maven documentation.
Root POM file
The root POM file for the jee-example-app-1.0.0.ear application can be found at /path/to/output/mavenized/jee-example-app/pom.xml. This file identifies the directories for all of the project modules.

The following modules are listed in the root POM for the example jee-example-app-1.0.0.ear application.
<modules>
<module>jee-example-app-bom</module>
<module>jee-example-services2-jar</module>
<module>jee-example-services-jar</module>
<module>jee-example-web-war</module>
<module>jee-example-app-ear</module>
</modules>
Note

Be sure to reorder the list of modules if necessary so that they are listed in an appropriate build order for your project.
The root POM is also configured to use the Red Hat JBoss Enterprise Application Platform Maven repository to download project dependencies.
BOM file
The Bill of Materials (BOM) file is generated in the directory ending in -bom. For the example jee-example-app-1.0.0.ear application, the BOM file can be found at /path/to/output/mavenized/jee-example-app/jee-example-app-bom/pom.xml. The purpose of this BOM is to have the versions of third-party dependencies used by the project defined in one place. For more information on using a BOM, see the Introduction to the dependency mechanism section of the Apache Maven documentation.

The following dependencies are listed in the BOM for the example jee-example-app-1.0.0.ear application.
<dependencyManagement>
<dependencies>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.6</version>
</dependency>
<dependency>
<groupId>commons-lang</groupId>
<artifactId>commons-lang</artifactId>
<version>2.5</version>
</dependency>
</dependencies>
</dependencyManagement>
Application POM files

Each application module that can be mavenized has a separate directory containing its POM file. The directory name contains the name of the archive and ends in a -jar, -war, or -ear suffix, depending on the archive type.
Each application POM file lists that module’s dependencies, including:
- Third-party libraries
- Java EE APIs
- Application submodules
For example, the POM file for the jee-example-app-1.0.0.ear EAR, /path/to/output/mavenized/jee-example-app/jee-example-app-ear/pom.xml, lists the following dependencies.
<dependencies>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.6</version>
</dependency>
<dependency>
<groupId>org.jboss.seam</groupId>
<artifactId>jee-example-web-war</artifactId>
<version>1.0</version>
<type>war</type>
</dependency>
<dependency>
<groupId>org.jboss.seam</groupId>
<artifactId>jee-example-services-jar</artifactId>
<version>1.0</version>
</dependency>
<dependency>
<groupId>org.jboss.seam</groupId>
<artifactId>jee-example-services2-jar</artifactId>
<version>1.0</version>
</dependency>
</dependencies>
6. Optimizing {ProductShortName} performance
{ProductShortName} performance depends on a number of factors, including hardware configuration, the number and types of files in the application, the size and number of applications to be evaluated, and whether the application contains source or compiled code. For example, a file that is larger than 10 MB may need a lot of time to process.
In general, {ProductShortName} spends about 40% of the time decompiling classes, 40% of the time executing rules, and the remainder of the time processing other tasks and generating reports. This section describes what you can do to improve the performance of {ProductShortName}.
6.1. Deploying and running the application
Try these suggestions first before upgrading hardware.
- If possible, execute {ProductShortName} against the source code instead of the archives. This eliminates the need to decompile additional JARs and archives.
- Specify a space-delimited list of the packages to be evaluated by {ProductShortName} using the --packages argument on the <{ProductShortName}_HOME>/bin/mta-cli command line. If you omit this argument, {ProductShortName} will decompile everything, which has a significant impact on performance.
- Specify the --excludeTags argument where possible to exclude rules with those tags from processing.
- Avoid decompiling and analyzing any unnecessary packages and files, such as proprietary packages or included dependencies.
- Increase your ulimit when analyzing large applications (a sketch is shown after this list). See this Red Hat Knowledgebase article for instructions on how to do this for Red Hat Enterprise Linux.
- If you have access to a server that has better resources than your laptop or desktop machine, you may want to consider running {ProductShortName} on that server.
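A minimal sketch of raising the open-file limit for the current shell session is shown below; the value is illustrative, and the Knowledgebase article referenced above describes how to make the change persistent on Red Hat Enterprise Linux:

$ ulimit -n          # display the current open-file limit
$ ulimit -n 8192     # raise the limit for this shell session only (value is illustrative)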
6.2. Upgrading hardware
If the application and command-line suggestions above do not improve performance, you may need to upgrade your hardware.
- If you have access to a server that has better resources than your laptop or desktop machine, you may want to consider running {ProductShortName} on that server.
- Very large applications that require decompilation have large memory requirements. 8 GB RAM is recommended. This allows 3 - 4 GB RAM for use by the JVM.
- An upgrade from a single-core or dual-core to a quad-core CPU provides better performance.
- Disk space and fragmentation can impact performance. A fast disk, especially a solid-state drive (SSD), with greater than 4 GB of defragmented disk space should improve performance.
6.3. Configuring {ProductShortName} to exclude packages and files
6.3.1. Excluding packages
You can exclude packages during decompilation and analysis to increase performance. References to these packages remain in the application’s source code but excluding them avoids the decompilation and analysis of proprietary classes.
Any packages that match the defined value are excluded. For example, you can use com.acme to exclude both com.acme.example and com.acme.roadrunner.
You can exclude packages by either of the following methods:
- Using the --excludePackages argument (an example is shown after this list).
- Specifying the packages in a file contained within one of the ignored locations. Each package should be included on a separate line, and the file must end in .package-ignore.txt. For example, see <{ProductShortName}_HOME>/ignore/proprietary.package-ignore.txt.
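For example, the following command reuses paths from earlier examples and excludes a hypothetical proprietary subpackage from evaluation:

$ <{ProductShortName}_HOME>/bin/mta-cli --input /path/to/jee-example-app-1.0.0.ear \
 --output /path/to/report-output/ --target eap:7 \
 --packages com.acme --excludePackages com.acme.proprietary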
6.3.2. Excluding files
{ProductShortName} can exclude specific files, such as included libraries or dependencies, during scanning and report generation. Excluded files are defined in a file with the .{LC_PSN}-ignore.txt or .windup-ignore.txt extension within one of the ignored locations.

These files contain a regex string detailing the name to exclude, with one file listed per line. For example, you can exclude the library ant.jar and any Java source files beginning with Example with a file containing the following:
.*ant.jar
.*Example.*\.java
6.3.3. Searching locations for exclusion
{ProductShortName} searches the following locations:
- ~/.mta/ignore/
- ~/.windup/ignore/
- <{ProductShortName}_HOME>/ignore/
- Any files and folders specified by the --userIgnorePath argument
Each of these files must conform to the rules specified for excluding packages or files, depending on the type of content to be excluded.
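As a sketch, a per-user package exclusion file could be created as follows; the package names are hypothetical, and the directory and file name suffix come from the rules described above:

$ mkdir -p ~/.mta/ignore
$ cat > ~/.mta/ignore/custom.package-ignore.txt <<'EOF'
com.example.vendor
com.example.generated
EOF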
Appendix A: Reference material
A.1. About {ProductShortName} command-line arguments
The following is a detailed description of the available {ProductShortName} command line arguments.
Note

To run the {ProductShortName} command without prompting, for example when executing from a script, you must use the following arguments:
Argument | Description
---|---
--additionalClassPath | A space-delimited list of additional JAR files or directories to add to the class path so that they are available for decompilation or other analysis.
--addonDir | Add the specified directory as a custom add-on repository.
--batchMode | Flag to specify that {ProductShortName} should be run in a non-interactive mode without prompting for confirmation. This mode takes the default values for any parameters not passed in to the command line.
--debug | Flag to run {ProductShortName} in debug mode.
--disableTattletale | Flag to disable generation of the Tattletale report. If both
--discoverPackages | Flag to list all available packages in the input binary application.
--enableClassNotFoundAnalysis | Flag to enable analysis of Java files that are not available on the class path. This should not be used if some classes will be unavailable at analysis time.
--enableCompatibleFilesReport | Flag to enable generation of the Compatible Files report. Due to processing all files without found issues, this report may take a long time for large applications.
--enableTattletale | Flag to enable generation of a Tattletale report for each application. This option is enabled by default when
--excludePackages | A space-delimited list of packages to exclude from evaluation. For example, entering
--excludeTags | A space-delimited list of tags to exclude. When specified, rules with these tags will not be processed. To see the full list of tags, use the --listTags argument.
--explodedApp | Flag to indicate that the provided input directory contains source files for a single application.
--exportCSV | Flag to export the report data to a CSV file on your local file system. {ProductShortName} creates the file in the directory specified by the --output argument.
--help | Display the {ProductShortName} help message.
--immutableAddonDir | Add the specified directory as a custom read-only add-on repository.
--includeTags | A space-delimited list of tags to use. When specified, only rules with these tags will be processed. To see the full list of tags, use the --listTags argument.
--input | A space-delimited list of the path to the file or directory containing one or more applications to be analyzed. This argument is required.
--install | Specify add-ons to install. The syntax is
--keepWorkDirs | Flag to instruct {ProductShortName} to not delete temporary working files, such as the graph database and extracted archive files. This is useful for debugging purposes.
--list | Flag to list installed add-ons.
--listSourceTechnologies | Flag to list all available source technologies.
--listTags | Flag to list all available tags.
--listTargetTechnologies | Flag to list all available target technologies.
--mavenize | Flag to create a Maven project directory structure based on the structure and content of the application. This creates the necessary Maven Project Object Model (POM) files that specify the appropriate dependencies.
--mavenizeGroupId | When used with the --mavenize argument, specify the <groupId> to be used for the POM files. If unspecified, {ProductShortName} will attempt to identify an appropriate <groupId> for the application, or will default to com.mycompany.mavenized.
--online | Flag to allow network access for features that require it. Currently only validating XML schemas against external resources relies on Internet access. Note that this comes with a performance penalty.
--output | Specify the path to the directory to output the report information generated by {ProductShortName}.
--overwrite | Flag to force delete the existing output directory specified by the --output argument.
--packages | A space-delimited list of the packages to be evaluated by {ProductShortName}. It is highly recommended to use this argument.
--remove | Remove the specified add-ons. The syntax is
--skipReports | Flag to indicate that HTML reports should not be generated. A common use of this argument is when exporting report data to a CSV file using the --exportCSV argument.
--source | A space-delimited list of one or more source technologies, servers, platforms, or frameworks to migrate from. This argument, in conjunction with the --target argument, helps to determine which rulesets are used.
--sourceMode | Flag to indicate that the application to be evaluated contains source files rather than compiled binaries.
--target | A space-delimited list of one or more target technologies, servers, platforms, or frameworks to migrate to. This argument, in conjunction with the --source argument, helps to determine which rulesets are used.
--userIgnorePath | Specify a location, in addition to ~/.mta/ignore/, ~/.windup/ignore/, and <{ProductShortName}_HOME>/ignore/, for {ProductShortName} to identify files that should be ignored.
--userLabelsDirectory | Specify a location for {ProductShortName} to look for custom Target Runtime Labels. The value can be a directory containing label files or a single label file. The Target Runtime Label files must use either the
--userRulesDirectory | Specify a location, in addition to
--version | Display the {ProductShortName} version.
A.1.1. Specifying the input
A space-delimited list of the path to the file or directory containing one or more applications to be analyzed. This argument is required.
--input <INPUT_ARCHIVE_OR_DIRECTORY> [...]
Depending on whether the input provided to the --input argument is a file or a directory, it is evaluated as follows, depending on the additional arguments provided (an example with multiple inputs follows the tables below).
Directory

--explodedApp | --sourceMode | Neither Argument
---|---|---
The directory is evaluated as a single application. | The directory is evaluated as a single application. | Each subdirectory is evaluated as an application.

File

--explodedApp | --sourceMode | Neither Argument
---|---|---
Argument is ignored; the file is evaluated as a single application. | The file is evaluated as a compressed project. | The file is evaluated as a single application.
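Because --input accepts a space-delimited list, several applications can be analyzed in a single run; the paths below are illustrative:

$ <{ProductShortName}_HOME>/bin/mta-cli \
 --input /path/to/app-one.ear /path/to/app-two.war \
 --output /path/to/report-output/ --target eap:7 --packages com.acme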
A.1.2. Specifying the output directory
Specify the path to the directory to output the report information generated by {ProductShortName}.
--output <OUTPUT_REPORT_DIRECTORY>
- If omitted, the report will be generated in an <INPUT_ARCHIVE_OR_DIRECTORY>.report directory.
- If the output directory exists, you will be prompted with the following (with a default of N):

  Overwrite all contents of "/home/username/<OUTPUT_REPORT_DIRECTORY>" (anything already in the directory will be deleted)? [y,N]

However, if you specify the --overwrite argument, {ProductShortName} will proceed to delete and recreate the directory. See the description of this argument for more information.
A.1.3. Setting the source technology
A space-delimited list of one or more source technologies, servers, platforms, or frameworks to migrate from. This argument, in conjunction with the --target argument, helps to determine which rulesets are used. Use the --listSourceTechnologies argument to list all available sources.

--source <SOURCE_1> <SOURCE_2>

The --source argument now provides version support, which follows the Maven version range syntax. This instructs {ProductShortName} to only run the rulesets matching the specified versions. For example, --source eap:5.
Warning

When migrating to JBoss EAP, be sure to specify the version. See Supported migration paths in Introduction to the Migration Toolkit for Applications for the appropriate JBoss EAP version.
A.1.4. Setting the target technology
A space-delimited list of one or more target technologies, servers, platforms, or frameworks to migrate to. This argument, in conjunction with the --source argument, helps to determine which rulesets are used. If you do not specify this option, you are prompted to select a target. Use the --listTargetTechnologies argument to list all available targets.

--target <TARGET_1> <TARGET_2>

The --target argument now provides version support, which follows the Maven version range syntax. This instructs {ProductShortName} to only run the rulesets matching the specified versions. For example, --target eap:7.
Warning

When migrating to JBoss EAP, be sure to specify the version in the target. See Supported migration paths in Introduction to the Migration Toolkit for Applications for the appropriate JBoss EAP version.
A.1.5. Selecting packages
A space-delimited list of the packages to be evaluated by {ProductShortName}. It is highly recommended to use this argument.
--packages <PACKAGE_1> <PACKAGE_2> <PACKAGE_N>
- In most cases, you are interested only in evaluating custom application class packages and not standard Java EE or third-party packages. The <PACKAGE_N> argument is a package prefix; all subpackages will be scanned. For example, to scan the packages com.mycustomapp and com.myotherapp, use the --packages com.mycustomapp com.myotherapp argument on the command line.
- While you can provide package names for standard Java EE or third-party software like org.apache, it is usually best not to include them, as they should not impact the migration effort.
Warning

If you omit the --packages argument, {ProductShortName} decompiles and evaluates every package in the application, which has a significant impact on performance.
A.2. Supported technology tags
The following technology tags are supported in {ProductShortName} 5.2.1:
-
0MQ Client (embedded)
-
3scale (embedded)
-
Acegi Security (embedded)
-
AcrIS Security (embedded)
-
ActiveMQ (embedded)
-
Airframe (embedded)
-
Airlift Log Manager (embedded)
-
AKKA JTA (embedded)
-
Akka Testkit (embedded)
-
Amazon SQS Client (embedded)
-
AMQP Client (embedded)
-
Anakia (embedded)
-
AngularFaces (embedded)
-
ANTLR StringTemplate (embedded)
-
AOP Alliance (embedded)
-
Apache Accumulo Client
-
Apache Aries (embedded)
-
Apache Axis (embedded)
-
Apache Axis2 (embedded)
-
Apache Camel (embedded)
-
Apache Commons JCS (embedded)
-
Apache Commons Logging (embedded)
-
Apache Commons Validator (embedded)
-
Apache CXF (embedded)
-
Apache Flume (embedded)
-
Apache Geronimo (embedded)
-
Apache Hadoop (embedded)
-
Apache HBase Client
-
Apache Ignite (embedded)
-
Apache Karaf (embedded)
-
Apache Log4J (embedded)
-
Apache Mahout (embedded)
-
Apache Meecrowave JTA (embedded)
-
Apache Santuario (embedded)
-
Apache Shiro (embedded)
-
Apache Sirona JTA (embedded)
-
Apache Struts (embedded)
-
Apache Synapse (embedded)
-
Apache Tapestry (embedded)
-
Apache Wicket (embedded)
-
Apiman (embedded)
-
Arquillian (embedded)
-
AspectJ (embedded)
-
Atomikos JTA (embedded)
-
Avalon Logkit (embedded)
-
Axion Driver
-
BabbageFaces (embedded)
-
Bean Validation
-
BeanInject (embedded)
-
Blaze (embedded)
-
Blitz4j (embedded)
-
BootsFaces (embedded)
-
Bouncy Castle (embedded)
-
ButterFaces (embedded)
-
Cache API (embedded)
-
Cactus (embedded)
-
Camel Messaging Client (embedded)
-
Camunda (embedded)
-
Cassandra Client
-
CDI
-
CDI (embedded)
-
Cfg Engine (embedded)
-
Chunk Templates (embedded)
-
Cloudera (embedded)
-
Clustering EJB
-
Clustering Web Session
-
Coherence (embedded)
-
Common Annotations
-
Composite Logging JCL (embedded)
-
Concordion (embedded)
-
Cucumber (embedded)
-
Dagger (embedded)
-
DbUnit (embedded)
-
Debugging Support for Other Languages
-
Decompiled Java File
-
Demoiselle JTA (embedded)
-
Derby Driver
-
Drools (embedded)
-
DVSL (embedded)
-
Dynacache (embedded)
-
EAR
-
Easy Rules (embedded)
-
EasyMock (embedded)
-
EclipseLink (embedded)
-
EJB
-
EJB XML
-
Ehcache (embedded)
-
Elasticsearch (embedded)
-
Enterprise Web Services
-
Entity Bean
-
EtlUnit (embedded)
-
Everit JTA (embedded)
-
Evo JTA (embedded)
-
FreeMarker (embedded)
-
Geronimo JTA (embedded)
-
GFC Logging (embedded)
-
GIN (embedded)
-
GlassFish JTA (embedded)
-
Google Guice (embedded)
-
Grails (embedded)
-
Grapht DI (embedded)
-
Guava Testing (embedded)
-
GWT (embedded)
-
H2 Driver
-
Hamcrest (embedded)
-
Handlebars (embedded)
-
HavaRunner (embedded)
-
Hazelcast (embedded)
-
Hdiv (embedded)
-
Hibernate (embedded)
-
Hibernate Cfg
-
Hibernate Mapping
-
Hibernate OGM (embedded)
-
HighFaces (embedded)
-
HornetQ Client (embedded)
-
HSQLDB Driver
-
HTTP Client (embedded)
-
HttpUnit (embedded)
-
ICEfaces (embedded)
-
Ickenham (embedded)
-
Ignite JTA (embedded)
-
Ikasan (embedded)
-
iLog (embedded)
-
Infinispan (embedded)
-
Injekt for Kotlin (embedded)
-
Iroh (embedded)
-
Istio (embedded)
-
JACC
-
Jamon (embedded)
-
Jasypt (embedded)
-
Java EE
-
Java EE Batch
-
Java EE Batch API
-
Java EE JSON-P
-
Java EE Security
-
Java Source
-
Java Transaction API (embedded)
-
JavaMail
-
Javax Inject (embedded)
-
JAX-RPC
-
JAX-RS
-
JAX-WS
-
JAXB
-
JAXR
-
JayWire (embedded)
-
JBehave (embedded)
-
JBoss Cache (embedded)
-
JBoss EJB XML
-
JBoss logging (embedded)
-
JBoss Transactions (embedded)
-
JBoss Web XML
-
JBossMQ Client (embedded)
-
JBPM (embedded)
-
JCA
-
Jcabi Log (embedded)
-
JCache (embedded)
-
JCunit (embedded)
-
JDBC (embedded)
-
JDBC datasources
-
JDBC XA datasources
-
Jersey (embedded)
-
Jetbrick Template (embedded)
-
Jetty (embedded)
-
JFreeChart (embedded)
-
JFunk (embedded)
-
JMock (embedded)
-
JMockit (embedded)
-
JMS
-
JMS Connection Factory
-
JMS Queue
-
JMS Topic
-
JMustache (embedded)
-
JPA
-
JPA entities
-
JPA Matchers (embedded)
-
JPA named queries
-
JPA XML
-
JSecurity (embedded)
-
JSF (embedded)
-
JSF Page
-
JSilver (embedded)
-
JSON-B
-
JSP Page
-
JSTL (embedded)
-
JTA
-
Jukito (embedded)
-
JUnit (embedded)
-
Ka DI (embedded)
-
Keyczar (embedded)
-
Kibana (embedded)
-
KLogger (embedded)
-
Kodein (embedded)
-
Kotlin Logging (embedded)
-
KouInject (embedded)
-
KumuluzEE JTA (embedded)
-
LevelDB Client
-
Liferay (embedded)
-
LiferayFaces (embedded)
-
Lift JTA (embedded)
-
Log.io (embedded)
-
Log4s (embedded)
-
Logback (embedded)
-
Logging to file system
-
Logging to Socket Handler
-
Logging Utils (embedded)
-
Logstash (embedded)
-
Lumberjack (embedded)
-
Macros (embedded)
-
Manifest
-
MapR (embedded)
-
Maven XML
-
MckoiSQLDB Driver
-
MEJB
-
Memcached client (embedded)
-
Message (MDB)
-
Micro DI (embedded)
-
Microsoft SQL Driver
-
MinLog (embedded)
-
Mixer (embedded)
-
Mockito (embedded)
-
MongoDB Client
-
Monolog (embedded)
-
Morphia
-
MRules (embedded)
-
Mule (embedded)
-
Mule Functional Test Framework (embedded)
-
MultithreadedTC (embedded)
-
Mycontainer JTA (embedded)
-
MyFaces (embedded)
-
MySQL Driver
-
Narayana Arjuna (embedded)
-
Needle (embedded)
-
Neo4j (embedded)
-
NLOG4J (embedded)
-
Nuxeo JTA/JCA (embedded)
-
OACC (embedded)
-
OAUTH (embedded)
-
OCPsoft Logging Utils (embedded)
-
OmniFaces (embedded)
-
OpenFaces (embedded)
-
OpenPojo (embedded)
-
OpenSAML (embedded)
-
OpenWS (embedded)
-
OPS4J Pax Logging Service (embedded)
-
Oracle ADF (embedded)
-
Oracle DB Driver
-
Oracle Forms (embedded)
-
Orion EJB XML
-
Orion Web XML
-
Oscache (embedded)
-
OTR4J (embedded)
-
OW2 JTA (embedded)
-
OW2 Log Util (embedded)
-
OWASP CSRF Guard (embedded)
-
OWASP ESAPI (embedded)
-
Peaberry (embedded)
-
Pega (embedded)
-
Persistence units
-
Petals EIP (embedded)
-
PicketBox (embedded)
-
PicketLink (embedded)
-
PicoContainer (embedded)
-
Play (embedded)
-
Play Test (embedded)
-
Plexus Container (embedded)
-
Polyforms DI (embedded)
-
Portlet (embedded)
-
PostgreSQL Driver
-
PowerMock (embedded)
-
PrimeFaces (embedded)
-
Properties
-
Qpid Client (embedded)
-
RabbitMQ Client (embedded)
-
RandomizedTesting Runner (embedded)
-
Resource Adapter (embedded)
-
REST Assured (embedded)
-
Restito (embedded)
-
RichFaces (embedded)
-
RMI
-
RocketMQ Client (embedded)
-
Rythm Template Engine (embedded)
-
SAML (embedded)
-
Scalate (embedded)
-
Scaldi (embedded)
-
Scribe (embedded)
-
Seam (embedded)
-
ServiceMix (embedded)
-
Servlet
-
ShiftOne (embedded)
-
Silk DI (embedded)
-
SLF4J (embedded)
-
Snippetory Template Engine (embedded)
-
SNMP4J (embedded)
-
SOAP (SAAJ)
-
Spark (embedded)
-
Specsy (embedded)
-
Spock (embedded)
-
Spring (embedded)
-
Spring Batch (embedded)
-
Spring Boot (embedded)
-
Spring Data (embedded)
-
Spring Integration (embedded)
-
Spring Messaging Client (embedded)
-
Spring MVC (embedded)
-
Spring Security (embedded)
-
Spring Test (embedded)
-
Spring Transactions (embedded)
-
Spring XML
-
SQLite Driver
-
SSL (embedded)
-
Stateful (SFSB)
-
Stateless (SLSB)
-
Sticky Configured (embedded)
-
Stripes (embedded)
-
SubCut (embedded)
-
Swagger (embedded)
-
SwarmCache (embedded)
-
SwitchYard (embedded)
-
Syringe (embedded)
-
Talend ESB (embedded)
-
Teiid (embedded)
-
TensorFlow (embedded)
-
Test Interface (embedded)
-
TestNG (embedded)
-
Thymeleaf (embedded)
-
TieFaces (embedded)
-
tinylog (embedded)
-
Tomcat (embedded)
-
Tornado Inject (embedded)
-
Trimou (embedded)
-
Trunk JGuard (embedded)
-
Twirl (embedded)
-
Twitter Util Logging (embedded)
-
UberFire (embedded)
-
Unirest (embedded)
-
Unitils (embedded)
-
Vaadin (embedded)
-
Velocity (embedded)
-
Vlad (embedded)
-
Water Template Engine (embedded)
-
Web XML
-
WebLogic Web XML
-
Webmacro (embedded)
-
WebSphere EJB
-
WebSphere EJB Ext
-
WebSphere Web XML
-
WebSphere WS Binding
-
WebSphere WS Extension
-
Weka (embedded)
-
Weld (embedded)
-
WF Core JTA (embedded)
-
Winter (embedded)
-
WS Metadata
-
WSDL (embedded)
-
WSO2 (embedded)
-
WSS4J (embedded)
-
XACML (embedded)
-
XFire (embedded)
-
XMLUnit (embedded)
-
Zbus Client (embedded)
A.3. About rule story points
A.3.1. What are story points?
Story points are an abstract metric commonly used in Agile software development to estimate the level of effort needed to implement a feature or change.
The Migration Toolkit for Applications uses story points to express the level of effort needed to migrate particular application constructs, and the application as a whole. It does not necessarily translate to man-hours, but the value should be consistent across tasks.
A.3.2. How story points are estimated in rules
Estimating the level of effort for the story points for a rule can be tricky. The following are the general guidelines {ProductShortName} uses when estimating the level of effort required for a rule.
Level of Effort | Story Points | Description
---|---|---
Information | 0 | An informational warning with very low or no priority for migration.
Trivial | 1 | The migration is a trivial change or a simple library swap with no or minimal API changes.
Complex | 3 | The changes required for the migration task are complex, but have a documented solution.
Redesign | 5 | The migration task requires a redesign or a complete library change, with significant API changes.
Rearchitecture | 7 | The migration requires a complete rearchitecture of the component or subsystem.
Unknown | 13 | The migration solution is not known and may need a complete rewrite.
A.3.3. Task category
In addition to the level of effort, you can categorize migration tasks to indicate the severity of the task. The following categories are used to group issues to help prioritize the migration effort.
- Mandatory: The task must be completed for a successful migration. If the changes are not made, the resulting application will not build or run successfully. Examples include replacement of proprietary APIs that are not supported in the target platform.
- Optional: If the migration task is not completed, the application should work, but the results may not be optimal. If the change is not made at the time of migration, it is recommended to put it on the schedule soon after your migration is completed. An example of this would be the upgrade of EJB 2.x code to EJB 3.
- Potential: The task should be examined during the migration process, but there is not enough detailed information to determine whether the task is mandatory for the migration to succeed. An example of this would be migrating a third-party proprietary type where there is no directly compatible type.
- Information: The task is included to inform you of the existence of certain files. These may need to be examined or modified as part of the modernization effort, but changes are typically not required. An example of this would be the presence of a logging dependency or a Maven pom.xml.
For more information on categorizing tasks, see Using custom rule categories.
A.4. Additional Resources
A.4.1. Getting involved
To help the Migration Toolkit for Applications cover most application constructs and server configurations, including yours, you can help with any of the following items.
- Send an email to jboss-migration-feedback@redhat.com and let us know what {ProductShortName} migration rules should cover.
- Provide example applications to test migration rules.
- Identify application components and problem areas that may be difficult to migrate:
  - Write a short description of these problem migration areas.
  - Write a brief overview describing how to solve the problem migration areas.
- Try Migration Toolkit for Applications on your application. Be sure to report any issues you encounter.
- Contribute to the Migration Toolkit for Applications rules repository:
  - Write a Migration Toolkit for Applications rule to identify or automate a migration process.
  - Create a test for the new rule.
  - Details are provided in the Rules Development Guide.
- Contribute to the project source code:
  - Create a core rule.
  - Improve {ProductShortName} performance or efficiency.
  - See the Core Development Guide for information about how to configure your environment and set up the project.

Any level of involvement is greatly appreciated!
A.4.2. Resources
- {ProductShortName} forums: https://developer.jboss.org/en/windup
- {ProductShortName} Jira issue trackers:
  - Core {ProductShortName}: https://issues.redhat.com/projects/WINDUP
  - {ProductShortName} Rules: https://issues.redhat.com/projects/WINDUPRULE
- {ProductShortName} mailing list: jboss-migration-feedback@redhat.com
- {ProductShortName} IRC channel: server FreeNode (irc.freenode.net), channel #windup (transcripts).
A.4.3. Reporting issues
{ProductShortName} uses Jira as its issue tracking system. If you encounter an issue executing {ProductShortName}, submit a Jira issue.
Revised on 2021-12-06 11:34:10 +0200