diff --git a/docs/about/installation/installation-on-macos.md b/docs/about/installation/installation-on-macos.md index 100e5f0d..84174c4a 100644 --- a/docs/about/installation/installation-on-macos.md +++ b/docs/about/installation/installation-on-macos.md @@ -13,8 +13,8 @@ To install OpenMS on macOS, run the following steps: 2.Double click on the downloaded file. It will start to open the `OpenMS--macOS.pkg` installer file. ```{image} /_images/installations/macos/Warning-openMS-3.3.0-macOS-Silicon.pkg-Not-Opened.png -:alt: macOS warning message when opening OpenMS--macOS.pkg -:width: 500px +:alt: macOS warning message when opening OpenMS--macOS.pkg +:width: 500px ``` **Why This Warning Appears:** @@ -24,39 +24,39 @@ The warning indicates that the OpenMS installer hasn't been notarized or recogni Bypassing Gatekeeper to Install OpenMS on macOS -A. Bypassing Gatekeeper Using System Settings +A. Bypassing Gatekeeper Using System Settings -1. Open **System Settings**. -2. Navigate to **Privacy & Security**. -3. Under the **Security** section, locate the message about the blocked application. -4. Click the **Open Anyway** button . +1. Open **System Settings**. +2. Navigate to **Privacy & Security**. +3. Under the **Security** section, locate the message about the blocked application. +4. Click the **Open Anyway** button . ```{image} /_images/installations/macos/Bypassing-Gatekeeper-to-Install-OpenMS-on-macOS.png -:alt: Bypassing Gatekeeper on macOS -:width: 500px +:alt: Bypassing Gatekeeper on macOS +:width: 500px ``` -B. Bypassing Gatekeeper Using Command-Line +B. Bypassing Gatekeeper Using Command-Line -For users comfortable with the command line, you can bypass the security warning using Terminal: +For users comfortable with the command line, you can bypass the security warning using Terminal: -1. Open **Terminal**. -2. Navigate to the directory containing the installer using the `cd` command: +1. Open **Terminal**. +2. Navigate to the directory containing the installer using the `cd` command: ```bash cd /path/to/installer ``` -3. Run the following command to remove the quarantine attribute: +3. Run the following command to remove the quarantine attribute: ```bash xattr -d com.apple.quarantine OpenMS--macOS.pkg ``` -By following these steps, you’re instructing macOS to trust the OpenMS installer and allow its execution. Ensure that you’ve downloaded the installer from a **trusted source** (i.e., build archive of the Unversity of Tübingen or OpenMS' GitHub artifacts) before proceeding. +By following these steps, you’re instructing macOS to trust the OpenMS installer and allow its execution. Ensure that you’ve downloaded the installer from a **trusted source** (i.e., build archive of the Unversity of Tübingen or OpenMS' GitHub artifacts) before proceeding. -4. Install OpenMS +4. Install OpenMS ```{image} /_images/installations/macos/Installation-successful-message.png -:alt: OpenMS installation started on macOS -:width: 500px +:alt: OpenMS installation started on macOS +:width: 500px ``` 5. Agree to the license agreements. @@ -69,8 +69,8 @@ By following these steps, you’re instructing macOS to trust the OpenMS install 6. Installation Confirmation ```{image} /_images/installations/macos/Installation-successful-message.png -:alt: OpenMS installation successful -:width: 500px +:alt: OpenMS installation successful +:width: 500px ``` To use {term}`TOPP` as regular app in the shell, add the following lines to the `~/.profile` file. 
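A minimal sketch of what such `~/.profile` lines could look like, assuming the installer placed OpenMS under `/Applications/OpenMS-<version>` with `bin/` and `share/OpenMS/` subfolders; `<version>` is a placeholder and the exact layout should be checked against your installation:

```bash
# Hypothetical paths -- replace <version> with your installed OpenMS version
export PATH="$PATH:/Applications/OpenMS-<version>/bin"

# Point OPENMS_DATA_PATH at the shipped share/OpenMS directory so the tools find their data files
export OPENMS_DATA_PATH="/Applications/OpenMS-<version>/share/OpenMS"
```

Open a new shell (or run `source ~/.profile`) afterwards so the changes take effect.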
@@ -78,7 +78,7 @@ To use {term}`TOPP` as regular app in the shell, add the following lines to the :::{warning} Known Installer Issues 1. Nothing happens when you click OpenMS apps or the validity of the developer could not be confirmed. - + This usually means the OpenMS software lands in quarantine even after installation of the `.pkg`. This was more common with our older `.dmg` image but may still happen. Since macOS Catalina (maybe also Mojave) all apps and executables have to be officially notarized by Apple but we currently do not have the resources for a streamlined notarization workflow. @@ -91,7 +91,7 @@ To use {term}`TOPP` as regular app in the shell, add the following lines to the cd /Applications/OpenMS- sudo xattr -r -d com.apple.quarantine * ``` - + 2. Bug with running Java based thirdparty tools like {term}`MSGFPlusAdapter` and {term}`LuciphorAdapter` from within **TOPPAS.app** If you face issues while running Java based thirdparty tools from within {term}`TOPPAS.app `, run the {term}`TOPPAS.app ` diff --git a/docs/about/installation/installation-on-windows.md b/docs/about/installation/installation-on-windows.md index 21aef1e8..468bcee6 100644 --- a/docs/about/installation/installation-on-windows.md +++ b/docs/about/installation/installation-on-windows.md @@ -5,13 +5,13 @@ Windows To Install the binary package of OpenMS & {term}`TOPP`: -1. Download the installer `OpenMS--Win64.exe` from the [archive](https://abibuilder.cs.uni-tuebingen.de/archive/openms/OpenMSInstaller/release/latest/) +1. Download the installer `OpenMS--Win64.exe` from the [archive](https://abibuilder.cs.uni-tuebingen.de/archive/openms/OpenMSInstaller/release/latest/) 2. Execute the installer under the user account that later runs OpenMS and follow its instructions. - + You may see a Windows Defender Warning, since our installer is not digitally signed. - + Click on "More Info", and then "Run anyways". - + ![](/_images/installations/win/smartscreen.gif) When asked for an admin authentication, please enter the credentials (it is not advised to directly invoke the installer using an admin account). @@ -47,4 +47,4 @@ The windows installer works with Windows 10 and 11 (older versions might still w 4. For Win8 or later, Windows will report an error while installing `.net4` as it's mostly included. But it might occur that `.net3.5` does not get properly installed during the process. - Fix is to enable the .NET Framework 3.5 yourself through Control Panel. See this [Microsoft help page](https://docs.microsoft.com/en-us/dotnet/framework/install/dotnet-35-windows).aspx#ControlPanel) for detailed information. Even if this step fails, this does not affect the functionality of OpenMS, except for the executability of included third party tools (ProteoWizard). + Fix is to enable the .NET Framework 3.5 yourself through Control Panel. See this [Microsoft help page](https://docs.microsoft.com/en-us/dotnet/framework/install/dotnet-35-windows#enable-the-net-framework-35-in-control-panel) for detailed information. Even if this step fails, this does not affect the functionality of OpenMS, except for the executability of included third party tools (ProteoWizard). 
diff --git a/docs/about/installation/installation-with-conda.md b/docs/about/installation/installation-with-conda.md index 5c5b837a..39a5bf9d 100644 --- a/docs/about/installation/installation-with-conda.md +++ b/docs/about/installation/installation-with-conda.md @@ -81,7 +81,7 @@ obtain release versions (`bioconda` channel) and nightly versions (`openms` chan :::{tab-item} openms :sync: openms - ```{code-block} bash + ```{code-block} bash conda install openms ``` ::: @@ -119,7 +119,7 @@ obtain release versions (`bioconda` channel) and nightly versions (`openms` chan :::{tab-item} openms :sync: openms - ```{code-block} bash + ```{code-block} bash conda install -c openms openms ``` ::: diff --git a/docs/about/learning/background.md b/docs/about/learning/background.md index 6b2ee385..a9290f3b 100644 --- a/docs/about/learning/background.md +++ b/docs/about/learning/background.md @@ -1,9 +1,9 @@ Learning ======== -Proteomics and metabolomics focus on complex interactions within biological systems; the former is centered on proteins while the latter is based on metabolites. To understand these interactions, we need to accurately identify the different biological components involved. +Proteomics and metabolomics focus on complex interactions within biological systems; the former is centered on proteins while the latter is based on metabolites. To understand these interactions, we need to accurately identify the different biological components involved. -{term}`Liquid chromatography` (LC) and {term}`mass spectrometry` (MS) are the analytical techniques used to isolate and identify biological components in proteomics and metabolomics. LC-MS data can be difficult to analyze manually given its amount and complexity. Therefore, we need specialized software that can analyze high-throughput LC-MS data quickly and accurately. +{term}`Liquid chromatography` (LC) and {term}`mass spectrometry` (MS) are the analytical techniques used to isolate and identify biological components in proteomics and metabolomics. LC-MS data can be difficult to analyze manually given its amount and complexity. Therefore, we need specialized software that can analyze high-throughput LC-MS data quickly and accurately. **Why use OpenMS** @@ -13,7 +13,7 @@ OpenMS is an open-source, C++ framework for analyzing large volumes of mass spec OpenMS in recent times has been expanded to support a wide variety of mass spectrometry experiments. To design your analysis solution, [contact the OpenMS team](https://openms.de/communication/) today. ``` -To use OpenMS effectively, an understanding of chromatography and mass spectrometry is required as many of the algorithms are based on these techniques. +To use OpenMS effectively, an understanding of chromatography and mass spectrometry is required as many of the algorithms are based on these techniques. This section provides a detailed explanation on LC and MS, and how they are combined to identify and quantify substances. diff --git a/docs/about/learning/lc-chromatography.md b/docs/about/learning/lc-chromatography.md index c8cab830..9f726655 100644 --- a/docs/about/learning/lc-chromatography.md +++ b/docs/about/learning/lc-chromatography.md @@ -1,33 +1,33 @@ Liquid chromatography (LC) ========================== -Chromatography is a technique used by life scientists to separate molecules based on a specific physical or chemical property. +Chromatography is a technique used by life scientists to separate molecules based on a specific physical or chemical property.

**Video**

For more information on chromatography, [view this video](https://timms.uni-tuebingen.de/tp/UT_20141028_001_cpm_0001?t=210.00).
-There are many types of chromatography, but this section focuses on LC as it is widely used in proteomics and metabolomics. +There are many types of chromatography, but this section focuses on LC as it is widely used in proteomics and metabolomics. LC separates molecules based on a specific physical or chemical property by mixing a sample containing the molecules of interest (otherwise known as **analytes**) in a liquid solution. ## Key components of LC An LC setup is made up of the following components: -- **A liquid solution**, known as the **mobile phase**, containing the analytes. +- **A liquid solution**, known as the **mobile phase**, containing the analytes. - **A pump** which transports the liquid solution. - **A stationary phase** which is a solid, homogeneous substance. -- **A column** that contains the stationary phase. +- **A column** that contains the stationary phase. - **A detector** that plots the time it takes for the analyte to escape the column (retention time) against the analyte's concentration. This plot is called a **chromatogram**. -Refer to the image below for a diagrammatic representation of an LC setup. +Refer to the image below for a diagrammatic representation of an LC setup. ![schematic illustration of an LC setup](/_images/introduction/lc-components.png) ## How does LC work? -The liquid solution containing the analytes is pumped through a column that is attached to the stationary phase. Analytes are separated based on how strongly they interact with each phase. Some analytes will interact strongly with the mobile phase while others will be strongly attracted to the stationary phase, depending on their physical or chemical properties. The stronger an analyte's attraction is to the mobile phase, the faster it will leave the column. The time it takes for an analyte to escape from the column is called the analyte's {term}`retention time`. As a result of their differing attractions to the mobile and stationary phases, different analytes will have different retention times, which is how separation occurs. +The liquid solution containing the analytes is pumped through a column that is attached to the stationary phase. Analytes are separated based on how strongly they interact with each phase. Some analytes will interact strongly with the mobile phase while others will be strongly attracted to the stationary phase, depending on their physical or chemical properties. The stronger an analyte's attraction is to the mobile phase, the faster it will leave the column. The time it takes for an analyte to escape from the column is called the analyte's {term}`retention time`. As a result of their differing attractions to the mobile and stationary phases, different analytes will have different retention times, which is how separation occurs. The retention times for each analyte are recorded by a detector. The most common detector used is the mass spectrometer, which we discuss later. However, other detection methods exist, such as: - Light absorption (photometric detector) @@ -36,7 +36,7 @@ The retention times for each analyte are recorded by a detector. The most common ## High performance liquid chromatography (HPLC) -HPLC is the most commonly used technique for separating proteins and metabolites. In HPLC, a high-pressured pump is used to transport a liquid (solvent) containing the molecules of interest through a thin capillary column. The stationary phase is ‘packed’ into the column. +HPLC is the most commonly used technique for separating proteins and metabolites. 
In HPLC, a high-pressure pump is used to transport a liquid (solvent) containing the molecules of interest through a thin capillary column. The stationary phase is ‘packed’ into the column.

**Video**

diff --git a/docs/getting-started/nextflow-get-started.md b/docs/getting-started/nextflow-get-started.md index 328b1769..220ebc48 100644 --- a/docs/getting-started/nextflow-get-started.md +++ b/docs/getting-started/nextflow-get-started.md @@ -1,15 +1,15 @@ NextFlow ======== -Nextflow is a workflow system for creating scalable, portable, and reproducible workflows. -It is based on the dataflow programming model, which greatly simplifies the writing of parallel and distributed pipelines, -allowing you to focus on the flow of data and computation. -Nextflow can deploy workflows on a variety of execution platforms, including your local machine, HPC schedulers, -AWS Batch, Azure Batch, Google Cloud Batch, and Kubernetes. +Nextflow is a workflow system for creating scalable, portable, and reproducible workflows. +It is based on the dataflow programming model, which greatly simplifies the writing of parallel and distributed pipelines, +allowing you to focus on the flow of data and computation. +Nextflow can deploy workflows on a variety of execution platforms, including your local machine, HPC schedulers, +AWS Batch, Azure Batch, Google Cloud Batch, and Kubernetes. Additionally, it supports many ways to manage your software dependencies, including Conda, Spack, Docker, Podman, Singularity, and more.[^1] ## Installation -Click [here](https://www.nextflow.io/docs/latest/getstarted.html#installation) to install Nextflow only. +Click [here](https://www.nextflow.io/docs/latest/getstarted.html#installation) to install Nextflow only. Alternatively click [here](https://nf-co.re/docs/usage/installation) to follow the instructions for using nf-core curated pipelines in Nextflow. ## Ready-made OpenMS nextflow workflows diff --git a/docs/getting-started/visualize-with-openms.md b/docs/getting-started/visualize-with-openms.md index a7ec89cd..9f35c4ef 100644 --- a/docs/getting-started/visualize-with-openms.md +++ b/docs/getting-started/visualize-with-openms.md @@ -91,7 +91,7 @@ To filter your data: 1. Select a layer from the **Layers window**. - ![display selected layer](/_images/tutorials/toppview/layers-window.png) + ![display selected layer](/_images/tutorials/toppview/layers-window.png) 2. Open the **Data filters window** by clicking the tab at the bottom of the screen. diff --git a/docs/manual/contribute.md b/docs/manual/contribute.md index 4f2497ae..7e32fe28 100644 --- a/docs/manual/contribute.md +++ b/docs/manual/contribute.md @@ -3,7 +3,7 @@ Contribute ## Reporting Bugs and Issues -A list of known issues in the current OpenMS release can be found [here](https://abibuilder.cs.uni-tuebingen.de/archive/openms/Documentation/nightly/html/known_dev_bugs.html). +A list of known issues in the current OpenMS release can be found [here](https://abibuilder.cs.uni-tuebingen.de/archive/openms/Documentation/nightly/html/known_dev_bugs.html). Please check if your OpenMS version matches the current version and if the bug has already been reported. In order to report a new bug, please create a [GitHub issue](manual/contribute.md#Write and Label GitHub Issues) or [contact us](/about/communication.md). diff --git a/docs/manual/contribute/openms-git-workflow.md b/docs/manual/contribute/openms-git-workflow.md index b322c5e7..31e44137 100644 --- a/docs/manual/contribute/openms-git-workflow.md +++ b/docs/manual/contribute/openms-git-workflow.md @@ -13,7 +13,7 @@ Naming conventions for the following apply: * A **local repository** is the repository that lies on your hard drive after cloning. 
* A **remote repository** is a repository on a git server such as GitHub. -* A **fork** is a copy of a repository. Forking a repository allows you to freely experiment with changes without +* A **fork** is a copy of a repository. Forking a repository allows you to freely experiment with changes without affecting the original project. * **Origin** refers to a remote repository that you have forked. Call this repository `https://github.com/_YOURUSERNAME_/OpenMS`. * **Upstream** refers to the original remote OpenMS repository. Call this repository `https://github.com/OpenMS/OpenMS`. @@ -66,7 +66,7 @@ $ git remote -v ``` -Fetch changes and new branches from your fork (`origin`) as well as from the central, upstream OpenMS repository by +Fetch changes and new branches from your fork (`origin`) as well as from the central, upstream OpenMS repository by executing: ```bash @@ -161,7 +161,7 @@ The above commands: 2. Applies all commits that have been integrated into `develop`. 3. Reapplies your commits on top of the commits integrated into `develop`. -For more information, refer to a [visual explanation of rebasing](http://git-scm.com/book/en/v2/Git-Branching-Rebasing). +For more information, refer to a [visual explanation of rebasing](https://git-scm.com/book/en/v2/Git-Branching-Rebasing). ```{tip} Do not rebase published branches (e.g. branches that are part of a pull request). If you created a pull request, you diff --git a/docs/manual/contribute/pull-request-checklist.md b/docs/manual/contribute/pull-request-checklist.md index 3731713d..e65479d2 100644 --- a/docs/manual/contribute/pull-request-checklist.md +++ b/docs/manual/contribute/pull-request-checklist.md @@ -4,7 +4,7 @@ Pull Request Checklist Before opening a pull request, check the following: 1. **Does the code build?** - Execute `make` (or your build system's equivalent, e.g., `cmake --build . --target ALL_BUILD --config Release` on + Execute `make` (or your build system's equivalent, e.g., `cmake --build . --target ALL_BUILD --config Release` on Windows). 2. **Do all tests pass?** To check if all tests have passed, execute `ctest`. @@ -15,7 +15,7 @@ Before opening a pull request, check the following: It is also recommended to document non-public members and methods. 4. **Does the code introduce changes to the API?** If the code introduces changes to the API, make sure that the documentation is up-to-date and that the Python bindings - (pyOpenMS) still work. For each change in the C++ API, make a change in the Python API wrapper via + (pyOpenMS) still work. For each change in the C++ API, make a change in the Python API wrapper via the `pyOpenMS/pxds/` files. 5. **Have you completed regression testing?** Make sure that you include a test in the test suite for: @@ -43,5 +43,5 @@ Make sure to: * **Describe what you have changed in your pull request.** When opening the pull request, give a detailed overview of what has changed and why. Include a clear rationale for the - changes and add benchmark data if available. See [this request](https://github.com/bitly/dablooms/pull/19) for + changes and add benchmark data if available. See [this request](https://github.com/bitly/dablooms/pull/19) for an example. diff --git a/docs/manual/develop.md b/docs/manual/develop.md index 67b8298c..eb9ca2c8 100644 --- a/docs/manual/develop.md +++ b/docs/manual/develop.md @@ -27,7 +27,7 @@ code base. ### Development model -OpenMS follows the [Gitflow development workflow](http://nvie.com/posts/a-successful-git-branching-model/). 
+OpenMS follows the [Gitflow development workflow](https://nvie.com/posts/a-successful-git-branching-model/). Every contributor is encouraged to create their own fork (even if they are eligible to push directly to OpenMS). To create a fork: diff --git a/docs/manual/develop/adding-new-tool-to-topp.md b/docs/manual/develop/adding-new-tool-to-topp.md index eeee0cb0..a7cf6052 100644 --- a/docs/manual/develop/adding-new-tool-to-topp.md +++ b/docs/manual/develop/adding-new-tool-to-topp.md @@ -70,7 +70,7 @@ otherwise. - Create a resource file: Create a text file named `.rc` (e.g. TOPPView.rc) Insert the following line: 101 ICON "TOPPView.ico" , replacing TOPPView with your binary name. Put both files in `OpenMS/source/APPLICATIONS/TOPP/` - (similar files for other TOPP tools already present). Re-run cmake and re-link your TOPP tool. + (similar files for other TOPP tools already present). Re-run cmake and re-link your TOPP tool. Voila. You should have an iconized TOPP tool. diff --git a/docs/manual/develop/developer-faq.md b/docs/manual/develop/developer-faq.md index b9cdd854..7e93e924 100644 --- a/docs/manual/develop/developer-faq.md +++ b/docs/manual/develop/developer-faq.md @@ -14,7 +14,7 @@ The following section provides general information to new contributors. * Read the [OpenMS Coding Conventions](https://abibuilder.cs.uni-tuebingen.de/archive/openms/Documentation/nightly/html/coding_conventions.html) * Read the [OpenMS User Tutorial](/tutorials/knime-user-tutorial.md). * Create a GitHub account. -* Subscribe to the [open-ms-general](https://sourceforge.net/projects/open-ms/lists/open-ms-general) +* Subscribe to the [open-ms-general](https://sourceforge.net/projects/open-ms/lists/open-ms-general) or [contact-us](/about/communication.md). ### I have written a class for OpenMS. What should I do? @@ -31,7 +31,7 @@ Please open a pull request and follow the [pull request guidelines](/manual/cont ### Can I use QT designer to create GUI widgets? Yes. Create a class called `Widget: Create .ui-File` with `QT designer` and store it as `Widget.ui.`, add the class to -`sources.cmake`. From the .ui-File the file `include/OpenMS/VISUAL/UIC/ClassTemplate.h` is generated by the build +`sources.cmake`. From the .ui-File the file `include/OpenMS/VISUAL/UIC/ClassTemplate.h` is generated by the build system. ```{note} @@ -47,7 +47,7 @@ Insert round brackets around the method declaration. ### Where can I find the binary installers created? View the binary installers at the [build archive](https://abibuilder.cs.uni-tuebingen.de/archive/openms/OpenMSInstaller/nightly/). -Please verify the creation date of the individual installers, as there may have been an error while creating +Please verify the creation date of the individual installers, as there may have been an error while creating the installer. ## Troubleshooting @@ -66,10 +66,10 @@ The following questions are related to the build system. ### What is CMake? `CMake` builds BuildSystems for different platforms, e.g. VisualStudio Solutions on Windows, Makefiles on Linux etc. -This allows to define in one central location (namely `CMakeLists.txt`) how OpenMS is build and have the platform +This allows to define in one central location (namely `CMakeLists.txt`) how OpenMS is build and have the platform specific stuff handled by `CMake`. -View the [cmake website](http://www.cmake.org) for more information. +View the [cmake website](https://cmake.org) for more information. ### How do I use CMake? 
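As a generic illustration only (not the OpenMS-specific invocation, whose required options such as contrib or Qt paths are not shown here), an out-of-source CMake build typically follows this pattern:

```bash
# Configure into a separate build directory; the build type is an illustrative choice
cmake -S . -B build -DCMAKE_BUILD_TYPE=Release

# Compile with the generated build system, here using 4 parallel jobs
cmake --build build -j 4
```

The generated project files (Makefiles, Visual Studio solutions, ...) end up in `build/`, keeping the source tree clean.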
@@ -161,23 +161,23 @@ This happens whenever the Build-System calls `CMake` (which can be quite often, ### How do I add a new class to the build system? -1. Create the new class in the corresponding sub-folder of the sub-project. The header has to be created - in `src//include/OpenMS` and the `.cpp` file in `src//source`, +1. Create the new class in the corresponding sub-folder of the sub-project. The header has to be created + in `src//include/OpenMS` and the `.cpp` file in `src//source`, e.g., `src/openms/include/OpenMS/FORMAT/NewFileFormat.h` and `src/openms/source/FORMAT/NewFileFormat.cpp`. -2. Add both to the respective `sources.cmake` file in the same directory (e.g., `src/openms/source/FORMAT/` +2. Add both to the respective `sources.cmake` file in the same directory (e.g., `src/openms/source/FORMAT/` and `src/openms/include/OpenMS/FORMAT/`). -3. Add the corresponding class test to `src/tests/class_tests//` +3. Add the corresponding class test to `src/tests/class_tests//` (e.g., `src/tests/class_tests/openms/source/NewFileFormat_test.cpp`). -4. Add the test to the `executables.cmake` file in the test folder +4. Add the test to the `executables.cmake` file in the test folder (e.g., `src/tests/class_tests/openms/executables.cmake`). 5. Add them to git by using the command `git add`. ### How do I add a new directory to the build system? -1. Create two new `sources.cmake` files (one for `src//include/OpenMS/MYDIR`, +1. Create two new `sources.cmake` files (one for `src//include/OpenMS/MYDIR`, one for `src//source/MYDIR`), using existing `sources.cmake` files as template. 2. Add the new `sources.cmake` files to `src//includes.cmake` -3. If you created a new directory directly under `src/openms/source`, then have a look +3. If you created a new directory directly under `src/openms/source`, then have a look at `src/tests/class_tests/openms/executables.cmake`. 4. Add a new section that makes the unit testing system aware of the new (upcoming) tests. 5. Look at the very bottom and augment `TEST_executables`. @@ -193,7 +193,7 @@ available cores of the machine. ## Release -View [preparation of a new OpenMS release](https://github.com/OpenMS/OpenMS/wiki/Preparation-of-a-new-OpenMS-release#release_developer) +View [preparation of a new OpenMS release](https://github.com/OpenMS/OpenMS/wiki/Preparation-of-a-new-OpenMS-release#release_developer) to learn more about contributing to releases. @@ -306,7 +306,7 @@ Imagine you want to debug the TOPPView application and you want it to stop at li Linux: Use `ldd`. -Windows (Visual studio console): See [Dependency Walker](http://www.dependencywalker.com/) (use x86 for 32 bit builds +Windows (Visual studio console): See [Dependency Walker](https://www.dependencywalker.com/) (use x86 for 32 bit builds and the x64 version for 64bit builds. Using the wrong version of depends.exe will give the wrong results) or ``dumpbin /DEPENDENTS OpenMS.dll``. @@ -314,7 +314,7 @@ and the x64 version for 64bit builds. Using the wrong version of depends.exe wil Linux: Use `nm `. -Use `nm -C` to switch on demangling of low-level symbols into their C++-equivalent names. `nm` also accepts .a and .o +Use `nm -C` to switch on demangling of low-level symbols into their C++-equivalent names. `nm` also accepts .a and .o files. Windows (Visual studio console): Use ``dumpbin /ALL ``. @@ -323,13 +323,13 @@ Use dumpbin on object files (.o) or (shared) library files (.lib) or the DLL its ## Cross-platform thoughts -OpenMS runs on three major platforms.. 
Here are the most prominent causes of "it runs on Platform A, but not on B. +OpenMS runs on three major platforms.. Here are the most prominent causes of "it runs on Platform A, but not on B. What now?" ### Reading or writing binary files Reading or writing binary files causes different behaviour. Usually Linux does not make a difference between text-mode -and binary-mode when reading files. This is quite different on Windows as some bytes are interpreted as `EOF`, which +and binary-mode when reading files. This is quite different on Windows as some bytes are interpreted as `EOF`, which lead might to a premature end of the reading process. If reading binary files, make sure that you explicitly state that the file is binary when opening it. @@ -361,8 +361,8 @@ Add a new module [here](https://github.com/OpenMS/OpenMS/edit/develop/doc/doxyge ### How is the parameter documentation for classes derived from DefaultParamHandler created? -Add your class to the program ``OpenMS/doc/doxygen/parameters/DefaultParamHandlerDocumenter.cpp``. This program -generates a html table with the parameters. This table can then be included in the class documentation using the +Add your class to the program ``OpenMS/doc/doxygen/parameters/DefaultParamHandlerDocumenter.cpp``. This program +generates a html table with the parameters. This table can then be included in the class documentation using the following `doxygen` command:`@htmlinclude OpenMS_.parameters`. ```{note} @@ -379,7 +379,7 @@ Test if everything worked by calling `make doc_param_internal`. The parameters d ### How is the command line documentation for TOPP tools created? The program `OpenMS/doc/doxygen/parameters/TOPPDocumenter.cpp` creates the command line documentation for all classes -that are included in the static `ToolHandler.cpp` tools list. It can be included in the documentation using the +that are included in the static `ToolHandler.cpp` tools list. It can be included in the documentation using the following `doxygen` command: `@verbinclude TOPP_.cli` @@ -397,7 +397,7 @@ Read [contributor quickstart guide](/manual/contribute.md). IBM's profiler, available for all platforms (and free for academic use): Purify(Plus) and/or Quantify. -Windows: this is directly supported by Visual Studio (Depending on the edition: Team and above). Follow their +Windows: this is directly supported by Visual Studio (Depending on the edition: Team and above). Follow their documentation. Linux: @@ -425,4 +425,4 @@ Common errors are: * ``'... definitely lost'`` - Memory leak that has to be fixed * ``'... possibly lost'`` - Possible memory leak, so have a look at the code -For more information see the [`valgrind` documentation](http://valgrind.org/docs/manual/) . +For more information see the [`valgrind` documentation](https://valgrind.org/docs/manual/) . diff --git a/docs/manual/develop/developer-guidelines-for-adding-new-dependent-libraries.md b/docs/manual/develop/developer-guidelines-for-adding-new-dependent-libraries.md index 0c38ff15..a4e12d00 100644 --- a/docs/manual/develop/developer-guidelines-for-adding-new-dependent-libraries.md +++ b/docs/manual/develop/developer-guidelines-for-adding-new-dependent-libraries.md @@ -11,8 +11,8 @@ In short, requirements for adding a new library are: ## Indispensable functionality In general, adding a new dependency library (of which we currently have more than a handful, e.g. Xerces-C or ZLib) -imposes a significant integration and maintenance effort. Thus, the new library should add -**indispensable functionality**. 
If the added value does not compensate for the overhead, alternative solutions +imposes a significant integration and maintenance effort. Thus, the new library should add +**indispensable functionality**. If the added value does not compensate for the overhead, alternative solutions encompass: @@ -43,12 +43,12 @@ these platforms. - on **macOS** it should be ensured that the library can be build on recent macOS versions (> 10.10) compiled using the mac specific _libc++_. Ideally the package should be available via **HomeBrew** or **MacPorts** so we can directly use - those libraries instead of shipping them via the contrib. Additionally, the MacPorts and HomeBrew formulas for - building the libraries can serve as blueprints on how to compile the library in a generic setting inside the contrib + those libraries instead of shipping them via the contrib. Additionally, the MacPorts and HomeBrew formulas for + building the libraries can serve as blueprints on how to compile the library in a generic setting inside the contrib which should also be present. - on **Linux** since we (among other distributions) feature an OpenMS Debian package which requires that all -dependencies of OpenMS are available as Debian package as well, the new library must be available (or made available) as +dependencies of OpenMS are available as Debian package as well, the new library must be available (or made available) as Debian package or linked statically during the OpenMS packaging build. @@ -56,9 +56,9 @@ Debian package or linked statically during the OpenMS packaging build. Add a CMake file to `OpenMS/contrib` into the `libraries.cmake` folder on how to build the library. Preferably of course the library supports building with CMake (see Xerces) which makes the script really easy. It should support static and -dynamic builds on every platform. Add the compile flag for position independent code (e.g. `-fpic`) in the static -version. Add patches in the *patches* folder and call them with the macros in the `macros.cmake` file. Create patches -with `diff -Naur original_file my_file > patch.txt`. If there are problems during applying a patch, make sure to double +dynamic builds on every platform. Add the compile flag for position independent code (e.g. `-fpic`) in the static +version. Add patches in the *patches* folder and call them with the macros in the `macros.cmake` file. Create patches +with `diff -Naur original_file my_file > patch.txt`. If there are problems during applying a patch, make sure to double check filepaths in the head of the patch and the call of the patching macro in CMake. - All the libraries need to go into (e.g. copied/installed/moved) to `$buildfolder/lib` diff --git a/docs/manual/develop/link-external-code-to-openms.md b/docs/manual/develop/link-external-code-to-openms.md index 03174608..94a49741 100644 --- a/docs/manual/develop/link-external-code-to-openms.md +++ b/docs/manual/develop/link-external-code-to-openms.md @@ -2,13 +2,13 @@ External Code using OpenMS ========================== If OpenMS' TOPP tools are not enough in a certain scenario, you can either request a change to OpenMS, if you -feel this functionality is useful for others as well, or modify/extend OpenMS privately. For the latter, there are +feel this functionality is useful for others as well, or modify/extend OpenMS privately. For the latter, there are multiple ways to do this: - Modify the developer version of OpenMS by changing existing tools or adding new ones. 
- Use an **External Project** to write a new tool, while not touching OpenMS itself (see below on how to do that). -Once you've finished your new tool, and it only needs to run on the development machine. To ship it to a new client +Once you've finished your new tool, and it only needs to run on the development machine. To ship it to a new client machine, see, read further in this document. ## Compiling external code @@ -17,7 +17,7 @@ It is very easy to set up an environment to write your own programs using OpenMS the source package of OpenMS/TOPP properly. ```{note} -You cannot use the `install` target when working with the development version of OpenMS, it must be built and used +You cannot use the `install` target when working with the development version of OpenMS, it must be built and used within the build tree. ``` @@ -74,9 +74,9 @@ endif(OpenMS_FOUND) ``` The command `project` defines the name of the project, the name is only of interest of you're working in an IDE or want -to export this project's targets. To compile the program, append it to the `my_executables` list. If you use object -files (classes which do not contain a main program), append them to the `my_sources` list. In the next step CMake -creates a statically linked library of the object files, listed in `my_sources`. This simple CMakeLists.txt example can +to export this project's targets. To compile the program, append it to the `my_executables` list. If you use object +files (classes which do not contain a main program), append them to the `my_sources` list. In the next step CMake +creates a statically linked library of the object files, listed in `my_sources`. This simple CMakeLists.txt example can be extended to also build shared libraries, include other external libraries and so on. An example external project can be found in `OpenMS/share/OpenMS/examples/external_code`. Copy these files to a separate @@ -112,7 +112,7 @@ In short: - copy the `OpenMS/share/OpenMS` directory to the client machine (e.g `/share`) and set the environment variable `OPENMS_DATA_PATH` to this directory - copy the OpenMS library (`OpenMS.dll` for Windows or `OpenMS.so/.dylib` for Linux/macOS) to `/bin`. -- copy all Qt4 libraries to the client `/bin` or on Linux/macOS make sure you have installed the Qt4 +- copy all Qt4 libraries to the client `/bin` or on Linux/macOS make sure you have installed the Qt4 package. - [Windows only] copy Xerces dll (see `contrib/lib`) to `/bin` - [Windows only] install the VS redistributable package (see Microsoft Homepage) on the client machine which corresponds diff --git a/docs/manual/glossary.md b/docs/manual/glossary.md index abc36a99..dc57b2ee 100644 --- a/docs/manual/glossary.md +++ b/docs/manual/glossary.md @@ -127,7 +127,7 @@ spectra Plural of spectrum. mass spectrum - A mass spectrum is a plot of the ion signal as a function of the mass-to-charge ratio. A mass spectrum is produced by a single mass spectrometry run. These spectra are used to determine the elemental or isotopic signature of a sample, the masses of particles and of molecules, and to elucidate the chemical identity or structure of molecules and other chemical compounds. OpenMS represents a one dimensional mass spectrum using the class [MSSpectrum](https://openms.de/current_doxygen/html/classOpenMS_1_1MSSpectrum.html). + A mass spectrum is a plot of the ion signal as a function of the mass-to-charge ratio. A mass spectrum is produced by a single mass spectrometry run. 
These spectra are used to determine the elemental or isotopic signature of a sample, the masses of particles and of molecules, and to elucidate the chemical identity or structure of molecules and other chemical compounds. OpenMS represents a one dimensional mass spectrum using the class [MSSpectrum](https://openms.de/current_doxygen/html/classOpenMS_1_1MSSpectrum.html). m/z mass to charge ratio. @@ -175,7 +175,7 @@ SWATH Stands for 'Sequential acquisition of all theoretical fragment ion spectra'. OpenMS API - An interface that allows developers to use OpenMS core library classes and methods. + An interface that allows developers to use OpenMS core library classes and methods. RT Retention time. diff --git a/docs/tutorials/knime-user-tutorial.md b/docs/tutorials/knime-user-tutorial.md index 4163099b..3fd82fe8 100644 --- a/docs/tutorials/knime-user-tutorial.md +++ b/docs/tutorials/knime-user-tutorial.md @@ -245,7 +245,7 @@ knime-user-tutorial/quality-control.md [^1]: OpenMS, OpenMS home page [online]. [^2]: M. Sturm, A. Bertsch, C. Gröpl, A. Hildebrandt, R. Hussong, E. Lange, N. Pfeifer, -O. Schulz-Trieglaff, A. Zerck, K. Reinert, and O. Kohlbacher, OpenMS - an opensource software framework for mass spectrometry., BMC bioinformatics 9(1) +O. Schulz-Trieglaff, A. Zerck, K. Reinert, and O. Kohlbacher, OpenMS - an opensource software framework for mass spectrometry., BMC bioinformatics 9(1) (2008), doi:10.1186/1471-2105-9-163. 7, 83 [^3]: H. L. Röst, T. Sachsenberg, S. Aiche, C. Bielow, H. Weisser, F. Aicheler, S. Andreotti, @@ -254,9 +254,9 @@ software platform for mass spectrometry data analysis, Nature Methods 13(9), 741–748 (2016). 7 [^4]: O. Kohlbacher, K. Reinert, C. Gröpl, E. Lange, N. Pfeifer, O. Schulz-Trieglaff, and -M. Sturm, TOPP–the OpenMS proteomics pipeline., Bioinformatics 23(2) (Jan. +M. Sturm, TOPP–the OpenMS proteomics pipeline., Bioinformatics 23(2) (Jan. 2007). 7, 83 [^5]: M. R. Berthold, N. Cebron, F. Dill, T. R. Gabriel, T. Kötter, T. Meinl, P. Ohl, C. Sieb, K. Thiel, and B. Wiswedel, KNIME: The Konstanz Information Miner, in Studies in Classification, Data Analysis, and Knowledge Organization (GfKL 2007), Springer, 2007. -[^6]: M. Sturm and O. Kohlbacher, TOPPView: An Open-Source Viewer for Mass Spectrometry Data, Journal of proteome research 8(7), 3760–3763 (July 2009), doi:10.1021/pr900171m. 7 +[^6]: M. Sturm and O. Kohlbacher, TOPPView: An Open-Source Viewer for Mass Spectrometry Data, Journal of proteome research 8(7), 3760–3763 (July 2009), doi:10.1021/pr900171m. 7 diff --git a/docs/tutorials/knime-user-tutorial/lfq-metabolites.md b/docs/tutorials/knime-user-tutorial/lfq-metabolites.md index 01cac768..4eeb84fc 100644 --- a/docs/tutorials/knime-user-tutorial/lfq-metabolites.md +++ b/docs/tutorials/knime-user-tutorial/lfq-metabolites.md @@ -3,7 +3,7 @@ Label-free quantification of metabolites ## Introduction -Quantification and identification of chemical compounds are basic tasks in metabolomic studies. In this tutorial session we construct a UPLC-MS based, label-free quantification and identification workflow. Following quantification and identification we then perform statistical downstream analysis to detect quantification values that differ significantly between two conditions. This approach can, for example, be used to detect biomarkers. +Quantification and identification of chemical compounds are basic tasks in metabolomic studies. In this tutorial session we construct a UPLC-MS based, label-free quantification and identification workflow. 
Following quantification and identification we then perform statistical downstream analysis to detect quantification values that differ significantly between two conditions. This approach can, for example, be used to detect biomarkers. Here, we analyze a dataset derived from bacterial cytosolic fractions to investigate the metabolic effects of fosfomycin, an antibiotic that inhibits a key step in peptidoglycan biosynthesis. The study is based on *Bacillus subtilis* cultures subjected to different treatment conditions. @@ -155,8 +155,6 @@ The `FeatureLinkerUnlabeledKD` output can be visualized in TOPPView on top of th |Figure 35: Visualization of .consensusXML output over the .mzML and .featureXML ’layer’.| ## Basic metabolite identification - - At the current state we found several metabolites in the individual maps but so far don’t know what they are. To identify metabolites, OpenMS provides multiple tools, including search by mass: the AccurateMassSearch node searches observed masses against the Human Metabolome Database (HMDB)[^1], [^2], [^3]. We start with the workflow from the previous section (see Figure 34). - Add a **FileConverter** node (**Community Nodes** > **OpenMS** > **File Handling**) and connect the output of the FeatureLinkerUnlabeledKD to the incoming port. @@ -237,7 +235,6 @@ Have a look at the `Column Filter` node to reduce the table to the interesting c Try to compute and visualize the m/z and retention time error of the different feature elements (from the input maps) of each consensus feature. Hint: A nicely configured **Math Formula (Multi Column)** node should suffice.
- ## Identifying Metabolites Using Spectral Libraries diff --git a/docs/tutorials/knime-user-tutorial/lfq-peptide-protein.md b/docs/tutorials/knime-user-tutorial/lfq-peptide-protein.md index cfa016e4..013d6f63 100644 --- a/docs/tutorials/knime-user-tutorial/lfq-peptide-protein.md +++ b/docs/tutorials/knime-user-tutorial/lfq-peptide-protein.md @@ -291,5 +291,5 @@ that contain this protein ID. We plot the protein ID results versus two differen ## References -[^1]: A. Chawade, M. Sandin, J. Teleman, J. Malmström, and F. Levander, Data Processing Has Major Impact on the Outcome of Quantitative Label-Free LC-MS Analysis, Journal of Proteome Research 14(2), 676–687 (2015), PMID: 25407311, -arXiv:http://dx.doi.org/10.1021/pr500665j, doi:10.1021/pr500665j. 30 \ No newline at end of file +[^1]: A. Chawade, M. Sandin, J. Teleman, J. Malmström, and F. Levander, Data Processing Has Major Impact on the Outcome of Quantitative Label-Free LC-MS Analysis, Journal of Proteome Research 14(2), 676–687 (2015), PMID: 25407311, +arXiv:https://doi.org/10.1021/pr500665j, doi:10.1021/pr500665j. 30 \ No newline at end of file diff --git a/docs/tutorials/knime-user-tutorial/msstats.md b/docs/tutorials/knime-user-tutorial/msstats.md index c54d3a8b..ab560693 100644 --- a/docs/tutorials/knime-user-tutorial/msstats.md +++ b/docs/tutorials/knime-user-tutorial/msstats.md @@ -61,7 +61,7 @@ regularly needed if column names contain spaces, tabs or other special character ## Using MSstats in a KNIME workflow The R package `MSstats` can be used for statistical relative quantification of proteins and peptides in mass spectrometry-based proteomics. Supported are label-free as well as labeled experiments in combination with data-dependent, targeted and data independent acquisition. Inputs can be identified and quantified entities (peptides or proteins) and the output is a list of differentially abundant entities, or summaries of their relative abundance. It depends on accurate feature detection, identification -and quantification which can be performed e.g. by an OpenMS workflow. MSstats can be used for data processing & visualization, as well as statistical modeling & inference. Please see [^1] and the [MSstats](http://msstats.org) website for further +and quantification which can be performed e.g. by an OpenMS workflow. MSstats can be used for data processing & visualization, as well as statistical modeling & inference. Please see [^1] and the [MSstats](https://msstats.org) website for further information. ### Identification and quantification of the iPRG2015 data with subsequent MSstats analysis @@ -270,13 +270,13 @@ This matrix has the following properties: We can generate such a matrix in R using the following code snippet in (for example) a new **R to R** node that takes over the R workspace from the previous node with all its variables: ```r -comparison1<-matrix(c(-1,1,0,0),nrow=1) +comparison1<-matrix(c(-1,1,0,0),nrow=1) comparison2<-matrix(c(-1,0,1,0),nrow=1) -comparison3<-matrix(c(-1,0,0,1),nrow=1) +comparison3<-matrix(c(-1,0,0,1),nrow=1) comparison4<-matrix(c(0,-1,1,0),nrow=1) -comparison5<-matrix(c(0,-1,0,1),nrow=1) +comparison5<-matrix(c(0,-1,0,1),nrow=1) comparison6<-matrix(c(0,0,-1,1),nrow=1) comparison <- rbind(comparison1, comparison2, comparison3, comparison4, comparison5, comparison6) @@ -296,22 +296,22 @@ No more parameters need to be set for performing the comparison. In a next R to R node, the results are being processed. 
The following code snippet will rename the spiked-in proteins to A,B,C,D,E, and F and remove the names of other proteins, which will be beneficial for the subsequent visualization, as for example performed in Figure 20: ```r - test.MSstats.cr <- test.MSstats$ComparisonResult + test.MSstats.cr <- test.MSstats$ComparisonResult - # Rename spiked ins to A,B,C.... + # Rename spiked ins to A,B,C.... pnames <- c("A", "B", "C", "D", "E", "F") - names(pnames) <- c( - "sp|P44015|VAC2_YEAST", + names(pnames) <- c( + "sp|P44015|VAC2_YEAST", "sp|P55752|ISCB_YEAST", - "sp|P44374|SFG2_YEAST", - "sp|P44983|UTR6_YEAST", + "sp|P44374|SFG2_YEAST", + "sp|P44983|UTR6_YEAST", "sp|P44683|PGA4_YEAST", - "sp|P55249|ZRT4_YEAST" - ) + "sp|P55249|ZRT4_YEAST" + ) test.MSstats.cr.spikedins <- bind_rows( @@ -325,8 +325,8 @@ In a next R to R node, the results are being processed. The following code snipp test.MSstats.cr[grep("P44983", test.MSstats.cr$Protein),], - test.MSstats.cr[grep("P55249", test.MSstats.cr$Protein),] - ) + test.MSstats.cr[grep("P55249", test.MSstats.cr$Protein),] + ) # Rename Proteins test.MSstats.cr.spikedins$Protein <- sapply(test.MSstats.cr.spikedins$Protein, function(x) {pnames[as.character(x)]}) @@ -336,13 +336,13 @@ In a next R to R node, the results are being processed. The following code snipp test.MSstats.cr$Protein <- sapply(test.MSstats.cr$Protein, function(x) { - x <- as.character(x) + x <- as.character(x) if (x %in% names(pnames)) { - return(pnames[as.character(x)]) - } else { + return(pnames[as.character(x)]) + } else { return("") } @@ -355,7 +355,7 @@ In a next R to R node, the results are being processed. The following code snipp The last four nodes, each connected and making use of the same workspace from the last node, will export the results to a textual representation and volcano plots for further inspection. Firstly, quality control can be performed with the following snippet: ```r -qcplot <- dataProcessPlots(processed.quant, type="QCplot", +qcplot <- dataProcessPlots(processed.quant, type="QCplot", ylimDown=0, which.Protein = 'allonly', @@ -423,7 +423,7 @@ Please import the workflow from {path}`Workflows,Identificationquantificationiso The R package `MSstatsTMT` can be used for protein significance analysis in shotgun mass spectrometry-based proteomic experiments with tandem mass tag (TMT) labeling. `MSstatsTMT` provides functionality for two types of analysis & their visualization: Protein summarization based on peptide quantification and Model-based group comparison to detect significant changes in abundance. It depends on accurate feature detection, identification and quantification which can be performed e.g. by an OpenMS workflow. -In general, `MSstatsTMT` can be used for data processing & visualization, as well as statistical modeling. Please see [^3] and the [MSstats](http://msstats.org/msstatstmt/) website for further information. +In general, `MSstatsTMT` can be used for data processing & visualization, as well as statistical modeling. Please see [^3] and the [MSstats](https://msstats.org/msstatstmt/) website for further information. There is also an [online lecture](https://youtu.be/3CDnrQxGLbA) and tutorial for `MSstatsTMT` from the May Institute Workshop 2020. @@ -557,43 +557,43 @@ processed.data <- OpenMStoMSstatsTMTFormat(data) Afterwards different normalization steps are performed (global, protein, runs) as well as data imputation by using the msstats method. In addition peptide level data is summarized to protein level data. 
```r -quant.data <- proteinSummarization(processed.data, +quant.data <- proteinSummarization(processed.data, method="msstats", - global_norm=TRUE, + global_norm=TRUE, reference_norm=TRUE, - MBimpute = TRUE, + MBimpute = TRUE, maxQuantileforCensored = NULL, remove_norm_channel = TRUE, remove_empty_channel = TRUE) ``` -There a lot of different possibilities to configure this method please have a look at the MSstatsTMT package for [additional detailed information](http://bioconductor.org/packages/release/bioc/html/MSstatsTMT.html). +There a lot of different possibilities to configure this method please have a look at the MSstatsTMT package for [additional detailed information](https://bioconductor.org/packages/release/bioc/html/MSstatsTMT.html). The next step is the comparions of the different conditions, here either a pairwise comparision can be performed or a confusion matrix can be created. The goal is to detect and compare the UPS peptides spiked in at different concentrations. ```r -# prepare contrast matrix -unique(quant.data$Condition) +# prepare contrast matrix +unique(quant.data$Condition) comparison<-matrix(c(-1,0,0,1, - 0,-1,0,1, + 0,-1,0,1, 0,0,-1,1, - 0,1,-1,0, - 1,-1,0,0), nrow=5, byrow = T) + 0,1,-1,0, + 1,-1,0,0), nrow=5, byrow = T) -# Set the names of each row +# Set the names of each row row.names(comparison)<- contrasts <- c("1-0125", - "1-05", + "1-05", "1-0667", - "05-0667", + "05-0667", "0125-05") # Set the column names @@ -603,7 +603,7 @@ colnames(comparison)<- c("0.125", "0.5", "0.667", "1") The constructed confusion matrix is used in the `groupComparisonTMT` function to test for significant changes in protein abundance across conditions based on a family of linear mixed-effects models in TMT experiments. ```r -data.res <- groupComparisonTMT(data = quant.data, +data.res <- groupComparisonTMT(data = quant.data, contrast.matrix = comparison, moderated = TRUE, # do moderated t test @@ -615,7 +615,7 @@ data.res <- data.res %>% filter(!is.na(Protein)) In the next step the comparison can be plotted using the `groupComparisonPlots` function by `MSstats`. ```r -library(MSstats) +library(MSstats) groupComparisonPlots(data=data.res.mod, type="VolcanoPlot", address=F, which.Comparison = "0125-05", sig = 0.05) ``` @@ -634,7 +634,7 @@ The isobaric analysis does not always has to be performed on protein level, for ## References -[^1]: A. Chawade, M. Sandin, J. Teleman, J. Malmström, and F. Levander, Data Processing Has Major Impact on the Outcome of Quantitative Label-Free LC-MS Analysis, Journal of Proteome Research 14(2), 676–687 (2015), PMID: 25407311, arXiv:http://dx.doi.org/10.1021/pr500665j, doi:10.1021/pr500665j. 30 +[^1]: A. Chawade, M. Sandin, J. Teleman, J. Malmström, and F. Levander, Data Processing Has Major Impact on the Outcome of Quantitative Label-Free LC-MS Analysis, Journal of Proteome Research 14(2), 676–687 (2015), PMID: 25407311, arXiv:https://doi.org/10.1021/pr500665j, doi:10.1021/pr500665j. 30 [^2]: M. Choi, Z. F. Eren-Dogu, C. Colangelo, J. Cottrell, M. R. Hoopmann, E. A. Kapp, S. Kim, H. Lam, T. A. Neubert, M. Palmblad, B. S. Phinney, S. T. Weintraub, B. MacLean, and O. 
Vitek, ABRF Proteome Informatics Research Group (iPRG) diff --git a/docs/tutorials/knime-user-tutorial/openswath-metabolomics.md b/docs/tutorials/knime-user-tutorial/openswath-metabolomics.md index abc82031..b9aa863d 100644 --- a/docs/tutorials/knime-user-tutorial/openswath-metabolomics.md +++ b/docs/tutorials/knime-user-tutorial/openswath-metabolomics.md @@ -47,15 +47,15 @@ We suggest do use a virtual environment for the Python 3 installation on windows 2. Activate `py39` environment. ```bash conda activate py39 - ``` + ``` 3. Install pip (see above). 4. On the command line: ```bash - python -m pip install -U pip - python -m pip install -U numpy + python -m pip install -U pip + python -m pip install -U numpy python -m pip install -U pandas - - python -m pip install -U pyprophet + + python -m pip install -U pyprophet python -m pip install -U pyopenms ``` @@ -73,11 +73,11 @@ We suggest do use a virtual environment for the Python 3 installation on Mac. He ``` 3. On the Terminal: ```bash - python -m pip install -U pip - python -m pip install -U numpy + python -m pip install -U pip + python -m pip install -U numpy python -m pip install -U pandas - - python -m pip install -U pyprophet + + python -m pip install -U pyprophet python -m pip install -U pyopenms ``` @@ -90,12 +90,12 @@ Use your package manager apt-get or yum, where possible. 3. Install setuptools (Debian/RedHat: python-setuptools). 4. On the Terminal: ```bash - python -m pip install -U pip - python -m pip install -U numpy + python -m pip install -U pip + python -m pip install -U numpy python -m pip install -U pandas - - python -m pip install -U pyprophet - python -m pip install -U pyopenms + + python -m pip install -U pyprophet + python -m pip install -U pyopenms ``` ## Benchmark data diff --git a/docs/tutorials/knime-user-tutorial/openswath.md b/docs/tutorials/knime-user-tutorial/openswath.md index 4bea0efc..d929c01a 100644 --- a/docs/tutorials/knime-user-tutorial/openswath.md +++ b/docs/tutorials/knime-user-tutorial/openswath.md @@ -3,7 +3,7 @@ OpenSWATH ## Introduction -[OpenSWATH](http://openswath.org/en/latest/index.html) [^3] allows the analysis of LC-MS/MS DIA (data independent acquisition) data using the approach described by Gillet *et al*. [^4]. The DIA approach described there uses 32 cycles to iterate through precursor ion windows from 400-426 Da to 1175-1201 Da and at each step acquires a complete, multiplexed fragment ion spectrum of all precursors present in that window. After 32 fragmentations (or 3.2 seconds), the cycle is restarted and the first window (400-426 Da) is fragmented again, thus delivering complete “snapshots” of all fragments of a specific window every 3.2 seconds. +[OpenSWATH](https://openswath.org/en/latest/index.html) [^3] allows the analysis of LC-MS/MS DIA (data independent acquisition) data using the approach described by Gillet *et al*. [^4]. The DIA approach described there uses 32 cycles to iterate through precursor ion windows from 400-426 Da to 1175-1201 Da and at each step acquires a complete, multiplexed fragment ion spectrum of all precursors present in that window. After 32 fragmentations (or 3.2 seconds), the cycle is restarted and the first window (400-426 Da) is fragmented again, thus delivering complete “snapshots” of all fragments of a specific window every 3.2 seconds. The analysis approach described by Gillet et al. 
extracts ion traces of specific fragment ions from all MS2 spectra that have the same precursor isolation window, thus generating data that is very similar to SRM traces.

## Installation of OpenSWATH
@@ -12,7 +12,7 @@ OpenSWATH has been fully integrated since OpenMS 1.10 [^2], [^1]

## Installation of mProphet

-mProphet[^8] is available as standalone script in {path}`External_Tools,mProphet` or can be downloaded [here](https://github.com/OpenMS/OpenMS-Tutorials/releases/download/data-and-tools-OpenMSv2.0.0/External_Tools.zip). [R](http://www.r-project.org/) and the package [MASS](http://cran.r-project.org/web/packages/MASS/) are further required to execute mProphet. Please obtain a version for either Windows, Mac or Linux directly from CRAN.
+mProphet[^8] is available as a standalone script in {path}`External_Tools,mProphet` or can be downloaded [here](https://github.com/OpenMS/OpenMS-Tutorials/releases/download/data-and-tools-OpenMSv2.0.0/External_Tools.zip). [R](https://www.r-project.org/) and the package [MASS](https://cran.r-project.org/web/packages/MASS/) are further required to execute mProphet. Please obtain a version for either Windows, Mac or Linux directly from CRAN.

PyProphet, a much faster reimplementation of the mProphet algorithm, is available from [PyPI](https://pypi.python.org/pypi/pyprophet/). The usage of pyprophet instead of mProphet is suggested for large-scale applications. mProphet will be used in this tutorial.
@@ -94,7 +94,7 @@ Use transition for peptidoform inference using IPF. (0)
Use transition to quantify peak group. (1)

-For further instructions about generic transition list and assay library generation please see the following [link](http://openswath.org/en/latest/docs/generic.html).
+For further instructions about generic transition list and assay library generation, please see the following [link](https://openswath.org/en/latest/docs/generic.html).

To convert transition lists to TraML, use the TargetedFileConverter. Please use the absolute path to your OpenMS installation.

**Linux or Mac**
@@ -166,21 +166,21 @@ Please note that due to the semi-supervised machine learning approach of mProphe
|Figure 44: OpenSWATH KNIME Workflow.|

Additionally, the chromatogram output (.mzML) can be visualized for inspection with TOPPView.

-For additional instructions on how to use pyProphet instead of mProphet please have a look at the [PyProphet Legacy Workflow](http://openswath.org/en/latest/docs/pyprophet_legacy.html). If you want to use the SQLite-based workflow in your lab in the future, please have a look [here](http://openswath.org/en/latest/docs/pyprophet.html). The SQLite-based workflow will not be part of the tutorial.
+For additional instructions on how to use pyProphet instead of mProphet, please have a look at the [PyProphet Legacy Workflow](https://openswath.org/en/latest/docs/pyprophet_legacy.html). If you want to use the SQLite-based workflow in your lab in the future, please have a look [here](https://openswath.org/en/latest/docs/pyprophet.html). The SQLite-based workflow will not be part of the tutorial.

## From the example dataset to real-life applications

-The sample dataset used in this tutorial is part of the larger SWATH MS Gold Standard (SGS) dataset which is described in the publication of Roest *et al.*[^3]. It contains one of 90 SWATH-MS runs with significant data reduction (peak picking of the raw, profile data) to make file transfer and working with it easier. Usually SWATH-MS datasets are huge with several gigabyte per run. Especially when complex samples in combination with large assay libraries are analyzed, the TOPP tool based workflow requires a lot of computational resources. Additional information and instruction can be found at the following [link](http://openswath.org/en/latest/).
+The sample dataset used in this tutorial is part of the larger SWATH MS Gold Standard (SGS) dataset, which is described in the publication of Roest *et al.*[^3]. It contains one of 90 SWATH-MS runs with significant data reduction (peak picking of the raw, profile data) to make file transfer and working with it easier. Usually, SWATH-MS datasets are huge, with several gigabytes per run. Especially when complex samples in combination with large assay libraries are analyzed, the TOPP tool based workflow requires a lot of computational resources. Additional information and instructions can be found at the following [link](https://openswath.org/en/latest/).

## References

[^1]: M. Sturm, A. Bertsch, C. Gröpl, A. Hildebrandt, R. Hussong, E. Lange, N. Pfeifer,
-O. Schulz-Trieglaff, A. Zerck, K. Reinert, and O. Kohlbacher, OpenMS - an opensource software framework for mass spectrometry., BMC bioinformatics 9(1) 
+O. Schulz-Trieglaff, A. Zerck, K. Reinert, and O. Kohlbacher, OpenMS - an open-source software framework for mass spectrometry., BMC bioinformatics 9(1)
(2008), doi:10.1186/1471-2105-9-163. 7, 83
[^2]: O. Kohlbacher, K. Reinert, C. Gröpl, E. Lange, N. Pfeifer, O. Schulz-Trieglaff, and
-M. Sturm, TOPP–the OpenMS proteomics pipeline., Bioinformatics 23(2) (Jan. 
+M. Sturm, TOPP–the OpenMS proteomics pipeline., Bioinformatics 23(2) (Jan.
2007). 7, 83
[^3]: H. L. Röst, G. Rosenberger, P. Navarro, L. Gillet, S. M. Miladinovic, O. T. Schubert, W. Wolski, B. C. Collins, J. Malmstrom, L. Malmström, and R. Aebersold,
diff --git a/docs/tutorials/knime-user-tutorial/quality-control.md b/docs/tutorials/knime-user-tutorial/quality-control.md
index a33b44e7..a8aa8c8d 100644
--- a/docs/tutorials/knime-user-tutorial/quality-control.md
+++ b/docs/tutorials/knime-user-tutorial/quality-control.md
@@ -41,12 +41,12 @@ Import the workflow from {path}`Workflows,Quality Control,QC Metanodes.zip` by n
R Dependencies: This section requires that the R packages `ggplot2` and `scales` are both installed. This is the same procedure as in this section. In case you use an R installation where one or both of them are not yet installed, open the **R Snippet** nodes inside the metanodes you just used (double-click). Edit the script in the *R Script* text editor from:
```r
-#install.packages("ggplot2") 
+#install.packages("ggplot2")
#install.packages("scales")
```
to
```r
-install.packages("ggplot2") 
+install.packages("ggplot2")
install.packages("scales")
```
Press **Eval script** to execute the script.
@@ -73,14 +73,14 @@ We can also add brand new QC metrics to our qcML files. Remember the **Histogram
- Edit the **R View (table)** by adding the *R Script* according to this:
  ```r
- #install.packages("ggplot2") 
-library("ggplot2") 
+ #install.packages("ggplot2")
+library("ggplot2")
ggplot(knime.in, aes(x=peptide_charge)) +
-
- geom_histogram(binwidth=1, origin =-0.5) + 
+
+ geom_histogram(binwidth=1, origin =-0.5) +
scale_x_discrete() +
-
- ggtitle("Identified peptides charge histogram") + 
+
+ ggtitle("Identified peptides charge histogram") +
ylab("Count")
  ```
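
As a side note for testing: the `knime.in` table used in the charge-histogram snippet above is only provided inside KNIME's **R View (table)** node, so that script cannot be pasted into a plain R session as is. The following is a minimal sketch for trying the plot outside of KNIME; the small `toy.peptides` data frame and its values are made-up stand-ins for `knime.in`, kept only so the example is self-contained.

```r
# install.packages("ggplot2")   # uncomment if ggplot2 is not installed yet
library("ggplot2")

# Hypothetical stand-in for the KNIME-provided knime.in table; the column name
# peptide_charge matches the snippet above, the charge values are invented.
toy.peptides <- data.frame(peptide_charge = c(2, 2, 2, 2, 3, 3, 4))

ggplot(toy.peptides, aes(x = peptide_charge)) +
  # center = 0 keeps each integer charge in the middle of its bar; the snippet
  # above uses the older origin = -0.5 argument, which recent ggplot2 releases
  # replace with center/boundary.
  geom_histogram(binwidth = 1, center = 0) +
  ggtitle("Identified peptides charge histogram") +
  ylab("Count")
```

Inside KNIME itself only the `library()` and plotting calls are needed, since the **R View (table)** node fills `knime.in` with its input table automatically.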