# Genome Evaluation Pipeline (GEP)
<br>
* User-friendly and **all-in-one quality control and evaluation** Snakemake pipeline for genome assemblies
* Run **multiple genome evaluations** in parallel (as many as you want!)
* **Scales to server and HPC environments**
* Required **software stack automatically deployed** using Conda
---
# The GEP Workflow - Two *Modes*
### 1. **Build Mode**:
Builds meryl k-mer databases from sequencing reads. If you have already generated these databases, skip to **Evaluate Mode**.
---
#### Requires:
- Raw Illumina short-insert whole-genome sequencing reads
and/or
- Raw PacBio HiFi sequencing reads
---
#### *Process* and **tools**:
- *(Optionally) trim raw sequencing reads*
- **Trim Galore** (remove Illumina adapters) and **Trimmomatic** (remove 10x barcodes if present) for Illumina reads
  - **Cutadapt** (remove SMRTbell adapters) for HiFi reads
- *(Optionally) perform quality check of raw sequencing reads*
- **FastQC** and **MultiQC**
- *Build k-mer databases from raw sequencing reads to be used for evaluation*
- **Meryl**
<br>
![](https://i.imgur.com/YKSGmmj.png)
<br>
---
### 2. **Evaluation Mode:**
Runs numerous whole-genome assembly evaluation tools, generating a user-friendly report for easy viewing and distribution (as well as individual outputs as normally generated by the tools in a well-defined folder structure).
---
Requires:
- Meryl k-mer databases (as produced by **Build Mode**)
- Genome Assembly(s) to evaluate - including (optionally) phased alternate assembly
- (Optionally) Hi-C libraries
- (Optionally) Estimated genome sizes
---
*Process* and **tools**:
- *K-mer completeness/correctness, QV Score, Spectra-CN (k-mer multiplicity) plots*
- **Merqury**
- *Genome Profiling based on raw sequencing reads*
- **GenomeScope2**
- *Extensive contiguity statistics such as N# and L# stats, Scaffold and contig counts, GC Content, etc.*
- **Python script**
- *Gene completeness/correctness (i.e. recovered complete, duplicated, missing BUSCOs)*
- **BUSCO5**
- *(Optionally) Hi-C contact maps*
- **BWA-MEM2** and **samtools** for aligning Hi-C data
- **Pretext** for generating contact maps
<br>
![](https://i.imgur.com/JdAj5vz.png)
---
## **Downloading the workflow**
To clone the repository, use the following command:
```
git clone https://git.imp.fu-berlin.de/cmazzoni/GEP.git
```
---
## Installing Conda
- Conda (v4.11+) *but may work on older versions*
If you already have conda installed on your system, please skip to [Creating our Snakemake conda environment](#creating-our-snakemake-conda-environment)
Download the linux Miniconda3 installer from the following URL: https://docs.conda.io/en/latest/miniconda.html
Run the miniconda3 installation and check if it worked:
```
bash /<your_path_to>/Miniconda3-latest-Linux-x86_64.sh
##Follow miniconda3 installation instructions##
source ~/.bashrc
conda update conda
```
If you see `conda: command not found`, please close and re-open your terminal for the conda installation to take effect, and then run the update.
---
<div id="creating-our-snakemake-conda-environment"></div>
## Creating our GEP (Snakemake) Conda environment
The pipeline requires the following software to run (some versions need not match exactly):
- snakemake (v7.8.5)
- beautifulsoup4 (v4.9.3)
- pandas (v1.4.2)
- numpy (v1.22.3)
- python (v3.10)
- mamba (v0.25)
The easiest method to install this software stack is to create a GEP conda environment with the provided `installGEP.yaml` (see **Note** below)
```
conda env create -f /<your_path_to>/GEP/installGEP.yaml
conda activate GEP
##check snakemake installed correctly
snakemake --version
```
You may need to close and reopen your terminal after the `conda env create ...` command, or use `source ~/.bashrc` depending on your system.
**Note:** *If you already have a Snakemake (or suitable Python) installation and would like to avoid installing again, ensure that all of the above software are in your `PATH`. If you do this instead of installing from the provided GEP environment (`installGEP.yaml`), you will still need at least a base conda installed and activated, as it is required to handle the software dependencies of all the tools used within the workflow itself.*
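If you do rely on an existing installation rather than `installGEP.yaml`, a quick way to confirm the required tools are visible on your `PATH` (a minimal sketch; adjust to your own shell and setup) is:
```
which conda mamba snakemake python
snakemake --version
mamba --version
```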
<br>
---
<div id="configure-sample"></div>
## **Configure Sample Sheet `.tsv`s**
**IMPORTANT:** GEP can run with *only one* of the below sample sheets at a given time. A sample sheet *can* however include multiple different samples, datasets, and assemblies for a given run.
**Depending on which sample sheet is provided, GEP will automatically run in one of the following two modes:**
1. **Build**
- This mode is run if either the [Illumina Sample Sheet `.tsv`](#illumina-sample) or the [PacBio HiFi Sample Sheet `.tsv`](#hifi-sample) are provided to GEP.
2. **Evaluate**
- This mode is run if the [Assembly Evaluation Sample Sheet `.tsv`](#assembly-eval) is provided.
If you already have meryl k-mer databases [(see the meryl github for more details)](https://github.com/marbl/merqury/wiki/1.-Prepare-meryl-dbs) for the genomes you wish to evaluate, you can skip **Build** mode.
<br>
---
<div id="illumina-sample"></div>
#### Illumina Sample Sheet `.tsv` - for **Build** mode
(see example [GEP/configuration/exampleSampleSheets/build_illumina_example.tsv](configuration/exampleSampleSheets/build_illumina_example.tsv))
|sample|Library_R1|Library_R2|meryl_kmer_size| trim10X| trimAdapters| fastQC|
|:----|:---|:---|:---|:---|:----|:---|
|Your preferred identifier (Results will include this sample name in output files) <br /><br />**`string`** | **Full path** to Forward (R1) read of PE library/s **in `fastq` format. Can be `.gz`ipped** | **Full path** to Reverse (R2) read of PE library/s **in `fastq` format. Can be `.gz`ipped**| Your choice of k-mer size used to count k-mers in the reads. <br /><br /> **Recommended for Illumina reads is `21`**| Remove first 23bp from R1 of the library. This is only if your reads were sequenced using the 10X sequencing platform.<br /><br /> **Possible options are `True` or `False`** | Check for and remove any other sequencing adapters that may still be present in the reads.<br /><br /> **Possible options are `True` or `False`** |Run FastQC on the library pair provided in `Library_R1` and `Library_R2` columns. <br /><br /> **Possible options are `True` or `False`**|
<br>
Additional Info in case of multiple PE libraries for a single sample:
- **sample**: Provide the same sample ID for each of the pairs of a sample (the final output will be a single `.meryl` database consisting of k-mers from all PE libraries given for a sample, with this identifier as a prefix). Every line with the same unique sample ID will be considered as coming from the same sample.
- **Library_R1** and **Library_R2**: Each library pair is provided as one line in the tsv. If you have three PE libraries, then you will have three lines in the tsv for this sample.
- **meryl_kmer_size**: This should be consistent for libraries that have the same sample ID.
If you are a little uncertain about which short-read libraries you should use, see [How to choose your illumina libraries](#how-to-choose-your-illumina-libraries) further down the page.
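For illustration, a Build (Illumina) sample sheet for a single sample sequenced across two PE libraries could look like the sketch below (columns separated by whitespace/tabs; the sample name and paths are placeholders, mirroring the bundled example sheet):
```
sample     Library_R1                             Library_R2                             meryl_kmer_size  trim10X  trimAdapters  fastQC
organismX  /<pathto>/organismX_Library1_R1.fq.gz  /<pathto>/organismX_Library1_R2.fq.gz  21               False    False         True
organismX  /<pathto>/organismX_Library2_R1.fq.gz  /<pathto>/organismX_Library2_R2.fq.gz  21               False    False         True
```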
<br>
---
<div id="hifi-sample"></div>
#### PacBio HiFi Sample Sheet `.tsv` - for **Build** mode
(see example [GEP/configuration/exampleSampleSheets/build_hifi_example.tsv](configuration/exampleSampleSheets/build_hifi_example.tsv))
|sample|hifi_reads|meryl_kmer_size| trimSMRTbell| fastQC|
|:----|:---|:---|:----|:---|
|Your preferred identifier (Results will include this sample name in output files) <br /><br />**`string`** | **Full path** to PacBio HiFi library **in `fastq` format. Can be `.gz`ipped** | Your choice of k-mer size used to count k-mers in the reads. <br /><br /> **Recommended for PacBio HiFi is `31`**| Check for and remove any SMRT-bell adapter sequences. <br /><br /> **Possible options are `True` or `False`** | Run FastQC on the library provided in `hifi_reads` column. <br /><br /> **Possible options are `True` or `False`** |
<br>
Additional Info in case of multiple HiFi libraries for a single sample:
- **sample**: Provide the same sample ID for each HiFi library of a sample (the final output will be a single `.meryl` database consisting of k-mers from all HiFi libraries given for a sample, with this identifier as a prefix). Every line with the same unique sample ID will be considered as coming from the same sample.
- **hifi_reads**: Each library is provided as one line in the tsv. If you have three HiFi libraries for the same sample, then you will have three lines in the tsv for this sample.
- **meryl_kmer_size**: This should be consistent for libraries that have the same sample ID.
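For illustration, a Build (HiFi) sample sheet for a single sample with two HiFi libraries could look like this (paths are placeholders, mirroring the bundled example sheet):
```
sample     hifi_reads                            meryl_kmer_size  trimSMRTbell  fastQC
organismX  /<pathto>/organismX_Hifi_Library1.fq  31               False         True
organismX  /<pathto>/organismX_Hifi_Library2.fq  31               False         True
```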
<br>
---
**Note:** You may also wish to concatenate the libraries of a sample together prior to building the database. In this case, you will only need to provide one line per sample in the respective sample sheets. However, the execution run-time will be hindered as the pipeline is designed to run on multiple libraries in parallel.
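As a rough sketch of such a concatenation (hypothetical file names; gzipped `fastq` files can be concatenated directly, and plain `fastq` files work the same way):
```
cat organismX_Library1_R1.fq.gz organismX_Library2_R1.fq.gz > organismX_all_R1.fq.gz
cat organismX_Library1_R2.fq.gz organismX_Library2_R2.fq.gz > organismX_all_R2.fq.gz
```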
---
<div id="assembly-eval"></div>
#### Assembly Evaluation Sample Sheet `.tsv` - for **Evaluate** mode
See [GEP/configuration/exampleSampleSheets/runEval_example.tsv](configuration/exampleSampleSheets/runEval_example.tsv) for an idea of how to set up the evaluation sample sheet.
|ID|ASM_LEVEL | PRI_asm|ALT_asm| merylDB| merylDB_kmer |genomeSize|HiC_R1|HiC_R2|
|:----|:---|:---|:---|:----|:---|:---|:---|:------------------|
|Identifier for results and reporting| Level that genome is assembled to. <br /><br /> **Possible options are `chr` (chromosome), `scaff` (scaffold), or `cont` (contig)** |Full path to primary assembly you wish to evaluate in `fasta` format (any extension allowed). <br /><br /> **Can be `.gz`ipped** | Full path to alternate assembly (haplotype) in `fasta` format (any extension allowed). <br /><br /> **Can be `.gz`ipped.** <br /><br /> **If you do not have one, write** `None` | Full path to `.meryl` database. <br /><br /> **Must have `.meryl` extension**| The k-mer size used to build the provided corresponding `.meryl` database. | Provide a size estimate (in bp and as an `integer`) for the corresponding assembly/species. <br /><br /> **If you do not have one, write** `auto` |Full path to Forward (R1) reads of HiC library* in `fastq` format. <br /><br /> **Can be `.gz`ipped** |Full path to Reverse (R2) reads of HiC library* in `fastq` format. <br /><br /> **Can be `.gz`ipped**|
*Please concatenate all HiC Libraries for a sample into a single library pair (one file for R1 and one file for R2), providing this concatenated pair to the sample sheet
#### Difference between `sample` column in Build Mode sample sheets and `ID` column in Evaluate Mode sample sheet
In practice, `sample` from the Build Mode sample sheets and `ID` from the Evaluate Mode sample sheet may often be synonymous; however, they are defined separately for cases where multiple assemblies (given unique IDs in the sample sheet) are evaluated against the same k-mer database from a single `sample`, and so the results need to be differentiated.
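As a minimal illustration of this (hypothetical IDs and paths, following the columns described above; see the bundled `runEval_example.tsv` for a complete example), two assemblies of the same species evaluated against the same meryl database simply receive distinct IDs:
```
ID                ASM_LEVEL  PRI_asm                           ALT_asm  merylDB                              merylDB_kmer  genomeSize  HiC_R1  HiC_R2
speciesX_hifiasm  scaff      /<pathto>/speciesX.hifiasm.fasta  None     /<pathTo>/speciesX.illuminaDB.meryl  21            auto        None    None
speciesX_assemb2  scaff      /<pathto>/speciesX.assemb2.fasta  None     /<pathTo>/speciesX.illuminaDB.meryl  21            auto        None    None
```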
---
### Configuration `.yaml`
For both **Build** and **Evaluate** modes, you will need to configure the `config.yaml` found in the [GEP/configuration](configuration/config.yaml) folder.
```
Results: "<path>"
samplesTSV: "<path>"
busco5Lineage: "<nameofbuscoDB>"
resources: "configuration/define_resources.yaml"
```
##### `Results:`
Provide a path where all the results for the run will be stored. It does not have to be inside the project folder (it can be any location accessible by the user). You also do not need to create this folder yourself; Snakemake will create it if it does not yet exist (together with any non-existent parent folders).
##### `samplesTSV:`
This is where you provide the full path for **one** of the sample sheets after you have filled out the necessary information. Depending on which kind of sample sheet you provide, GEP will run the corresponding mode (**Build** - Illumina, **Build** - HiFi, or **Evaluate**)
##### `busco5Lineage` - Only required for **Evaluate** mode
Busco needs a database to be able to run. Here you have a couple of different options.
Have a look at all the busco databases here https://busco-data.ezlab.org/v5/data/lineages/.
**You can simply provide the taxonomy name that you wish to use, and the latest database matching the name provided will be automatically downloaded prior to execution**.
```
busco5Lineage: "vertebrata"
```
**OR** you can manually download and unpack your desired BUSCO lineage and provide the full path to it. Alternatively, you may already have the necessary BUSCO database on your system and do not wish for GEP to re-download it. In this case, provide the path to the already existing BUSCO db:
```
busco5Lineage: "/<PathTo>/manuallyDownloaded/vertebrata_odb10"
```
*If a busco database matching the given name already exists in the `GEP/buscoLineage/` folder, either from a manual download or from an automatic download during a previously executed run, then the pipeline will skip re-downloading.*
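Putting the above together, a filled-out `configuration/config.yaml` might look like the following sketch (the `Results` and `samplesTSV` paths are placeholders for illustration):
```
Results: "/<pathTo>/GEP_results/run1"
samplesTSV: "/<pathTo>/GEP/configuration/exampleSampleSheets/runEval_example.tsv"
busco5Lineage: "vertebrata"
resources: "configuration/define_resources.yaml"
```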
---
### Configure run (some more `.yaml`'s)
The user will also need to configure a separate `config.yaml` pertaining to the execution of Snakemake, depending on whether or not GEP will be executed on HPC infrastructure.
#### Default (non-HPC) configuration
Modify the Snakemake parameters in the file located at [GEP/SUBMIT_CONFIG/**default**/config.yaml](GEP/SUBMIT_CONFIG/default/config.yaml).
Here the user can provide a key-value pair for any Snakemake parameter, such as the maximum number of threads or jobs to utilise at any given time for the entire execution. Some commonly used parameters are already included in the provided file, but the user can add any Snakemake parameter to it (run `snakemake --help` for possible options).
```
#Example
cores: 64
dry-run: False
use-conda: True
latency-wait: 60
rerun-incomplete: True
printshellcmds: False
keep-going: False
```
#### HPC (SLURM) configuration
Modify the Snakemake parameters in the file located at [GEP/SUBMIT_CONFIG/**slurm**/config.yaml](GEP/SUBMIT_CONFIG/slurm/config.yaml). Similar to the default execution, any Snakemake-specific parameter can be configured here. Additionally, the user can provide default resources to be used by every job submitted to the Slurm scheduler, such as `--partition` and `--qos`.
---
## `cores`, `jobs`, `resources`, and parallelisation
Memory (in Mb), wall-time, and number of threads for non-locally executed jobs are defined in the file [GEP/configuration/define_resources.yaml](GEP/configuration/define_resources.yaml). The current values are likely overestimates and should allow GEP to execute without any issues; increase or decrease them as required. **If running on non-HPC infrastructure the wall-time will be ignored.**
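For instance, the BUSCO entry in `define_resources.yaml` looks similar to the excerpt below (the exact values shipped with GEP may differ); raising or lowering `mem_mb`, `time`, or `threads` here changes what each submitted BUSCO job requests:
```
busco5:
  jobname: GEP.{rule}.{wildcards.asmID}
  mem_mb: 150000
  time: "1-00:00:00"
  threads: 16
```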
The Snakemake parameter `cores` (as found in `SUBMIT_CONFIG/<default_OR_slurm>/config.yaml`) is the maximum number of cores (synonymous with threads in this case) GEP will utilise at any time during execution. **For example, during Evaluation Mode, if a single BUSCO job uses 16 threads (as is currently set in define_resources.yaml), then by setting the maximum number of cores to 64, GEP will run up to (64/16 =) 4 BUSCO jobs in parallel. For other steps that require fewer than 16 threads, the number of jobs will be scaled up to utilise as many of the total requested cores as possible without exceeding the maximum.**
Similarly, the user can provide a maximum number of jobs (e.g. `jobs: 10`). **GEP will never exceed either the maximum number of cores or the maximum number of jobs, whichever is reached first.** In the above example, **if the user were to provide `cores: 64` AND `jobs: 2`, then GEP will only ever run two jobs in parallel, regardless of whether the maximum number of cores has been reached.** The user needs to consider how many samples there are (and therefore how many instances of a single step will be run), as well as how many resources are available.
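As a concrete illustration of this interplay, consider the following (hypothetical) values in `SUBMIT_CONFIG/<default_OR_slurm>/config.yaml`:
```
cores: 64   # never more than 64 threads in use across all running jobs
jobs: 4     # never more than 4 jobs running at once, even if spare cores remain
```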
---
# Running GEP
Checklist before running:
- [x] Installed and activated the GEP Conda environment
- [x] Filled out the respective sample sheet correctly
- [x] Filled out the run configuration (i.e. providing a results folder and the path to the sample sheet) in [GEP/configuration/config.yaml](configuration/config.yaml)
- [x] Changed the maximum number of `cores` and `jobs`, as well as any other Snakemake-specific parameters you wish to use, in the file [GEP/SUBMIT_CONFIG/**default**/config.yaml](GEP/SUBMIT_CONFIG/default/config.yaml), or, for executing on an HPC (Slurm) infrastructure, the file [GEP/SUBMIT_CONFIG/**slurm**/config.yaml](GEP/SUBMIT_CONFIG/slurm/config.yaml)
- [x] Modified the per-job resources (i.e. threads, memory, and wall-time) in the file [GEP/configuration/define_resources.yaml](GEP/configuration/define_resources.yaml)
*For simplicity, make sure you are inside the main GEP folder before running*
For both local and HPC (Slurm) execution, it is good practice to run GEP as a *dry run* first, **before any full execution**. This will give you an idea of whether the sample sheet and `configuration/config.yaml` are filled out correctly (by checking whether the paths exist), and whether there is anything wrong with your installation.
```
snakemake --cores 1 --dry-run
```
### Default Execution
```
snakemake --profile SUBMIT_CONFIG/default/
```
### HPC (Slurm) Execution
```
snakemake --profile SUBMIT_CONFIG/slurm/
```
### Run in background
For those not aware, you can run GEP with `nohup` and `&` added to the command (both default and slurm). `nohup` will run the Snakemake command in a way that cannot be interrupted when you lose connection to the server/cluster. The trailing `&` will simply run the command in the background of your current terminal, allowing you to freely use the terminal instance in the meantime.
```
nohup snakemake --profile SUBMIT_CONFIG/slurm/ &
```
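By default, `nohup` writes the command's output to a `nohup.out` file in the directory you launched it from, so you can follow progress with, for example:
```
tail -f nohup.out
```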
---
## More Useful Tips
The user can directly override any of the Snakemake parameters (and paths) set in the respective configuration files through the command line when executing GEP. For example, to override the values in the sample configuration [GEP/configuration/config.yaml](configuration/config.yaml) for the path to the main results folder without having to edit the document itself:
```
snakemake --profile SUBMIT_CONFIG/default/ --config [Results=/Alternative/ResultsFolder]
```
Or to override the BUSCO lineage for example:
```
snakemake --profile SUBMIT_CONFIG/default/ --config [busco5Lineage=vertebrata]
```
To override values for maximum number of cores and jobs, as well as setting the flag for printing the shell commands (a Snakemake-specific parameter) in the submission configuration as found in [GEP/SUBMIT_CONFIG/default/config.yaml](GEP/SUBMIT_CONFIG/default/config.yaml):
```
snakemake --profile SUBMIT_CONFIG/default/ --cores 144 --jobs 20 --printshellcmds
```
The user can also run GEP only up to and including specific rules in the pipeline if they do not wish to run the full evaluation:
```
snakemake --profile SUBMIT_CONFIG/default/ --until <ruleName>
```
In such cases, all files that this rule depends on will be generated, but the final PDF reports will not be created.
---
# Results
## Folder Structure of results
### **Build Mode**
![](https://i.imgur.com/3Eaf9yR.png)
### **Evaluate Mode**
![](https://i.imgur.com/Xl1WGnC.png)
## PDF Report
**One page per-assembly for key results**
![](https://i.imgur.com/UCWXvXB.png)
- **A:** Name of the assembly and k-mer size used for databases.
- **B:** Key contiguity statistics. Full contiguity statistics can be viewed in the respective output files for that step.
- **C:** Genomescope Profiles (Untransformed Linear and Log versions).
- **D:** QV Score and K-mer Completeness tables.
- **E:** BUSCO results including name of BUSCO clade.
- **F:** K-mer multiplicity plots for primary assembly only (Top) and both primary and alternate assemblies together (Bottom). If no alternate assembly is provided, the same plot will be present in both positions.
---
Followed by **Up to six (2 by 3) Hi-C contact maps per page**
![](https://i.imgur.com/Oa9kFPi.png)
---
Followed by a page with **an aggregated table (coloured as a heatmap) for all assemblies evaluated in a given run**
![](https://i.imgur.com/jwL6Wgx.png)
---
Followed by a page with **an aggregated table coloured according to tiers, also for all assemblies evaluated in a given run**. The threshold values for the different tiers were loosely based on the Earth Biogenome Project's [TABLE OF PROPOSED EBP METRICS](https://www.earthbiogenome.org/assembly-standards).
![](https://i.imgur.com/3HyLfmY.png)
---
#### Tables in different formats
*The above tables can also be found in the results folder: `/<pathToResults>/1_evaluation/finalResults/tables/` in various formats (i.e. `.tsv`, `.md`, `.html`).*
---
#### All result files
All result files as output by each evaluation tool can also be found in their respective folders.
---
# How to choose your Illumina libraries
Variations in sequencing methods/protocols can lead to increased bias in the corresponding raw sequencing libraries. Sequencing of a biological sample may consist of both mate-pair/long-insert (e.g. insert sizes of 5 kb, 10 kb, 20 kb) and short-insert (e.g. insert sizes of 180, 250, 500, 800 bp) paired-end libraries.
Usually you can deduce the insert sizes and library types from the metadata found within the NCBI or SRA archives. In order to introduce as little bias as possible whilst maintaining decent coverage, you should ideally use only short-insert paired-end libraries for this evaluation pipeline.
---
## Citations
The software/tools used as part of our genome evaluation are as follows. Please see [GEP/envs](envs) for the exact versions of tools, as they may be updated frequently:
#### Pre-Processing:
* **Trimmomatic** (*Bolger, A. M., Lohse, M., & Usadel, B. (2014)*. http://www.usadellab.org/cms/?page=trimmomatic)
* **Trim_galore** (*Felix Krueger* bioinformatics.babraham.ac.uk)
* **Cutadapt** (*Martin, M. (2011)*. https://doi.org/10.14806/ej.17.1.200)
* **FastQC** (*Simon Andrews* https://github.com/s-andrews/FastQC)
* **MultiQC** (*Ewels, P., Magnusson, M., Lundin, S., Käller, M. (2016)*. https://doi.org/10.1093/bioinformatics/btw354)
#### Reference-free Genome Profiling
* **GenomeScope2** (*Ranallo-Benavidez, T.R., Jaron, K.S. & Schatz, M.C. (2020)* https://github.com/tbenavi1/genomescope2.0)
#### K-mer distribution analysis
* **meryl** and **merqury** (*Rhie, A., Walenz, B.P., Koren, S. et al. (2020)*. https://doi.org/10.1186/s13059-020-02134-9)
#### Assessing quality and annotation completeness with Benchmarking Universal Single-Copy Orthologs (BUSCOs)
* **BUSCOv5** (*Seppey M., Manni M., Zdobnov E.M. (2019)* https://busco.ezlab.org/)
#### Hi-C Alignment and Contact Maps
* **samtools** (*Danecek, P., (2021)* https://doi.org/10.1093/gigascience/giab008)
* **BWA** (*Li H. and Durbin R. (2009)* https://doi.org/10.1093/bioinformatics/btp324)
* **Pretext** (https://github.com/wtsi-hpag/PretextMap)
#### Generate PDF reports from markdown files
* **Pandoc** (https://pandoc.org/MANUAL.html)
#### Scaffold/contig statistics: N# and L# stats, scaffold metrics, sequence counts, GC content, estimated genome size
* **Python scripts** (*Mike Trizna. assembly_stats 0.1.4 (Version 0.1.4). Zenodo. (2020)*. http://doi.org/10.5281/zenodo.3968775) (this script was used as a foundation, being modified and improved upon for inclusion in GEP)
---
cores: 256
dry-run: False
use-conda: True
latency-wait: 60
rerun-incomplete: True
printshellcmds: False
keep-going: False
#use-singularity: False
# __default__:
# jobname: 'GEP.{rule}'
# partition: begendiv,main
# # nCPUs: "{threads}"
# qos: 'standard'
#
#### PRETEXT ####
unzipHiC_R1:
jobname: GEP.{rule}.{wildcards.asmID}
mem_mb: 500
time: "01:30:05"
threads: 4
# symlinkUnzippedHiC_R1:
# jobname: GEP.{rule}.{wildcards.asmID}
# mem_mb: 10
# time: "00:05:00"
# threads: 1
unzipHiC_R2:
jobname: GEP.{rule}.{wildcards.asmID}
mem_mb: 500
time: "01:30:05"
threads: 4
# symlinkUnzippedHiC_R2:
# jobname: GEP.{rule}.{wildcards.asmID}
# mem_mb: 10
# time: "00:05:00"
# threads: 1
pretext_index_PRI_asm:
jobname: GEP.{rule}.{wildcards.asmID}
mem_mb: 72000
time: "02:30:00"
threads: 1
pretext_fastq2bam_R1:
jobname: GEP.{rule}.{wildcards.asmID}
mem_mb: 32000
time: "08:00:00"
threads: 12
pretext_fastq2bam_R2:
jobname: GEP.{rule}.{wildcards.asmID}
mem_mb: 32000
time: "08:00:00"
threads: 12
pretext_filter_5primeEnd_R1:
jobname: GEP.{rule}.{wildcards.asmID}
mem_mb: 200
time: "08:00:00"
threads: 4
pretext_filter_5primeEnd_R2:
jobname: GEP.{rule}.{wildcards.asmID}
mem_mb: 200
time: "08:00:00"
threads: 4
pretext_filtered_combine:
jobname: GEP.{rule}.{wildcards.asmID}
mem_mb: 9500
time: "08:30:00"
threads: 12
pretext_map:
jobname: GEP.{rule}.{wildcards.asmID}
mem_mb: 32000
time: "08:30:00"
threads: 12
pretext_snapshot:
jobname: GEP.{rule}.{wildcards.asmID}
mem_mb: 500
time: "00:10:00"
threads: 1
# pretextMaps2md:
# jobname: GEP.{rule}.{wildcards.asmID}
# mem_mb: 500
# time: "00:10:00"
# threads: 1
unzipFasta_PRI:
jobname: GEP.{rule}.{wildcards.asmID}
mem_mb: 500
time: "00:15:00"
threads: 4
# symlinkUnzippedFasta_PRI:
# jobname: GEP.{rule}.{wildcards.asmID}
# mem_mb: 500
# time: "00:15:00"
# threads: 1
unzipFasta_ALT:
jobname: GEP.{rule}.{wildcards.asmID}
mem_mb: 500
time: "00:15:00"
threads: 4
# symlinkUnzippedFasta_ALT:
# jobname: GEP.{rule}.{wildcards.asmID}
# mem_mb: 500
# time: "00:05:05"
# threads: 1
merqury:
jobname: GEP.{rule}.{wildcards.asmID}
mem_mb: 40000
time: "08:00:00"
threads: 8
busco5:
jobname: GEP.{rule}.{wildcards.asmID}
mem_mb: 150000
time: "1-00:00:00"
threads: 16
# moveBuscoOutputs:
# jobname: GEP.{rule}.{wildcards.asmID}
# mem_mb: 500
# time: "00:05:05"
# threads: 1
genomescope2:
jobname: GEP.{rule}.{wildcards.asmID}.{wildcards.kmer}
mem_mb: 500
time: "00:15:00"
threads: 1
assemblyStats:
jobname: GEP.{rule}.{wildcards.asmID}
mem_mb: 500
time: "00:15:00"
threads: 1
# saveConfiguration_and_getKeyValues_kmer:
# jobname: GEP.{rule}.{wildcards.asmID}.{wildcards.kmer}
# mem_mb: 500
# time: "00:05:00"
# threads: 1
#
#
# saveConfiguration_and_getKeyValues:
# jobname: GEP.{rule}.{wildcards.asmID}
# mem_mb: 500
# time: "00:05:05"
# threads: 1
#
#
# aggregateAllAssemblies:
# jobname: GEP.{rule}
# mem_mb: 500
# time: "00:05:05"
# threads: 1
#
# makeReport:
# jobname: GEP.{rule}.{wildcards.asmID}.{wildcards.kmer}
# mem_mb: 500
# time: "00:05:05"
# threads: 1
#
#
# addFullTable:
# jobname: GEP.{rule}
# mem_mb: 500
# time: "00:05:05"
# threads: 1
#
# aggregateReport:
# jobname: GEP.{rule}
# mem_mb: 500
# time: "00:05:05"
# threads: 1
makePDF:
jobname: GEP.{rule}
mem_mb: 1000
time: "00:10:05"
threads: 1
###### Rules for illuminadb building ##########
unzipFastq_R1:
jobname: GEP.{rule}.{wildcards.sample}.{wildcards.readCounter}.{wildcards.trim10x}.{wildcards.trimAdapters}
mem_mb: 96000
time: "01:45:05"
threads: 4
unzipFastq_R2:
jobname: GEP.{rule}.{wildcards.sample}.{wildcards.readCounter}.{wildcards.trim10x}.{wildcards.trimAdapters}
mem_mb: 96000
time: "01:45:05"
threads: 4
# symlinkUnzippedFastq_R1:
# jobname: GEP.{rule}.{wildcards.sample}.{wildcards.readCounter}.{wildcards.trim10x}.{wildcards.trimAdapters}
# mem_mb: 500
# time: "00:01:35"
# threads: 1
#
# symlinkUnzippedFastq_R2:
# jobname: GEP.{rule}.{wildcards.sample}.{wildcards.readCounter}.{wildcards.trim10x}.{wildcards.trimAdapters}
# mem_mb: 500
# time: "00:01:35"
# threads: 1
# symLink_trim10xbarcodes_notrimAdapt:
# jobname: GEP.{rule}.{wildcards.sample}.{wildcards.readCounter}
# mem_mb: 500
# time: "00:01:35"
# threads: 1
#
# symlinks_no10xOrAdaptTrim:
# jobname: GEP.{rule}.{wildcards.sample}.{wildcards.readCounter}
# mem_mb: 500
# time: "00:01:35"
# threads: 1
#
#
# symlinks_no10xwithAdaptTrim:
# jobname: GEP.{rule}.{wildcards.sample}.{wildcards.readCounter}
# mem_mb: 500
# time: "00:01:35"
# threads: 1
#
#
# symlink_trim10xbarcodesR2:
# jobname: GEP.{rule}.{wildcards.sample}.{wildcards.readCounter}.{wildcards.trimAdapters}
# mem_mb: 500
# time: "00:01:35"
# threads: 1
trim10xbarcodes:
jobname: GEP.{rule}.{wildcards.sample}.{wildcards.readCounter}.{wildcards.trimAdapters}
mem_mb: 500
time: "02:30:05"
threads: 4
trimAdapters:
jobname: GEP.{rule}.{wildcards.sample}.{wildcards.readCounter}.{wildcards.trim10x}
mem_mb: 500
time: "02:30:05"
threads: 8
fastqc_Illumina:
jobname: GEP.{rule}.{wildcards.sample}.{wildcards.readCounter}.{wildcards.trimAdapters}.{wildcards.trim10x}
mem_mb: 1000
time: "04:00:05"
threads: 4
# multiqc_hifi:
# jobname: GEP.{rule}.{wildcards.sample}.{wildcards.trimAdapters}.{wildcards.trim10x}
# mem_mb: 500
# time: "01:00:05"
# threads: 1
meryl_R1:
jobname: GEP.{rule}.{wildcards.sample}.{wildcards.readCounter}.{wildcards.trimAdapters}.{wildcards.trim10x}.{wildcards.kmer}
mem_mb: 20000
time: "01:30:05"
threads: 12
meryl_R2:
jobname: GEP.{rule}.{wildcards.sample}.{wildcards.readCounter}.{wildcards.trimAdapters}.{wildcards.trim10x}.{wildcards.kmer}
mem_mb: 20000
time: "01:30:05"
threads: 12
meryl_illumina_build:
jobname: GEP.{rule}.{wildcards.sample}.{wildcards.trimAdapters}.{wildcards.trim10x}.{wildcards.kmer}
mem_mb: 30000
time: "02:45:05"
threads: 12
###### HIFI BUILD #######
unzipHifi:
jobname: GEP.{rule}.{wildcards.sample}.{wildcards.readCounter}.{wildcards.smrtornot}
mem_mb: 128000
time: "00:30:05"
# symlinkUnzippedHifi:
# jobname: GEP.{rule}.{wildcards.sample}.{wildcards.readCounter}.{wildcards.smrtornot}
# mem_mb: 500
# time: "00:05:05"
# symlinkfornotSmartTrimmed:
# jobname: GEP.{rule}.{wildcards.sample}.{wildcards.readCounter}
# mem_mb: 500
# time: "00:05:05"
fastqc_hifi:
jobname: GEP.{rule}.{wildcards.sample}.{wildcards.readCounter}.{wildcards.smrtornot}
mem_mb: 12000
time: "04:00:05"
multiqc_hifi:
jobname: GEP.{rule}.{wildcards.sample}.{wildcards.readCounter}.{wildcards.smrtornot}
mem_mb: 4000
time: "01:15:05"
meryl_hifi_count:
jobname: GEP.{rule}.{wildcards.sample}.{wildcards.readCounter}.{wildcards.smrtornot}.{wildcards.kmer}
mem_mb: 96000
time: "02:30:05"
meryl_hifi_build:
jobname: GEP.{rule}.{wildcards.sample}.{wildcards.smrtornot}.{wildcards.kmer}
mem_mb: 96000
time: "01:30:05"
trimSMRTbell:
jobname: GEP.{rule}.{wildcards.sample}.{wildcards.readCounter}
mem_mb: 96000
time: "01:30:05"
### SBATCH JOB THAT WILL BE SUBMITTED FOR EACH RULE IN THE PIPELINE ###
cluster:
mkdir -p slurm_logs/{rule} &&
sbatch
--partition={resources.partition}
--qos={resources.qos}
--cpus-per-task={threads}
--mem={resources.mem_mb}
--job-name=GEP.{rule}.%j
--output=slurm_logs/{rule}/%j.out
--error=slurm_logs/{rule}/%j.err
--time={resources.time}
### DEFAULT RESOURCES THAT WILL BE USED IN THE ABOVE SBATCH COMMAND IF NOT DEFINED IN '../../../GEP/configuration/define_resources.yaml' ###
default-resources:
- partition=begendiv,main
- qos=standard
# - mem_mb=100000
# - time="3-00:00:00"
# - nodes=1
### SNAKEMAKE ARGUMENTS ###
# restart-times: 3
# max-jobs-per-second: 100
#max-status-checks-per-second: 10
cores: 256
jobs: 100
use-conda: True
keep-going: False
rerun-incomplete: True
printshellcmds: False
scheduler: greedy
latency-wait: 60
Snakefile 0 → 100644
import numpy as np
from itertools import groupby
import json
import pandas as pd
import yaml
import os
import sys
import csv
from urllib.request import urlopen
from bs4 import BeautifulSoup
import argparse, subprocess
#temporary
from make_erga_assembly_report import create_yaml_file
container: "docker://gepdocker/gep_dockerimage:latest"
configfile: "configuration/config.yaml"
### LOAD IN RESOURCES CONFIG ###
with open(config['resources'], 'r') as f:
resource = yaml.safe_load(f)
### GET BASENAME FOR READS WILDCARDS ###
# def getBasename4Reads(path):
# base=os.path.basename(path)
# return os.path.splitext(base)[0]
### CHECK IF INPUT IS GZIPPED ###
def gzipped_or_not(path):
trueORfalse=path.endswith('.gz')
return trueORfalse
### CHECK IF HIC AND/OR ALT_ASM FILES ARE GIVEN, OR VALUE IS NONE ###
def FILE_present_or_not(path):
if path == 'None':
return False
return True
### CHECK IF GENOME_SIZE IS PROVIDED, OR VALUE IS AUTO ###
def genomeSize_auto_or_not(given_size):
if given_size == 'auto':
return 0
return given_size
# def to_be_smrtTrimmed(userAnswer):
# if userAnswer == 'True':
# string2Add="smrtTrimmed"
# elif userAnswer == 'False':
# string2Add="notsmrtTrimmed"
samples = pd.read_csv(config['samplesTSV'], dtype=str, index_col=False, delim_whitespace=True, skip_blank_lines=True)
# print(samples)
if set(['sample', 'Library_R1', 'Library_R2', 'meryl_kmer_size', 'trim10X', 'trimAdapters', 'fastQC']).issubset(samples.columns):
whichRule = "rules/build_illumina.smk"
# samples=samples.reset_index()
samples['readCounter'] = samples.groupby(['sample']).cumcount()+1
samples['readCounter'] = samples['readCounter'].astype(str)
samples['readCounter'] = samples['sample'] + "_LibraryPair" + samples['readCounter']
samples['10xtrimorNot'] = np.where(samples['trim10X'] == "True", "10xTrimmed", "not10xTrimmed")
samples['AdpaterTrimorNot'] = np.where(samples['trimAdapters'] == "True", "AdaptTrimmed", "notAdaptTrimmed")
dictSamples=samples[['sample','meryl_kmer_size', '10xtrimorNot','AdpaterTrimorNot']]
dictSamples=dictSamples.set_index(['sample']).T.to_dict('list')
testDictQC=samples[['sample', '10xtrimorNot', 'AdpaterTrimorNot', 'fastQC']]
testDictQC = testDictQC[testDictQC['fastQC'] == "True"]
testDictQC=testDictQC.drop_duplicates('sample', keep='first')
testDictQC=testDictQC.set_index(['sample']).T.to_dict('list')
trimAdapters = samples[samples['trimAdapters'] == "True"]
dictReadCounter = {}
for i in samples['sample'].unique():
dictReadCounter[i] = [samples['readCounter'][j] for j in samples[samples['sample']==i].index]
# dkmerSize = {}
samples['gzipped_R1']=samples['Library_R1'].apply(gzipped_or_not)
samples['gzipped_R2']=samples['Library_R2'].apply(gzipped_or_not)
noGzip_R1 = samples[samples['gzipped_R1'] == False]
noGzip_R1=noGzip_R1.set_index(['sample','readCounter'])
noGzip_R2 = samples[samples['gzipped_R2'] == False]
noGzip_R2=noGzip_R2.set_index(['sample','readCounter'])
yesGzip_R1 = samples[samples['gzipped_R1'] == True]
yesGzip_R1=yesGzip_R1.set_index(['sample','readCounter'])
yesGzip_R2 = samples[samples['gzipped_R2'] == True]
yesGzip_R2=yesGzip_R2.set_index(['sample','readCounter'])
samples=samples.set_index(['sample','readCounter'])
ruleAllQCFiles=[]
if samples['fastQC'].str.contains('True').any():
ruleAllQCFiles=[expand(os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/QC/multiqc/{sample}.multiqcReport.html"), sample=key) for key, [value1, value2, value3] in testDictQC.items()]
ruleAll=[expand(os.path.join(config['Results'], "0_buildDatabases/{sample}/illuminaReads/merylDb/complete_illumina.{sample}.{kmer}.meryl"), sample=key, kmer=value1) for key, [value1, value2, value3] in dictSamples.items()]
elif set(['sample', 'hifi_reads', 'meryl_kmer_size','trimSMRTbell', 'fastQC']).issubset(samples.columns):
whichRule = "rules/build_hifi.smk"
# samples=samples.reset_index()
samples['readCounter'] = samples.groupby(['sample']).cumcount()+1
samples['readCounter'] = samples['readCounter'].astype(str)
samples['readCounter'] = samples['sample'] + "_Library" + samples['readCounter']
samples['smrtornot'] = np.where(samples['trimSMRTbell'] == "True", "smrtTrimmed", "notsmrtTrimmed")
dictSamples=samples[['sample','meryl_kmer_size', 'smrtornot']]
dictSamples=dictSamples.drop_duplicates('sample', keep='first')
dictSamples=dictSamples.set_index(['sample']).T.to_dict('list')
testDictQC=samples[['sample', 'smrtornot', 'fastQC']]
testDictQC = testDictQC[testDictQC['fastQC'] == "True"]
testDictQC=testDictQC.set_index(['sample']).T.to_dict('list')
dictReadCounter = {}
for i in samples['sample'].unique():
dictReadCounter[i] = [samples['readCounter'][j] for j in samples[samples['sample']==i].index]
samples['gzipped_hifi']=samples['hifi_reads'].apply(gzipped_or_not)
noGzip_hifi = samples[samples['gzipped_hifi'] == False]
noGzip_hifi=noGzip_hifi.set_index(['sample','readCounter'])
yesGzip_hifi = samples[samples['gzipped_hifi'] == True]
yesGzip_hifi=yesGzip_hifi.set_index(['sample','readCounter'])
samples=samples.set_index(['sample','readCounter'])
ruleAllQCFiles=[]
if samples['fastQC'].str.contains('True').any():
ruleAllQCFiles=[expand(os.path.join(config['Results'],"0_buildDatabases/{sample}/hifiReads/QC/multiqc/{sample}.multiqcReport.html"), sample=key) for key, [value1, value2] in testDictQC.items()]
ruleAll=[expand(os.path.join(config['Results'],"0_buildDatabases/{sample}/hifiReads/merylDb/complete_hifi.{sample}.{kmer}.meryl"), sample=key, kmer=value1) for key, [value1, value2] in dictSamples.items()]
elif set(['ID', 'ASM_LEVEL', 'busco_lineage', 'PRI_asm', 'ALT_asm', 'merylDB', 'merylDB_kmer', 'genomeSize', 'HiC_R1', 'HiC_R2']).issubset(samples.columns):
whichRule = "rules/evaluate.smk"
samples['genomeSize']=samples['genomeSize'].apply(genomeSize_auto_or_not)
### make new column for whether or not path/file given is gzipped (True or False)
samples['gzipped_PRI']=samples['PRI_asm'].apply(gzipped_or_not)
samples['gzipped_ALT']=samples['ALT_asm'].apply(gzipped_or_not)
samples['gzipped_HiC_R1']=samples['HiC_R1'].apply(gzipped_or_not)
samples['gzipped_HiC_R2']=samples['HiC_R2'].apply(gzipped_or_not)
### new column for whether or not alt asm is provided or not
samples['ALT_present']=samples['ALT_asm'].apply(FILE_present_or_not)
samples['HiC_R1_present']=samples['HiC_R1'].apply(FILE_present_or_not)
samples['HiC_R2_present']=samples['HiC_R2'].apply(FILE_present_or_not)
samples['merylDB_present']=samples['merylDB'].apply(FILE_present_or_not)
testDictPRETEXT = samples[['ID', 'HiC_R1_present', 'HiC_R2_present']]
testDictPRETEXT = testDictPRETEXT[(testDictPRETEXT['HiC_R1_present'] == True) & (testDictPRETEXT['HiC_R2_present'] == True)]
testDictPRETEXT=testDictPRETEXT.set_index(['ID']).T.to_dict('list')
noGzip_HiC_R1 = samples[samples['gzipped_HiC_R1'] == False]
noGzip_HiC_R1 = noGzip_HiC_R1.set_index(['ID'])
noGzip_HiC_R2 = samples[samples['gzipped_HiC_R2'] == False]
noGzip_HiC_R2 = noGzip_HiC_R2.set_index(['ID'])
yesGzip_HiC_R1 = samples[samples['gzipped_HiC_R1'] == True]
yesGzip_HiC_R1 = yesGzip_HiC_R1.set_index(['ID'])
yesGzip_HiC_R2 = samples[samples['gzipped_HiC_R2'] == True]
yesGzip_HiC_R2 = yesGzip_HiC_R2.set_index(['ID'])
noGzip_asm1 = samples[samples['gzipped_PRI'] == False]
noGzip_asm1 =noGzip_asm1.set_index(['ID'])
yesGzip_asm1 = samples[samples['gzipped_PRI'] == True]
yesGzip_asm1 = yesGzip_asm1.set_index(['ID'])
noGzip_asm2 = samples[samples['gzipped_ALT'] == False]
noGzip_asm2 = noGzip_asm2[noGzip_asm2['ALT_present'] == True]
noGzip_asm2 = noGzip_asm2.set_index(['ID'])
noGzip_asm2Dict=noGzip_asm2.T.to_dict('list')
no_asm2 = samples[samples['gzipped_ALT'] == False]
no_asm2 = no_asm2[no_asm2['ALT_present'] == False]
no_asm2 = no_asm2.set_index(['ID'])
no_asm2Dict=no_asm2.T.to_dict('list')
yesGzip_asm2 = samples[samples['gzipped_ALT'] == True]
yesGzip_asm2=yesGzip_asm2.set_index(['ID'])
samples=samples.set_index(['ID'])
dictSamples=samples.T.to_dict('list')
samples['merylDB_basename'] = samples['merylDB'].apply(lambda path: os.path.basename(path))
#warning: check that unique merylDB path point to unique names for the meryl database
if len(samples["merylDB_basename"].unique()) != len(samples["merylDB"].unique()):
print ("WARNING: please check that each meryl database is named differently")
if config['EAR']:
if len(samples["busco_lineage"].unique()) != 1: # when EAR runs, only one busco lineage should be given
print ("check tsv file, multiple busco lineages inserted, all rows should point to the same busco lineage")
if len(samples["merylDB"].unique()) != 1: # when EAR runs, only one merylDB should be given
print ("check tsv file, multiple merylDB inserted, all rows should point to the same database")
ruleAllQCFiles=[]
ruleAll=os.path.join(config['Results'],"1_evaluation/finalResults/GEP_FINAL_REPORT.pdf"), \
os.path.join(config['Results'],"1_evaluation/finalResults/EAR_report.yaml")
else:
ruleAllQCFiles=[]
ruleAll=os.path.join(config['Results'],"1_evaluation/finalResults/GEP_FINAL_REPORT.pdf")
### DOWNLOAD BUSCO LINEAGE (IF IT DOESN'T ALREADY EXIST) ####
busco5Lineages = samples['busco_lineage']
args_o=os.path.join(workflow.basedir, "buscoLineage")
args_l=list(set(busco5Lineages))
# print(args_l)
for i in args_l:
checkLineagePath=args_o + "/" + i + "_odb10"
checkLineageDled=args_o + "/" + i
outDir=args_o
if os.path.isdir(i) is True:
# subprocess.call("ln -sf %s %s"%(args.l, args.l), shell=True)
buscoDataBaseName=os.path.basename(i)
buscoDataBaseName=buscoDataBaseName[:-6]
# print("Lineage path given, basename is:", buscoDataBaseName)
elif os.path.isdir(i) is False and os.path.isdir(checkLineagePath) is True:
# subprocess.call("ln -sf %s %s"%(checkLineagePath, checkLineageDled), shell=True)
buscoDataBaseName=os.path.basename(checkLineagePath)
buscoDataBaseName=buscoDataBaseName[:-6]
# print("Database already in buscoLineage directory, basename is:", buscoDataBaseName)
else:
# print("Database will be downloaded")
url = "https://busco-data.ezlab.org/v5/data/lineages/"
html = urlopen(url).read()
soup = BeautifulSoup(html, features="html.parser")
# kill all script and style elements
for script in soup(["script", "style"]):
script.extract() # rip it out
# get text
text = soup.get_text()
# break into lines and remove leading and trailing space on each
lines = (line.strip() for line in text.splitlines())
linID=None
#identify the lineage file
for line in text.splitlines():
if line.startswith(i):
# print(line)
linID=line.split(" ")[0]
# print(linID)
break
if not linID==None:
linLink=url+linID
# print(linLink)
print('Downloading Busco Database:', i)
subprocess.run("wget -q -P %s %s"%(outDir, linLink), shell=True, check=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
subprocess.run("tar -xvf %s*.tar.gz -C %s"%(args_o + "/" + i, outDir), shell=True, check=True,stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
subprocess.run("rm %s_*.tar.gz"%(checkLineageDled), shell=True, check=True)
buscoDataBaseName=i
print('Downloading Busco Database:', i, " - COMPLETE")
else:
raise ValueError("Error - could not identify lineage please check busco site for a correct prefix")
# busco5Lineages = samples['busco_lineage']
else:
raise ValueError('Sample Sheet not recognised. Please make sure you are using the correct sample sheet')
if "Results" not in config:
config["Results"] = "results"
include: whichRule
final_target_outputs = ruleAllQCFiles, ruleAll
rule all:
input:
final_target_outputs
### PATH TO WHERE YOU WANT YOUR RESULTS FOR THIS RUN STORED - WILL BE AUTOMATICALLY CREATED IF IT DOESN'T EXIST ###
Results: "/scratch/galeov97/begendiv/pilot/results5"
### FULL PATH TO YOUR SAMPLESHEET ###
samplesTSV: "/scratch/galeov97/begendiv/pilot/runEvaltest_nohic.tsv"
### PATH TO DEFINE RESOURCES FILE - DO NOT CHANGE UNLESS YOU WISH TO USE IN A DIFFERENT LOCATION ####
resources: "configuration/define_resources.yaml"
### With EAR=True the ERGA Assembly Report is automatically generated from the analysis
EAR: False
smudgeplot: True
# __default__:
# jobname: 'GEP.{rule}'
# partition: begendiv,main
# # nCPUs: "{threads}"
# qos: 'standard'
#
#### PRETEXT ####
unzipFastq_R1_HiC:
mem_mb: 96000
time: "12:00:00"
threads: 4
unzipFastq_R2_HiC:
mem_mb: 96000
time: "12:00:00"
threads: 4
indexFasta_PRI:
mem_mb: 72000
time: "06:00:00"
threads: 1
convertFastqTObam_R1_HiC:
mem_mb: 32000
time: "24:00:00"
threads: 12
convertFastqTObam_R2_HiC:
mem_mb: 32000
time: "24:00:00"
threads: 12
filter5PrimeEnd_R1_HiC:
mem_mb: 2000
time: "24:00:00"
threads: 4
filter5PrimeEnd_R2_HiC:
mem_mb: 2000
time: "24:00:00"
threads: 4
pairAndCombineFiltered_HiC:
mem_mb: 9500
time: "24:00:00"
threads: 12
pretextMap:
mem_mb: 32000
time: "24:00:00"
threads: 12
pretextSnapshot:
mem_mb: 1000
time: "02:00:00"
threads: 1
unzipFasta_PRI:
mem_mb: 5000
time: "12:00:00"
threads: 4
unzipFasta_ALT:
mem_mb: 1000
time: "12:00:00"
threads: 4
merylHistogram:
mem_mb: 1000
time: "02:00:00"
threads: 1
merqury:
mem_mb: 40000
time: "08:00:00"
threads: 8
smudgeplot:
mem_mb: 40000
time: "03:00:00"
threads: 12
busco5:
mem_mb: 128000
time: "1-12:00:00"
threads: 16
GenomeScope2Profiles:
mem_mb: 1000
time: "02:00:00"
threads: 1
gfastatsResults:
mem_mb: 2000
time: "02:00:00"
threads: 8
###### Rules for illuminadb building ##########
unzipFastq_R1_illumina:
mem_mb: 96000
time: "12:00:00"
threads: 4
unzipFastq_R2_illumina:
mem_mb: 96000
time: "12:00:00"
threads: 4
trim10xBarcodes_illumina:
mem_mb: 5000
time: "03:00:00"
threads: 4
trimSequencingAdapters_illumina:
mem_mb: 5000
time: "12:00:00"
threads: 8
fastQC_illumina:
mem_mb: 1000
time: "12:00:00"
threads: 4
merylCount_R1_illumina:
mem_mb: 20000
time: "12:00:00"
threads: 12
merylCount_R2_illumina:
mem_mb: 20000
time: "12:00:00"
threads: 12
merylUnion_illumina:
mem_mb: 30000
time: "12:00:00"
threads: 12
###### HIFI BUILD #######
unzipFastq_hifi:
mem_mb: 128000
time: "12:00:00"
threads: 4
trimSMRTBellAdapters_hifi:
mem_mb: 96000
time: "12:00:00"
threads: 4
fastQC_hifi:
mem_mb: 12000
time: "12:00:00"
threads: 2
multiQC_hifi:
mem_mb: 4000
time: "06:00:00"
threads: 2
merylCount_hifi:
mem_mb: 96000
time: "12:00:00"
threads: 12
merylUnion_hifi:
mem_mb: 96000
time: "12:00:00"
threads: 12
sample hifi_reads meryl_kmer_size trimSMRTbell fastQC
organismX /<pathto>/organismX_Hifi_Library1.fq 31 False True
organismX /<pathto>/organismX_Hifi_Library2.fq 31 False True
sample Library_R1 Library_R2 meryl_kmer_size trim10x trimAdapters fastQC
organismX /<pathto>/organismX_Library1_R1.fq.gz /<pathto>/organismX_Library1_R2.fq.gz 21 False False True
organismX /<pathto>/organismX_Library2_R1.fq.gz /<pathto>/organismX_Library2_R2.fq.gz 21 False False True
organismY /<pathto>/organismY_Library1_R1.fastq /<pathto>/organismY_Library1_R2.fastq 21 True True True
organismY /<pathto>/organismY_Library2_R1.fastq /<pathto>/organismY_Library2_R2.fastq 21 True True True
organismY /<pathto>/organismY_Library3_R1.fastq /<pathto>/organismY_Library3_R2.fastq 21 True True True
ID ASM_LEVEL busco_lineage PRI_asm ALT_asm merylDB merylDB_kmer genomeSize HiC_R1 HiC_R2
AssemblyX.illu chr <busco_db> /<pathto>/speciesX.fasta None /<pathTo>/speciesX.illuminaDB.meryl 21 auto None None
AssemblyX.HiFi chr <busco_db> /<pathto>/speciesX.fasta None /<pathTo>/speciesX.hifiDB.meryl 31 auto None None
AssemblyY.illu scaff <busco_db> /<pathto>/speciesY.Primary.fa /<pathto>/speciesY.Alternate.fa /<pathTo>/speciesY.illuminaDB.meryl 21 1050000 None None
channels:
- conda-forge
- bioconda
- anaconda
dependencies:
- tabulate=0.8.7
- beautifulsoup4=4.9
- pandoc=2.15.*
- tectonic
- wkhtmltopdf
- pandas
- numpy
- ghostscript
- python=3.9.*
channels:
- conda-forge
- bioconda
dependencies:
- python >=3.10.4
- r-argparse == 2.1.5
- r-base >=4.1.3
- r-minpack.lm == 1.2_2
- genomescope2 == 2.0
- r-dplyr==1.0.9
- r-formattable==0.2.1
- gfastats
channels:
- conda-forge
- bioconda
- anaconda
- defaults
dependencies:
- augustus >=3.3
- biopython
- blast >=2.10.1
- fonts-conda-ecosystem
- hmmer >=3.1b2
- metaeuk
- pandas
- prodigal
- python >=3.3
- r-base
- r-ggplot2 >=2.2.1
- sepp >=4.3.10
- busco == 5.4.7
channels:
- anaconda
- bioconda
- conda-forge
dependencies:
- pretextmap == 0.1.9
- pretextsnapshot == 0.0.4
- bwa-mem2 == 2.2.1
- samtools == 1.14
channels:
- bioconda
- defaults
- conda-forge
dependencies:
- merqury == 1.3
- bedtools == 2.29.2
- meryl == 1.3
- cairo == 1.16
- openjdk == 11.0.13
- r-argparse == 2.1.5
- r-base >=4.0,<4.1.0a0
- r-ggplot2=3.3.2=r40hc72bb7e_1
- r-scales == 1.2.0
- samtools == 1.15.1
- xorg-libxrender == 0.9.10
- xorg-libxext == 1.3.4
- xorg-libxau == 1.0.9
name: smudgeplot_testt
channels:
- conda-forge
- bioconda
- defaults
dependencies:
- r-base
- r-ggplot2
- merqury == 1.3
- bedtools == 2.29.2
- meryl == 1.3
- smudgeplot == 0.2.5
channels:
- anaconda
- conda-forge
- bioconda
dependencies:
- openssl==3.0.5
- python=3.10.5=ha86cf86_0_cpython
- pigz=2.6
- trim-galore=0.6.7
- pandoc=2.11.4
- multiqc=1.13
- trimmomatic=0.39
name: GEP
channels:
- conda-forge
- bioconda
- anaconda
dependencies:
- snakemake==7.30.2
- beautifulsoup4==4.9.3
- pandas==1.4.2
- numpy==1.22.4
- python>=3.10
- tabulate==0.8.10
localrules: symlink_UnzippedFastq_hifi, \
symlink_noSMRTBellAdaptTrim_hifi, \
multiQC_hifi
# def fq_to_trimSMRTbell(wildcards):
# return trimSMRTbell.loc[(wildcards.sample, wildcards.readCounter), "hifi_reads"]
#
#
# def fq_to_notTrimSMRTbell(wildcards):
# return notrimSMRTbell.loc[(wildcards.sample, wildcards.readCounter), "hifi_reads"]
def hifi_gzipped(wildcards):
return yesGzip_hifi.loc[(wildcards.sample, wildcards.readCounter), "hifi_reads"]
def hifi_notgzipped(wildcards):
return noGzip_hifi.loc[(wildcards.sample, wildcards.readCounter), "hifi_reads"]
rule unzipFastq_hifi:
input:
fastq=hifi_gzipped,
output:
os.path.join(config['Results'], "0_buildDatabases/{sample}/hifiReads/temp_unzipFastqs/{readCounter}.{smrtornot}.fastq"),
log:
os.path.join(config['Results'], "0_buildDatabases/{sample}/hifiReads/logs/pigzUnzip.{readCounter}.{smrtornot}.log")
conda:
os.path.join(workflow.basedir, "envs/UNZIP_and_QC.yaml")
threads:
resource['unzipFastq_hifi']['threads']
resources:
mem_mb=resource['unzipFastq_hifi']['mem_mb'],
time=resource['unzipFastq_hifi']['time'],
shell:
"""
pigz -p {threads} -c -d -k {input.fastq} > {output} 2> {log}
"""
rule symlink_UnzippedFastq_hifi:
input:
fastq=hifi_notgzipped,
output:
os.path.join(config['Results'], "0_buildDatabases/{sample}/hifiReads/temp_unzipFastqs/{readCounter}.{smrtornot}.fastq"),
log:
os.path.join(config['Results'], "0_buildDatabases/{sample}/hifiReads/logs/pigzUnzip.{readCounter}.{smrtornot}.log")
container:
None
shell:
"""
ln -s {input.fastq} {output}
echo "{input.fastq} not gzipped. Symlink created in place of expected decompressed file." > {log}
"""
rule trimSMRTBellAdapters_hifi:
input:
os.path.join(config['Results'], "0_buildDatabases/{sample}/hifiReads/temp_unzipFastqs/{readCounter}.smrtTrimmed.fastq"),
output:
outputFile=os.path.join(config['Results'],"0_buildDatabases/{sample}/hifiReads/temp_trimReads/{readCounter}.smrtTrimmed.fastq")
threads:
resource['trimSMRTBellAdapters_hifi']['threads']
resources:
mem_mb=resource['trimSMRTBellAdapters_hifi']['mem_mb'],
time=resource['trimSMRTBellAdapters_hifi']['time'],
log:
os.path.join(config['Results'], "0_buildDatabases/{sample}/hifiReads/logs/trimSMRTbell.{readCounter}.log")
priority:
15
conda:
os.path.join(workflow.basedir, "envs/UNZIP_and_QC.yaml")
shell:
"""
(cutadapt -j {threads} -o {output.outputFile} {input} -b AAAAAAAAAAAAAAAAAATTAACGGAGGAGGAGGA --overlap 35 -b ATCTCTCTCTTTTCCTCCTCCTCCGTTGTTGTTGTTGAGAGAGAT --overlap 45 --revcomp -e 0.1 --discard-trimmed) &> {log}
"""
rule symlink_noSMRTBellAdaptTrim_hifi:
input:
os.path.join(config['Results'], "0_buildDatabases/{sample}/hifiReads/temp_unzipFastqs/{readCounter}.notsmrtTrimmed.fastq"),
output:
outputFile=os.path.join(config['Results'],"0_buildDatabases/{sample}/hifiReads/temp_trimReads/{readCounter}.notsmrtTrimmed.fastq")
container:
None
shell:
"""
ln -s {input} {output.outputFile}
"""
rule fastQC_hifi:
input:
os.path.join(config['Results'],"0_buildDatabases/{sample}/hifiReads/temp_trimReads/{readCounter}.{smrtornot}.fastq")
params:
folder2out=os.path.join(config['Results'],"0_buildDatabases/{sample}/hifiReads/QC/fastqc")
output:
os.path.join(config['Results'],"0_buildDatabases/{sample}/hifiReads/QC/fastqc/{readCounter}.{smrtornot}_fastqc.html")
threads:
resource['fastQC_hifi']['threads']
resources:
mem_mb=resource['fastQC_hifi']['mem_mb'],
time=resource['fastQC_hifi']['time'],
log:
os.path.join(config['Results'], "0_buildDatabases/{sample}/hifiReads/logs/fastQC.hifi.{readCounter}.{smrtornot}.log")
conda:
os.path.join(workflow.basedir, "envs/UNZIP_and_QC.yaml")
shell:
"""
(fastqc {input} -o {params.folder2out} -t {threads}) &> {log}
"""
rule multiQC_hifi:
input:
lambda wildcards: expand(os.path.join(config['Results'],"0_buildDatabases/{sample}/hifiReads/QC/fastqc/{readCounter}.{smrtornot}_fastqc.html"), sample=wildcards.sample, readCounter=dictReadCounter[wildcards.sample], smrtornot=dictSamples[wildcards.sample][1])
params:
folder2qc=os.path.join(config['Results'],"0_buildDatabases/{sample}/hifiReads/QC/fastqc/"),
folder2OUT=os.path.join(config['Results'],"0_buildDatabases/{sample}/hifiReads/QC/multiqc/"),
filename="{sample}.multiqcReport.html"
output:
os.path.join(config['Results'],"0_buildDatabases/{sample}/hifiReads/QC/multiqc/{sample}.multiqcReport.html")
log:
os.path.join(config['Results'], "0_buildDatabases/{sample}/hifiReads/logs/multiQC.{sample}.log")
conda:
os.path.join(workflow.basedir, "envs/UNZIP_and_QC.yaml")
shell:
"(multiqc {params.folder2qc} -o {params.folder2OUT} -n {params.filename}) &> {log}"
rule merylCount_hifi:
input:
reads=os.path.join(config['Results'],"0_buildDatabases/{sample}/hifiReads/temp_trimReads/{readCounter}.{smrtornot}.fastq")
params:
kmer = "{kmer}"
threads:
resource['merylCount_hifi']['threads']
resources:
mem_mb=resource['merylCount_hifi']['mem_mb'],
time=resource['merylCount_hifi']['time'],
output:
temp(directory(os.path.join(config['Results'], "0_buildDatabases/{sample}/hifiReads/merylDb/" + "{readCounter}" + "_hifi_dB.{smrtornot}.{kmer}.meryl"))),
log:
os.path.join(config['Results'], "0_buildDatabases/{sample}/hifiReads/logs/meryl_hifi_count.{readCounter}.{kmer}.{smrtornot}.log")
priority:
10
conda:
os.path.join(workflow.basedir, "envs/MERYL_MERQURY.yaml")
shell:
"""
(meryl count k={params.kmer} threads={threads} {input.reads} output {output}) &> {log}
"""
rule merylUnion_hifi:
input:
lambda wildcards: expand(os.path.join(config['Results'],"0_buildDatabases/{sample}/hifiReads/merylDb/{readCounter}_hifi_dB.{smrtornot}.{kmer}.meryl/"), sample=wildcards.sample, readCounter=dictReadCounter[wildcards.sample], kmer=dictSamples[wildcards.sample][0], smrtornot=dictSamples[wildcards.sample][1])
params:
kmer = "{kmer}",
removeReadDIR_trimmed=os.path.join(config['Results'],"0_buildDatabases/{sample}/hifiReads/temp_trimReads/"),
removeReadDIR_unzipped=os.path.join(config['Results'],"0_buildDatabases/{sample}/hifiReads/temp_unzipFastqs/")
threads:
resource['merylUnion_hifi']['threads']
resources:
mem_mb=resource['merylUnion_hifi']['mem_mb'],
time=resource['merylUnion_hifi']['time'],
output:
directory(os.path.join(config['Results'], "0_buildDatabases/{sample}/hifiReads/merylDb/complete_hifi.{sample}.{kmer}.meryl")),
log:
os.path.join(config['Results'], "0_buildDatabases/{sample}/hifiReads/logs/meryl_hifi_combine.{sample}.{kmer}.log")
priority:
10
conda:
os.path.join(workflow.basedir, "envs/MERYL_MERQURY.yaml")
shell:
"""
(meryl union-sum {input} output {output}) &> {log}
rm -r {params.removeReadDIR_trimmed}
rm -r {params.removeReadDIR_unzipped}
"""
localrules: symlink_UnzippedFastq_R1_illumina,\
symlink_UnzippedFastq_R2_illumina, \
symLink_Trim10xBarcodes_noSequencingAdaptTrim_illumina, \
symlink_No10xWithSequencingAdaptTrim_illumina, \
symlink_No10xOrSequencingAdaptTrim_illumina, \
symlink_Trim10xBarcodes_R2_illumina, \
multiQC_illumina
def R1_gzipped(wildcards):
return yesGzip_R1.loc[(wildcards.sample, wildcards.readCounter), "Library_R1"]
def R1_notgzipped(wildcards):
return noGzip_R1.loc[(wildcards.sample, wildcards.readCounter), "Library_R1"]
def R2_gzipped(wildcards):
return yesGzip_R2.loc[(wildcards.sample, wildcards.readCounter), "Library_R2"]
def R2_notgzipped(wildcards):
return noGzip_R2.loc[(wildcards.sample, wildcards.readCounter), "Library_R2"]
rule unzipFastq_R1_illumina:
input:
assembly=R1_gzipped,
output:
os.path.join(config['Results'], "0_buildDatabases/{sample}/illuminaReads/temp_unzipFastqs/{readCounter}.{trim10x}.{trimAdapters}_R1.fastq"),
log:
os.path.join(config['Results'], "0_buildDatabases/{sample}/illuminaReads/logs/{readCounter}.{trim10x}.{trimAdapters}_R1_pigzUnzip.log"),
conda:
os.path.join(workflow.basedir, "envs/UNZIP_and_QC.yaml")
threads:
resource['unzipFastq_R1_illumina']['threads']
resources:
mem_mb=resource['unzipFastq_R1_illumina']['mem_mb'],
time=resource['unzipFastq_R1_illumina']['time']
shell:
"""
pigz -p {threads} -c -d -k {input.assembly} > {output} 2> {log}
"""
rule symlink_UnzippedFastq_R1_illumina:
input:
assembly=R1_notgzipped,
output:
os.path.join(config['Results'], "0_buildDatabases/{sample}/illuminaReads/temp_unzipFastqs/{readCounter}.{trim10x}.{trimAdapters}_R1.fastq"),
container:
None
shell:
"""
ln -s {input} {output}
"""
rule unzipFastq_R2_illumina:
input:
assembly=R2_gzipped,
output:
os.path.join(config['Results'], "0_buildDatabases/{sample}/illuminaReads/temp_unzipFastqs/{readCounter}.{trim10x}.{trimAdapters}_R2.fastq"),
log:
os.path.join(config['Results'], "0_buildDatabases/{sample}/illuminaReads/logs/{readCounter}.{trim10x}.{trimAdapters}_R2_pigzUnzip.log")
conda:
os.path.join(workflow.basedir, "envs/UNZIP_and_QC.yaml")
threads:
resource['unzipFastq_R2_illumina']['threads']
resources:
mem_mb=resource['unzipFastq_R2_illumina']['mem_mb'],
time=resource['unzipFastq_R2_illumina']['time']
shell:
"""
pigz -p {threads} -c -d -k {input.assembly} > {output} 2> {log}
"""
rule symlink_UnzippedFastq_R2_illumina:
input:
assembly=R2_notgzipped,
output:
os.path.join(config['Results'], "0_buildDatabases/{sample}/illuminaReads/temp_unzipFastqs/{readCounter}.{trim10x}.{trimAdapters}_R2.fastq"),
container:
None
shell:
"""
ln -s {input} {output}
"""
rule trim10xBarcodes_illumina:
input:
read1=os.path.join(config['Results'], "0_buildDatabases/{sample}/illuminaReads/temp_unzipFastqs/{readCounter}.10xTrimmed.{trimAdapters}_R1.fastq"),
output:
read1=os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/temp_trimReads/{readCounter}.10xTrimmed.{trimAdapters}_Read1.fastq"),
threads:
resource['trim10xBarcodes_illumina']['threads']
resources:
mem_mb=resource['trim10xBarcodes_illumina']['mem_mb'],
time=resource['trim10xBarcodes_illumina']['time']
log:
os.path.join(config['Results'], "0_buildDatabases/{sample}/illuminaReads/logs/{readCounter}.10xTrimmed.10BarcodeRemoval_Trimmomatic.{trimAdapters}.log")
conda:
os.path.join(workflow.basedir, "envs/UNZIP_and_QC.yaml")
shell:
"""
(trimmomatic SE -threads {threads} {input.read1} {output.read1} HEADCROP:23) &> {log}
"""
rule symlink_Trim10xBarcodes_R2_illumina:
input:
read2=os.path.join(config['Results'], "0_buildDatabases/{sample}/illuminaReads/temp_unzipFastqs/{readCounter}.10xTrimmed.{trimAdapters}_R2.fastq")
output:
read2=os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/temp_trimReads/{readCounter}.10xTrimmed.{trimAdapters}_Read2.fastq")
container:
None
shell:
"""
ln -s {input.read2} {output.read2}
"""
rule symLink_Trim10xBarcodes_noSequencingAdaptTrim_illumina:
input:
read1=os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/temp_trimReads/{readCounter}.10xTrimmed.notAdaptTrimmed_Read1.fastq"),
read2=os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/temp_trimReads/{readCounter}.10xTrimmed.notAdaptTrimmed_Read2.fastq"),
output:
read1=os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/temp_trimReads/{readCounter}.10xTrimmed.notAdaptTrimmed_val_1.fq"),
read2=os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/temp_trimReads/{readCounter}.10xTrimmed.notAdaptTrimmed_val_2.fq")
container:
None
shell:
"""
ln -s {input.read1} {output.read1}
ln -s {input.read2} {output.read2}
"""
rule symlink_No10xOrSequencingAdaptTrim_illumina:
input:
read1=os.path.join(config['Results'], "0_buildDatabases/{sample}/illuminaReads/temp_unzipFastqs/{readCounter}.not10xTrimmed.notAdaptTrimmed_R1.fastq"),
read2=os.path.join(config['Results'], "0_buildDatabases/{sample}/illuminaReads/temp_unzipFastqs/{readCounter}.not10xTrimmed.notAdaptTrimmed_R2.fastq")
output:
read1=os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/temp_trimReads/{readCounter}.not10xTrimmed.notAdaptTrimmed_val_1.fq"),
read2=os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/temp_trimReads/{readCounter}.not10xTrimmed.notAdaptTrimmed_val_2.fq")
container:
None
shell:
"""
ln -s {input.read1} {output.read1}
ln -s {input.read2} {output.read2}
"""
rule symlink_No10xWithSequencingAdaptTrim_illumina:
input:
read1=os.path.join(config['Results'], "0_buildDatabases/{sample}/illuminaReads/temp_unzipFastqs/{readCounter}.not10xTrimmed.AdaptTrimmed_R1.fastq"),
read2=os.path.join(config['Results'], "0_buildDatabases/{sample}/illuminaReads/temp_unzipFastqs/{readCounter}.not10xTrimmed.AdaptTrimmed_R2.fastq")
output:
read1=os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/temp_trimReads/{readCounter}.not10xTrimmed.AdaptTrimmed_Read1.fastq"),
read2=os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/temp_trimReads/{readCounter}.not10xTrimmed.AdaptTrimmed_Read2.fastq")
container:
None
shell:
"""
ln -s {input.read1} {output.read1}
ln -s {input.read2} {output.read2}
"""
rule trimSequencingAdapters_illumina:
input:
read1= os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/temp_trimReads/{readCounter}.{trim10x}.AdaptTrimmed_Read1.fastq"),
read2= os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/temp_trimReads/{readCounter}.{trim10x}.AdaptTrimmed_Read2.fastq"),
params:
outputDir=os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/temp_trimReads/"),
r1_prefix="{readCounter}.{trim10x}.AdaptTrimmed",
output:
read1=temp(os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/temp_trimReads/{readCounter}.{trim10x}.AdaptTrimmed_val_1.fq")),
read2=temp(os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/temp_trimReads/{readCounter}.{trim10x}.AdaptTrimmed_val_2.fq"))
threads:
resource['trimSequencingAdapters_illumina']['threads']
resources:
mem_mb=resource['trimSequencingAdapters_illumina']['mem_mb'],
time=resource['trimSequencingAdapters_illumina']['time'],
log:
os.path.join(config['Results'], "0_buildDatabases/{sample}/illuminaReads/logs/{readCounter}.{trim10x}.AdaptTrimmed_tGalore.log")
conda:
os.path.join(workflow.basedir, "envs/UNZIP_and_QC.yaml")
shell:
"""
(trim_galore -j {threads} --basename {params.r1_prefix} --dont_gzip --length 65 -o {params.outputDir} --paired {input.read1} {input.read2}) &> {log}
"""
rule fastQC_illumina:
input:
read1=os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/temp_trimReads/{readCounter}.{trim10x}.{trimAdapters}_val_1.fq"),
read2=os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/temp_trimReads/{readCounter}.{trim10x}.{trimAdapters}_val_2.fq")
params:
folder2out=os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/QC/fastqc")
output:
os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/QC/fastqc/{readCounter}.{trim10x}.{trimAdapters}_val_1_fastqc.html"),
os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/QC/fastqc/{readCounter}.{trim10x}.{trimAdapters}_val_2_fastqc.html")
log:
os.path.join(config['Results'], "0_buildDatabases/{sample}/illuminaReads/logs/{readCounter}.{trim10x}.{trimAdapters}_fastqc.log")
threads:
resource['fastQC_illumina']['threads']
resources:
mem_mb=resource['fastQC_illumina']['mem_mb'],
time=resource['fastQC_illumina']['time'],
conda:
os.path.join(workflow.basedir, "envs/UNZIP_and_QC.yaml")
shell:
"(fastqc {input} -o {params.folder2out} -t {threads}) &> {log}"
rule multiQC_illumina:
input:
read1=lambda wildcards: expand(os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/QC/fastqc/{readCounter}.{trim10x}.{trimAdapters}_val_1_fastqc.html"), sample=wildcards.sample, readCounter=dictReadCounter[wildcards.sample], trim10x=dictSamples[wildcards.sample][1], trimAdapters=dictSamples[wildcards.sample][2]),
read2=lambda wildcards: expand(os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/QC/fastqc/{readCounter}.{trim10x}.{trimAdapters}_val_2_fastqc.html"), sample=wildcards.sample, readCounter=dictReadCounter[wildcards.sample], trim10x=dictSamples[wildcards.sample][1], trimAdapters=dictSamples[wildcards.sample][2])
params:
folder2qc=os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/QC/fastqc/"),
folder2out=os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/QC/multiqc/"),
filename="{sample}.multiqcReport.html"
output:
os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/QC/multiqc/{sample}.multiqcReport.html")
log:
os.path.join(config['Results'], "0_buildDatabases/{sample}/illuminaReads/logs/{sample}.multiqc.log")
conda:
os.path.join(workflow.basedir, "envs/UNZIP_and_QC.yaml")
shell:
"(multiqc {params.folder2qc} -o {params.folder2out} -n {params.filename}) &> {log}"
rule merylCount_R1_illumina:
input:
read1= os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/temp_trimReads/{readCounter}.{trim10x}.{trimAdapters}_val_1.fq")
params:
kmer = "{kmer}",
threads:
resource['merylCount_R1_illumina']['threads']
resources:
mem_mb=resource['merylCount_R1_illumina']['mem_mb'],
time=resource['merylCount_R1_illumina']['time'],
output:
temp(directory(os.path.join(config['Results'], "0_buildDatabases/{sample}/illuminaReads/merylDb/{readCounter}.{trim10x}.{trimAdapters}_R1.{kmer}.meryl")))
log:
os.path.join(config['Results'], "0_buildDatabases/{sample}/illuminaReads/logs/{readCounter}.{trim10x}.{trimAdapters}_meryl_R1.{kmer}.log")
priority:
10
conda:
os.path.join(workflow.basedir, "envs/MERYL_MERQURY.yaml")
shell:
"""
(meryl count k={params.kmer} threads={threads} {input.read1} output {output}) &> {log}
"""
rule merylCount_R2_illumina:
input:
read2= os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/temp_trimReads/{readCounter}.{trim10x}.{trimAdapters}_val_2.fq")
params:
kmer = "{kmer}",
threads:
resource['merylCount_R2_illumina']['threads']
resources:
mem_mb=resource['merylCount_R2_illumina']['mem_mb'],
time=resource['merylCount_R2_illumina']['time'],
output:
temp(directory(os.path.join(config['Results'], "0_buildDatabases/{sample}/illuminaReads/merylDb/{readCounter}.{trim10x}.{trimAdapters}_R2.{kmer}.meryl")))
log:
os.path.join(config['Results'], "0_buildDatabases/{sample}/illuminaReads/logs/{readCounter}.{trim10x}.{trimAdapters}_meryl_R2.{kmer}.log")
priority:
10
conda:
os.path.join(workflow.basedir, "envs/MERYL_MERQURY.yaml")
shell:
"""
(meryl count k={params.kmer} threads={threads} {input.read2} output {output}) &> {log}
"""
rule merylUnion_illumina:
input:
# removeReads1=lambda wildcards: expand(os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/temp_trimReads/{readCounter}.{trim10x}.{trimAdapters}_val_1.fq"), sample=wildcards.sample, readCounter=dictReadCounter[wildcards.sample], trim10x=dictSamples[wildcards.sample][1], trimAdapters=dictSamples[wildcards.sample][2]),
# removeReads2=lambda wildcards: expand(os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/temp_trimReads/{readCounter}.{trim10x}.{trimAdapters}_val_2.fq"), sample=wildcards.sample, readCounter=dictReadCounter[wildcards.sample], trim10x=dictSamples[wildcards.sample][1], trimAdapters=dictSamples[wildcards.sample][2]),
read1=lambda wildcards: expand(os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/merylDb/{readCounter}.{trim10x}.{trimAdapters}_R1.{kmer}.meryl"), sample=wildcards.sample, readCounter=dictReadCounter[wildcards.sample], trim10x=dictSamples[wildcards.sample][1], kmer=dictSamples[wildcards.sample][0], trimAdapters=dictSamples[wildcards.sample][2]),
read2=lambda wildcards: expand(os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/merylDb/{readCounter}.{trim10x}.{trimAdapters}_R2.{kmer}.meryl"), sample=wildcards.sample, readCounter=dictReadCounter[wildcards.sample], trim10x=dictSamples[wildcards.sample][1], kmer=dictSamples[wildcards.sample][0], trimAdapters=dictSamples[wildcards.sample][2])
params:
removeReadDIR_trimmed=os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/temp_trimReads/"),
removeReadDIR_unzipped=os.path.join(config['Results'],"0_buildDatabases/{sample}/illuminaReads/temp_unzipFastqs/"),
kmer = "{kmer}",
path= os.path.join(config['Results'], "0_buildDatabases/{sample}/illuminaReads/merylDb/")
threads:
resource['merylUnion_illumina']['threads']
resources:
mem_mb=resource['merylUnion_illumina']['mem_mb'],
time=resource['merylUnion_illumina']['time'],
output:
directory(os.path.join(config['Results'], "0_buildDatabases/{sample}/illuminaReads/merylDb/complete_illumina.{sample}.{kmer}.meryl")),
log:
os.path.join(config['Results'], "0_buildDatabases/{sample}/illuminaReads/logs/{sample}.illumina_meryl.{kmer}.log")
priority:
10
conda:
os.path.join(workflow.basedir, "envs/MERYL_MERQURY.yaml")
shell:
"""
(meryl union-sum {input.read1} {input.read2} output {output}) &> {log}
rm -r {params.removeReadDIR_trimmed}
rm -r {params.removeReadDIR_unzipped}
"""