MetaBGC | metagenomic strategy for harnessing the chemical repertoire | Genomics library
kandi X-RAY | MetaBGC Summary
MetaBGC is a read-based algorithm for the detection of biosynthetic gene clusters (BGCs) directly in metagenomic sequencing data.
Support
Quality
Security
License
Reuse
Top functions reviewed by kandi - BETA
- Evaluate the HMM on the synthetic genome files
- Returns a pandas dataframe with unique reads
- Compares two HMM data pandas DataFrames
- Calculate the F1 score
- Run HMM search against a fasta file
- Create a pandas DataFrame from a dictionary of key scores
- Parse an HMMER3 text output file
- Gather reference Bias
- Parse the nucmer alignment file name
- Pre-process reads
- Interleave two iterables
- Perform pre-processing of reads
- Interleave reads in parallel
- Runs the HMM search on the input directory
- Create pandas DataFrame from PolymerideType dictionary
- Parse HMMER3 hits
- Extract protein sequence files from nucleotide sequences
- Generate a metagenomic sample
- Run BLAST searches
- Build MetaBGC
- Runs the HMM on the input directory
- Extract FASTA sequence files
- Updates the header of the HMM
- Build a BLAST database
- Generate a gene position list from proteins
- Read a Nucmer alignment file
- Make a BLAST database and run blastn
- Perform a MetaBGC search
MetaBGC Key Features
MetaBGC Examples and Code Snippets
Community Discussions
Trending Discussions on Genomics
QUESTION
I'm working with two text files that look like this: File 1
...ANSWER
Answered 2022-Apr-09 at 00:49 Perhaps you are after this?
QUESTION
I'm using the software plink2 (https://www.cog-genomics.org/plink/2.0/) and I'm trying to iterate over 3 variables.
This software accepts an input file with a .ped extension and an exclude file with a .txt extension, which contains a list of names to be excluded from the input file.
The idea is to iterate over the input files and then over the exclude files to generate individual output files.
- Input files: Highland.ped - Midland.ped - Lowland.ped
- Exclude-map files: HighlandMidland.txt - HighlandLowland.txt - MidlandLowland.txt
- Output files: HighlandMidland - HighlandLowland - MidlandHighland - MidlandLowland - LowlandHighland - LowlandMidland
The general code is:
...ANSWER
Answered 2021-Dec-09 at 23:50 Honestly, I think your current code is quite clear; but if you really want to write this as a loop, here's one possibility:
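The answer's own loop is not reproduced above. Purely as an illustration, here is one way such a loop might be sketched in Python, pairing each .ped input with the matching exclude list and calling plink2 once per pair; the flag names (--file, --exclude, --out) are assumptions and should be checked against the plink2 documentation.

import subprocess
from itertools import permutations

regions = ["Highland", "Midland", "Lowland"]
# Exclude-map files named in the question (unordered pairs)
exclude_files = {"HighlandMidland", "HighlandLowland", "MidlandLowland"}

for a, b in permutations(regions, 2):
    # e.g. a="Midland", b="Highland" -> exclude "HighlandMidland.txt", output "MidlandHighland"
    pair = f"{a}{b}" if f"{a}{b}" in exclude_files else f"{b}{a}"
    cmd = [
        "plink2",
        "--file", a,                  # assumed flag for the <region>.ped input prefix
        "--exclude", f"{pair}.txt",   # assumed flag for the exclusion list
        "--out", f"{a}{b}",           # output prefix, e.g. HighlandMidland
    ]
    subprocess.run(cmd, check=True)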
QUESTION
From this example string:
...ANSWER
Answered 2021-Dec-09 at 01:11 Use regexp_extract(col, r"&q;Stockcode&q;:([^/$]*?),&q;.*")
If applied to the sample data in your question, the output is:
QUESTION
I am writing code that takes a jumbled word and returns the unjumbled word. data.json contains a word list; I take each word one by one, check whether it contains all the characters of the input word, and then check whether the lengths are the same. The problem is that when I enter a word such as "helol", the "l" is checked twice, giving me other outputs in addition to the correct one (hello). I know why this happens, but I can't work out a fix.
...ANSWER
Answered 2021-Nov-25 at 18:33 As I understand it you are trying to identify all possible matches for the jumbled string in your list. You could sort the letters in the jumbled word and match the resulting list against sorted lists of the words in your data file.
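The answer's code is not shown above; a minimal Python sketch of the sorted-letters idea it describes might look like this (treating data.json as a plain JSON list of candidate words, which is an assumption based on the question):

import json

with open("data.json") as fh:
    words = json.load(fh)            # assumed: a JSON list of candidate words

def unjumble(jumbled, words):
    """Return every word whose sorted letters exactly match the jumbled input."""
    key = sorted(jumbled.lower())
    return [w for w in words if sorted(w.lower()) == key]

print(unjumble("helol", words))      # e.g. ['hello'] if "hello" is in the list

Sorting both strings makes repeated letters count correctly, which avoids the double-counted "l" problem described in the question.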
QUESTION
I am trying to use plink 1.9 to split multiallelic variants into biallelic ones. The input is:
...ANSWER
Answered 2021-Nov-17 at 09:45 I used bcftools to complete the task.
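The exact command is not shown above. A typical way to split multiallelic records into biallelic ones with bcftools is norm with "-m-" (--multiallelics -), sketched here from Python; whether this matches the answerer's exact invocation is an assumption, and the file names are placeholders.

import subprocess

# Split multiallelic sites into biallelic records and write compressed VCF output.
subprocess.run(
    ["bcftools", "norm", "-m-",
     "-Oz", "-o", "biallelic.vcf.gz",
     "input.vcf.gz"],
    check=True,
)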
QUESTION
I have a FASTA file that has about 300,000 sequences, but some of the sequences look like this:
...ANSWER
Answered 2021-Oct-12 at 20:28 You can match your non-X-containing FASTA entries with the regex >.+\n[^X]+\n. This checks for a substring starting with > and having a first line of anything (the FASTA header), which is followed by characters not containing an X until you reach a line break.
For example:
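(The answer's own example is not reproduced above. As an illustration, here is a small Python sketch applying the idea to a toy FASTA string; note that [^X] also matches newlines, so a greedy match on a whole multi-entry file can run past a record boundary, and the sketch therefore uses the variant [^X\n] to keep each match within a single one-line record.)

import re

fasta = (">seq1\nACDEFGHIK\n"
         ">seq2\nACDXFGHIK\n"   # contains an X, should be skipped
         ">seq3\nMKLVNNALQ\n")

# Keep only entries whose single sequence line contains no 'X'.
clean_entries = re.findall(r">.+\n[^X\n]+\n", fasta)
for entry in clean_entries:
    print(entry, end="")        # prints the seq1 and seq3 records only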
QUESTION
For example, I have two strings:
...ANSWER
Answered 2021-Oct-04 at 22:27 For your example your pattern would be:
QUESTION
I am currently trying to run genomic analysis pipelines using Hail (a library for genomic analyses written in Python and Scala). Recently, Apache Spark 3 was released and it supports GPU usage.
I tried using the spark-rapids library to start an on-premise Slurm cluster with GPU nodes. I was able to initialise the cluster. However, when I tried running Hail tasks, the executors kept getting killed.
On querying in Hail forum, I got the response that
That’s a GPU code generator for Spark-SQL, and Hail doesn’t use any Spark-SQL interfaces, only the RDD interfaces.
So, does Spark3 not support GPU usage for RDD interfaces?
...ANSWER
Answered 2021-Sep-23 at 05:53 As of now, spark-rapids doesn't support GPU usage for RDD interfaces.
Source: Link
Apache Spark 3.0+ lets users provide a plugin that can replace the backend for SQL and DataFrame operations. This requires no API changes from the user. The plugin will replace SQL operations it supports with GPU accelerated versions. If an operation is not supported it will fall back to using the Spark CPU version. Note that the plugin cannot accelerate operations that manipulate RDDs directly.
Here is an answer from the spark-rapids team:
Source: Link
We do not support running the RDD API on GPUs at this time. We only support the SQL/Dataframe API, and even then only a subset of the operators. This is because we are translating individual Catalyst operators into GPU enabled equivalent operators. I would love to be able to support the RDD API, but that would require us to be able to take arbitrary java, scala, and python code and run it on the GPU. We are investigating ways to try to accomplish some of this, but right now it is very difficult to do. That is especially true for libraries like Hail, which use python as an API, but the data analysis is done in C/C++.
QUESTION
I have 1500 files with the same format (the .scount file format from PLINK2 https://www.cog-genomics.org/plink/2.0/formats#scount), an example is below:
...ANSWER
Answered 2021-Sep-07 at 11:10 A tidyverse solution:
QUESTION
I have been implementing a suite of RecordBatchReaders for a genomics toolset. The standard unit of work is a RecordBatch. I ended up implementing a lot of my own compression and IO tools instead of using the existing utilities in the arrow cpp platform because I was confused about them. Are there any clear examples of using the existing compression and file IO utilities to simply get a file stream that inflates standard zlib data? Also, an object diagram for the cpp platform would be helpful in ramping up.
...ANSWER
Answered 2021-Jun-02 at 18:58 Here is an example program that inflates a compressed zlib file and reads it as CSV.
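The linked example is a C++ program and is not reproduced above. As a Python-side equivalent sketch using pyarrow (a swap from the C++ API the question asks about), the same idea of wrapping a compressed file in a decompressing stream and parsing it as CSV looks roughly like this; it assumes gzip-framed data, and a raw zlib/deflate stream may need different handling.

import pyarrow as pa
import pyarrow.csv as csv

# Wrap the compressed file in a decompressing input stream, then parse as CSV.
with pa.CompressedInputStream(pa.OSFile("data.csv.gz"), "gzip") as stream:
    table = csv.read_csv(stream)

print(table.num_rows, table.schema)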
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install MetaBGC
MetaBGC can also be installed using Bioconda. The following commands install the dependencies and then install metabgc from PyPI:
Docker container files are provided for releases > 2.0.0. Go to the latest release on GitHub, download the source code tarball, and uncompress it.
Change to the source code directory and build the container. You can then run the container from the command line to view the metabgc help.
The quick start guide provides two sample datasets, one for building a new database and one for searching an existing database. Example commands are provided to run the processes and can be used as templates.
To run a toy build example, please download the example data from here. This should take about 1 hour using 8 threads on a 2.5 GHz Ivy Bridge Intel processor. The directory also has a SLURM job script, runBuildTest.sh, which can be submitted to a cluster after changing the paths.
To build and evaluate spHMMs for the protein family of interest, the metabgc build command has to be executed with the required input files. To select high-performance spHMMs, a synthetic metagenomic dataset must be generated with reads from true-positive genes spiked in to test the performance of each spHMM. To generate synthetic metagenomes for the build process, use the metabgc synthesize module. The command-line options are listed below; an illustrative invocation follows the list.
1. --prot_alignment, required=True: Alignment of homologs from the protein family of interest in FASTA format.
2. --prot_family_name, required=True: Name of the protein family. This is used as a prefix for spHMM files.
3. --cohort_name, required=True: Name of the cohort of synthetic metagenomic samples used for evaluation.
4. --nucl_seq_directory, required=True: Directory of reads for the synthetic metagenomic samples. The filenames are used as sample names.
5. --prot_seq_directory, required=False: Directory with translated synthetic read files of the cohort. Computed if not provided.
6. --seq_fmt, required=True: {fasta, fastq} Sequence file format and extension.
7. --pair_fmt, required=True: {single, split, interleaved} Paired-end information.
8. --R1_file_suffix, required=False: Suffix, including extension, of the file name specifying the forward reads. Not specified for single or interleaved reads. Example: .R1.fastq
9. --R2_file_suffix, required=False: Suffix, including extension, of the file name specifying the reverse reads. Not specified for single or interleaved reads. Example: .R2.fastq
10. --tp_genes_nucl, required=True: Nucleotide sequences of the full-length true-positive genes in the synthetic dataset in multi-FASTA format. This can be generated using the "metabgc findtp" module.
11. --F1_Thresh, required=False: Threshold of the F1 score for selection of high-performance spHMMs (Def.=0.5).
12. --blastn_search_directory, required=False: Directory with BLAST searches of the synthetic read files against the TP genes. Computed if not provided. To compute separately, please see job_scripts in development.
13. --hmm_search_directory, required=False: Directory with HMM searches of the synthetic read files against all the spHMMs. Computed if not provided. To compute separately, please see job_scripts in development.
14. --output_directory, required=True: Directory to save results.
15. --cpu, required=False: Number of CPU threads to use (Def.=4).
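As an illustration only (not taken from the MetaBGC documentation), such a run might be assembled and launched from Python roughly as follows; every file and directory name below is a placeholder, while the option names come from the list above.

import subprocess

cmd = [
    "metabgc", "build",
    "--prot_alignment", "homologs_aln.fasta",    # alignment of protein-family homologs
    "--prot_family_name", "MyFamily",            # prefix used for the spHMM files
    "--cohort_name", "synthetic_cohort",
    "--nucl_seq_directory", "synthetic_reads/",  # synthetic metagenome read files
    "--seq_fmt", "fastq",
    "--pair_fmt", "split",
    "--R1_file_suffix", ".R1.fastq",
    "--R2_file_suffix", ".R2.fastq",
    "--tp_genes_nucl", "tp_genes.fasta",         # true-positive genes (multi-FASTA)
    "--F1_Thresh", "0.5",
    "--output_directory", "build_out/",
    "--cpu", "8",
]
subprocess.run(cmd, check=True)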
The high-performance spHMMs will be saved in the HiPer_spHMMs folder in the specified output directory. The HiPer_spHMMs folder should contain the following files:
1. F1_Plot.png: F1 score plot of all the spHMMs and the F1 cutoff threshold.
2. *.hmm: The set of spHMMs that perform above the F1 cutoff threshold.
3. <prot_family_name>_F1_Cutoff.tsv: HMM search cutoff scores to be used for each high-performance spHMM interval.
4. <prot_family_name>_Scores.tsv: FP, TP, and FN read counts from the HMM search for all the spHMMs.
5. <prot_family_name>_FP_Reads.tsv: The false-positive reads, i.e. reads identified in the spHMM search that are not derived from the TP genes according to the BLAST search.
Because synthetic datasets do not fully represent real data, please be aware that some spHMM cutoffs may need to be further tuned after running MetaBGC on a real metagenomic dataset, as was done with the Type II polyketide cyclase cutoffs in the original MetaBGC publication.
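For reference, the F1 score used for these cutoffs is the harmonic mean of precision and recall. A small sketch computing it from TP/FP/FN read counts such as those reported in <prot_family_name>_Scores.tsv (how you read those counts from the file is left out here):

def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 score from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: 90 TP, 10 FP, 30 FN reads for one spHMM interval
print(round(f1_score(90, 10, 30), 3))   # 0.818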
Support