Genotype Data Formatting#

This module implements a collection of workflows used to format genotype data.

Description#

We include steps to format genotype files: conversion between VCF and PLINK formats, splitting of data (by specified input, by chromosome, or by gene), and merging of data (by specified input or by chromosome).

Input#

Depending on the analysis task, input files are specified in one of the following formats:

  1. A single whole-genome dataset, in VCF format or as a PLINK bim/bed/fam bundle; or,

  2. A list of VCF or PLINK bed files; or,

  3. A single-column file containing a list of VCF or PLINK bed files; or,

  4. A two-column file listing per-chromosome VCF or PLINK bed files, where the first column is the chromosome and the second column is the file name (see the sketch after this list).
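
For illustration, a two-column list (format 4) might look like the example below. The file names here are hypothetical; relative paths are resolved against the directory containing the list file, and lines starting with # are ignored.

#chrom  file
21      protocol_example.genotype.chr21.bed
22      protocol_example.genotype.chr22.bed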

Output#

Genotype data in PLINK format partitioned by chromosome

Minimal Working Example#

The data and the Singularity container bioinfo.sif can be downloaded from Synapse.

iv. Partition Genotype Data by Chromosome#

Timing: <1 min

sos run pipeline/genotype_formatting.ipynb genotype_by_chrom \
    --genoFile input/protocol_example.genotype.chr21_22.bed \
    --cwd output \
    --chrom `cut -f 1 input/protocol_example.genotype.chr21_22.bim | uniq | sed "s/chr//g"` \
    --container containers/bioinfo.sif 
INFO: Running genotype_by_chrom_1:
INFO: genotype_by_chrom_1 (index=1) is completed.
INFO: genotype_by_chrom_1 (index=0) is completed.
INFO: genotype_by_chrom_1 output:   /Users/alexmccreight/xqtl-protocol/output/protocol_example.genotype.chr21_22.22.bed /Users/alexmccreight/xqtl-protocol/output/protocol_example.genotype.chr21_22.21.bed in 2 groups
INFO: Running genotype_by_chrom_2:
INFO: genotype_by_chrom_2 is completed (pending nested workflow).
INFO: Running write_data_list:
INFO: write_data_list is completed.
INFO: write_data_list output:   /Users/alexmccreight/xqtl-protocol/output/protocol_example.genotype.chr21_22.genotype_by_chrom_files.txt
INFO: genotype_by_chrom_2 output:   /Users/alexmccreight/xqtl-protocol/output/protocol_example.genotype.chr21_22.genotype_by_chrom_files.txt
INFO: Workflow genotype_by_chrom (ID=w5be6d78805f1df2e) is executed successfully with 3 completed steps and 4 completed substeps.
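
To verify the run, you can list the per-chromosome PLINK files and the manifest written by the workflow; the paths below follow the log above, relative to the working directory.

ls output/protocol_example.genotype.chr21_22.21.bed \
   output/protocol_example.genotype.chr21_22.22.bed
cat output/protocol_example.genotype.chr21_22.genotype_by_chrom_files.txt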

Command Interface#

sos run genotype_formatting.ipynb -h
usage: sos run genotype_formatting.ipynb
               [workflow_name | -t targets] [options] [workflow_options]
  workflow_name:        Single or combined workflows defined in this script
  targets:              One or more targets to generate
  options:              Single-hyphen sos parameters (see "sos run -h" for details)
  workflow_options:     Double-hyphen workflow-specific parameters

Workflows:
  plink_to_vcf
  vcf_to_plink
  plink_by_gene
  plink_by_chrom
  merge_plink
  merge_vcf

Global Workflow Options:
  --cwd output (as path)
                        Work directory & output directory
  --container ''
                        The filename of the container
  --job-size 1 (as int)
                        For cluster jobs, number of commands to run per job
  --walltime 5h
                        Wall clock time expected
  --mem 3G
                        Memory expected
  --numThreads 20 (as int)
                        Number of threads
  --genoFile  paths

                        the path to a bed file or VCF file, a vector of bed
                        files or VCF files, or a text file listing the bed files
                        or VCF files to process

Sections
  plink_to_vcf_1:
  vcf_to_plink:
  plink_by_gene_1:
    Workflow Options:
      --window 500000 (as int)
                        cis window size
      --region-list VAL (as path, required)
                        Region definition
  plink_by_chrom_1:
    Workflow Options:
      --chrom VAL VAL ... (as type, required)
  plink_by_chrom_2, plink_by_gene_2:
  plink_to_vcf_2:
  merge_plink:
    Workflow Options:
      --name VAL (as str, required)
                        File prefix for the analysis output
  merge_vcf:
    Workflow Options:
      --name VAL (as str, required)
                        File prefix for the analysis output
[global]
# Work directory & output directory
parameter: cwd = path("output")
# The filename of the container
parameter: container = ''
import re
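# Derive the container entry point from the container file name,
# e.g. "containers/bioinfo.sif" yields 'micromamba run -a "" -n bioinfo'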
parameter: entrypoint= ('micromamba run -a "" -n' + ' ' + re.sub(r'(_apptainer:latest|_docker:latest|\.sif)$', '', container.split('/')[-1])) if container else ""
# For cluster jobs, number of commands to run per job
parameter: job_size = 1
# Wall clock time expected
parameter: walltime = "5h"
# Memory expected
parameter: mem = "16G"
# Number of threads
parameter: numThreads = 20
# the path to a bed file or VCF file, a vector of bed files or VCF files, or a text file listing the bed files or VCF files to process
parameter: genoFile = paths
# use this function to edit memory string for PLINK input
from sos.utils import expand_size
cwd = f"{cwd:a}"

import os
def get_genotype_file(geno_file_paths):
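    # Resolve the --genoFile argument into either a list of genotype file paths,
    # or a dictionary mapping chromosome to file path when a two-column list file is given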
    #
    def valid_geno_file(x):
        suffixes = path(x).suffixes
        if suffixes[-1] == '.bed':
            return True
        if len(suffixes)>1 and ''.join(suffixes[-2:]) == ".vcf.gz":
            return True
        return False
    #
    def complete_geno_path(x, geno_file):
        if not valid_geno_file(x):
            raise ValueError(f"Genotype file {x} should be VCF (end with .vcf.gz) or PLINK bed file (end with .bed)")
        if not os.path.isfile(x):
            # relative path
            if not os.path.isfile(f'{geno_file:ad}/' + x):
                raise ValueError(f"Cannot find genotype file {x}")
            else:
                x = f'{geno_file:ad}/' + x
        return x
    # 
    def format_chrom(chrom):
        if chrom.startswith('chr'):
            chrom = chrom[3:]
        return chrom
    # Inputs are either VCF or bed, or a vector of them 
    if len(geno_file_paths) > 1:
        if all([valid_geno_file(x) for x in geno_file_paths]):
            return paths(geno_file_paths)
        else: 
            raise ValueError(f"Invalid input {geno_file_paths}")
    # Input is one genotype file or text list of genotype files
    geno_file = geno_file_paths[0]
    if valid_geno_file(geno_file):
        return paths(geno_file)
    else: 
        units = [x.strip().split() for x in open(geno_file).readlines() if x.strip() and not x.strip().startswith('#')]
        if all([len(x) == 1 for x in units]):
            return paths([complete_geno_path(x[0], geno_file) for x in units])
        elif all([len(x) == 2 for x in units]):
            genos = dict([(format_chrom(x[0]), path(complete_geno_path(x[1], geno_file))) for x in units])
        else:
            raise ValueError(f"{geno_file} should contain one column of file names, or two columns of chrom number and corresponding file name")
        return genos
                        
genoFile = get_genotype_file(genoFile)
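# Illustrative resolution of --genoFile (file names below are hypothetical):
#   --genoFile geno.bed                          -> paths('geno.bed')
#   --genoFile geno.chr21.bed geno.chr22.bed     -> paths of both files
#   --genoFile geno_list.txt   (one column)      -> paths of the files listed inside
#   --genoFile geno_by_chrom.txt (two columns)   -> {'21': path(...), '22': path(...)}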

Compute LD matrices for a given input region#

ldstore2-based implementation#

This is good for larger sample sizes, such as data from UK Biobank, although we are not facing this challenge in the FunGen-xQTL project.

FIXME: we need to build ldstore2 into a container image. According to Diana it should be

pip3 install https://files.pythonhosted.org/packages/a8/fd/f98ab7dea176f42cb61b80450b795ef19b329e8eb715b87b0d13c2a0854d/ldstore-0.1.9.tar.gz 

FIXME: Diana, what’s the input for this workflow?

Create master file for ldstore2#

The master file is a semicolon-separated text file that contains no spaces. It contains the following mandatory column names, with one dataset per line:

FIXME: Diana, this documentation is not clearly written. I cannot understand it. What are the mandatory column names? What does it mean by one data-set per line?

  • For the Z file, the format should be rsid:chrom:pos:a1:a2. Formatting for chromosomes should be 01, 02, 03, etc.

  • List of samples

The LDstore draft is currently available here, with the code to prepare the genotype input here. A minimal working example can be found [here]
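
Until the documentation above is completed, the following is only a rough sketch of what a master file might contain, assuming the bgen-based input described in the LDstore v2 documentation; the column names and file names here are assumptions that should be verified against that documentation.

z;bgen;bgi;bcor;ld;n_samples
chr21.z;chr21.bgen;chr21.bgen.bgi;chr21.bcor;chr21.ld;500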

Split VCF by Chromosome#

FIXME: add this as needed
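
As a placeholder sketch only (not part of the current pipeline), a per-chromosome split of an indexed VCF with bcftools might look like the following; the input and output file names are hypothetical.

for chrom in 21 22; do
    bcftools view -r chr${chrom} -Oz input/whole_genome.vcf.gz > output/whole_genome.chr${chrom}.vcf.gz
    tabix -p vcf output/whole_genome.chr${chrom}.vcf.gz
done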

Merge VCF files#

[merge_vcf]
skip_if(len(genoFile) == 1)
# File prefix for the analysis output
parameter: name = str
input: genoFile, group_by = 'all'
output:  f"{cwd}/{name}.vcf.gz"
task: trunk_workers = 1, trunk_size = job_size, walltime = walltime, mem = mem, cores = numThreads, tags = f'{step_name}_{_output:bn}'
bash: container=container, expand= "${ }", stderr = f'{_output:n}.stderr', stdout = f'{_output:n}.stdout', entrypoint=entrypoint
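    # Concatenate the input VCFs (all inputs must contain the same samples) and index the result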
    bcftools concat -Oz ${_input} > ${_output}
    tabix -p vcf ${_output}

bash: expand= "${ }", stderr = f'{_output[0]:n}.stderr', stdout = f'{_output[0]:n}.stdout', container = container, entrypoint=entrypoint
    for i in ${_output} ; do 
        # Capture file metadata
        output_info="$i"
        output_size=$(ls -lh "$i" | awk '{print $5}')
        output_rows=$(zcat "$i" | wc -l)
        output_column=$(zcat "$i" | grep -v "##" | head -1 | wc -w)
        output_header_row=$(zcat "$i" | grep "##" | wc -l)
        output_preview=$(zcat "$i" | grep -v "##" | head | cut -f 1-11)

        # Write captured information to the stdout file
        printf "output_info: %s\noutput_size: %s\noutput_rows: %d\noutput_column: %d\noutput_header_row: %d\noutput_preview:\n%s\n" \
            "$output_info" "$output_size" "$output_rows" "$output_column" "$output_header_row" "$output_preview" >> ${_output:n}.stdout
    done