TWAS, cTWAS and MR#
Introduction#
This module provides software implementations for transcriptome-wide association analysis (TWAS) and performs variant selection to provide sparse signals for causal TWAS (cTWAS) analysis, as described in Qian et al (2024+), the multi-group cTWAS method. It additionally performs Mendelian Randomization (MR) using fine-mapped instrumental variables (IVs), as described in Zhang et al (2020), for "causal" effect estimation and model validation, with the unit of analysis being a single gene-trait pair.
This procedure is a continuation of the SuSiE-TWAS workflow — it assumes that xQTL fine-mapping has been performed and molecular trait prediction weights have been pre-computed (to be used for TWAS). Cross-validation for TWAS weights is optional but highly recommended.
The required GWAS data are summary statistics and an LD matrix for the region of interest.
Step 1: TWAS#
1. Extract GWAS z-scores for the region of interest and the corresponding LD matrix.
2. (Optional) Perform allele matching QC between the LD matrix and the summary statistics.
3. Process weights: for methods such as LASSO, Elastic Net and mr.ash we have to take the weights as is for the QTL variants overlapping with GWAS variants. SuSiE weights can be adjusted to exactly match the GWAS variants.
4. Perform the TWAS test for multiple sets of weights (see the sketch after this list).
5. For each gene, filter TWAS results by keeping the best model selected by cross-validation. Drop genes that do not show good evidence of TWAS prediction weights.
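For intuition, with prediction weights \(w\), GWAS z-scores \(z\) and LD matrix \(R\), the TWAS test statistic is \(z_{TWAS} = w^\top z / \sqrt{w^\top R w}\). Below is a minimal R sketch of this statistic; the production implementation lives in the pecotmr package, and the function name here is illustrative only.

twas_z_test <- function(w, z, R) {
  # weighted sum of GWAS z-scores over the variants carried by the weights
  stat <- sum(w * z)
  # standard deviation of that weighted sum under the null, from the LD matrix
  denom <- sqrt(as.numeric(t(w) %*% R %*% w))
  z_twas <- stat / denom
  list(z = z_twas, pval = 2 * pnorm(-abs(z_twas)))
}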
Step 2: Variant Selection for Imputable Genes via the Best Prediction Methods#
1. Determine whether the gene is imputable in each context, based on cross-validation (twas_cv) performance: adjusted \(r^2 \ge 0.01\) and p-value \(< 0.05\).
2. Each imputable gene-context pair goes through a variant selection step: at most 10 variants with the top PIP are selected from either the `top_loci` table or the SuSiE CS sets (see the sketch after this list).
3. Harmonize weights against the LD reference and update the SuSiE weights.
4. Extract weights for the best model in each context, restricted to the variant names selected in the previous step.
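A minimal R sketch of the imputability filter and top-PIP variant selection described above; column names such as `pip` and `variant_id` are illustrative, and the production logic is implemented in pecotmr.

# imputability filter: adjusted r-squared and CV p-value thresholds
is_imputable <- function(rsq_adj, pval, rsq_threshold = 0.01, p_value_cutoff = 0.05) {
  rsq_adj >= rsq_threshold & pval < p_value_cutoff
}
# select up to `max_variants` variants with the highest PIP
select_top_variants <- function(top_loci, max_variants = 10) {
  top_loci <- top_loci[order(top_loci$pip, decreasing = TRUE), ]
  head(top_loci$variant_id, max_variants)
}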
Step 3: cTWAS analysis#
Within each LD block, the TWAS z-scores of imputable genes (z_gene), the GWAS SNP z-scores (z_snp) and the selected prediction weights are combined across regions and passed to ctwas_sumstats() from the multi-group cTWAS package, which jointly fine-maps genes and SNPs. See the [ctwas] workflow step below for details.
Step 4: MR for candidate genes#
1. Limit MR to genes that show some evidence of cTWAS significance AND have strong instrumental variables (fine-mapping PIP or CS).
2. Use fine-mapped xQTLs with the GWAS data to perform MR.
3. For multiple IVs, aggregate the individual IV estimates using a fixed-effect meta-analysis procedure (see the sketch after this list).
4. Identify and exclude results with severe violations of the exclusion restriction (ER) assumption.
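For reference, a sketch of the per-IV ratio (Wald) estimator with a first-order delta-method standard error, combined by inverse-variance (fixed-effect) weighting. This is illustrative only; the production MR implementation is in pecotmr.

# beta_gwas, se_gwas: GWAS effects and SEs at the IVs
# beta_qtl, se_qtl: xQTL effects and SEs at the same IVs (same order)
mr_fixed_effect <- function(beta_gwas, se_gwas, beta_qtl, se_qtl) {
  est <- beta_gwas / beta_qtl         # per-IV ratio estimate
  se <- abs(se_gwas / beta_qtl)       # first-order delta method, ignoring uncertainty in beta_qtl
  w <- 1 / se^2                       # inverse-variance weights
  meta_est <- sum(w * est) / sum(w)
  meta_se <- sqrt(1 / sum(w))
  list(estimate = meta_est, se = meta_se, pval = 2 * pnorm(-abs(meta_est / meta_se)))
}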
Input#
GWAS Data Input Interface (similar to susie_rss)#
I. GWAS Summary Statistics Files

- Input: vector of files for one or more GWAS studies.
- Format: tab-delimited files, tabix-indexed by the first (`chrom` or `#chrom`) and second (`pos`) columns.
  - First 4 columns: `chrom` (or `#chrom`), `pos`, `A1`, `A2`.
  - Additional columns can be loaded using a column mapping file (see below).
- Column mapping file (optional): a YAML file for custom column mapping (see the example after this list).
  - Required columns: `chrom`, `pos`, `A1`, `A2`, and `z` (or `betahat` and `sebetahat`).
  - Optional columns: `n`, `var_y` (relevant to fine-mapping).
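For illustration, a hypothetical column mapping file could look like the following, mapping each expected field to the corresponding column name in the GWAS file (all column names on the right are made up for this example):

eg: column_mapping.yml
chrom: "#CHR"
pos: "POS"
A1: "ALT"
A2: "REF"
betahat: "BETA"
sebetahat: "SE"
n: "N"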
II. GWAS Summary Statistics Meta-File: optional, and helpful when there are many GWAS data sets to process via the same command.

- Columns: `study_id`, chromosome number, path to summary statistics file, optional path to column mapping file.
- Note: chromosome number `0` indicates a genome-wide file.
eg: gwas_meta.tsv
study_id chrom file_path column_mapping_file
study1 1 gwas1.tsv.gz column_mapping.yml
study1 2 gwas2.tsv.gz column_mapping.yml
study2 0 gwas3.tsv.gz column_mapping.yml
If both summary statistics files (I) and a meta-data file (II) are specified, we take the union of the two.
III. LD Reference Metadata File

- Format: single TSV file.
- Columns: `#chrom`, `start`, `end`, path to the LD matrix, genomic build.
- LD matrix path format: comma-separated; the first entry is the LD matrix, the second is the bim file.
- Documentation: refer to our LD reference preparation document for detailed information (an illustrative row is shown below).
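For illustration only (the column headers and file names here are hypothetical; the LD reference preparation document is authoritative), one row could look like:

#chrom start end path build
chr1 101 16880000 chr1_101_16880000.cor.xz,chr1_101_16880000.cor.xz.bim GRCh38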
Output of Fine-Mapping & TWAS Pipeline#
- xQTL Weight Database Metadata File, with essential columns: `#chr`, `start`, `end`, `TSS`, `region_id`, `original_data`.
- Structure of the weight database:
  - RDS format.
  - Organized hierarchically: region → context → weight matrix.
  - Each column of the weight matrix represents a different method.
eg: xqtl_meta.tsv
#chr start end region_id TSS original_data combined_data combined_data_sumstats contexts contexts_top_loci
chr1 0 6480000 ENSG00000008128 1724356 "KNIGHT_pQTL.ENSG00000008128.univariate_susie_twas_weights.rds, MiGA_eQTL.ENSG00000008128.univariate_susie_twas_weights.rds, MSBB_eQTL.ENSG00000008128.univariate_susie_twas_weights.rds, ROSMAP_Bennett_Klein_pQTL.ENSG00000008128.univariate_susie_twas_weights.rds, ROSMAP_DeJager_eQTL.ENSG00000008128.univariate_susie_twas_weights.rds, ROSMAP_Kellis_eQTL.ENSG00000008128.univariate_susie_twas_weights.rds, ROSMAP_mega_eQTL.ENSG00000008128.univariate_susie_twas_weights.rds, STARNET_eQTL.ENSG00000008128.univariate_susie_twas_weights.rds" Fungen_xQTL.ENSG00000008128.cis_results_db.export.rds Fungen_xQTL.ENSG00000008128.cis_results_db.export_sumstats.rds Knight_eQTL_brain,MiGA_GFM_eQTL,MiGA_GTS_eQTL,MiGA_SVZ_eQTL,MiGA_THA_eQTL,BM_10_MSBB_eQTL,BM_22_MSBB_eQTL,BM_36_MSBB_eQTL,BM_44_MSBB_eQTL,monocyte_ROSMAP_eQTL,Mic_DeJager_eQTL,Ast_DeJager_eQTL,Oli_DeJager_eQTL,Exc_DeJager_eQTL,Inh_DeJager_eQTL,DLPFC_DeJager_eQTL,PCC_DeJager_eQTL,AC_DeJager_eQTL,Mic_Kellis_eQTL,Ast_Kellis_eQTL,Oli_Kellis_eQTL,OPC_Kellis_eQTL,Exc_Kellis_eQTL,Inh_Kellis_eQTL,Ast_mega_eQTL,Exc_mega_eQTL,Inh_mega_eQTL,Oli_mega_eQTL,STARNET_eQTL_Mac Knight_eQTL_brain,MiGA_GFM_eQTL,MiGA_GTS_eQTL,MiGA_SVZ_eQTL,MiGA_THA_eQTL,BM_10_MSBB_eQTL,BM_22_MSBB_eQTL,BM_36_MSBB_eQTL,BM_44_MSBB_eQTL,monocyte_ROSMAP_eQTL,Mic_DeJager_eQTL,Ast_DeJager_eQTL,Oli_DeJager_eQTL,Exc_DeJager_eQTL,Inh_DeJager_eQTL,DLPFC_DeJager_eQTL,PCC_DeJager_eQTL,AC_DeJager_eQTL,Mic_Kellis_eQTL,Ast_Kellis_eQTL,Oli_Kellis_eQTL,OPC_Kellis_eQTL,Exc_Kellis_eQTL,Inh_Kellis_eQTL,Ast_mega_eQTL,Exc_mega_eQTL,Inh_mega_eQTL,Oli_mega_eQTL,STARNET_eQTL_Mac
This file is automatically generated as part of the FunGen-xQTL protocol, although only the essential columns are relevant to our application here.
TWAS region information#
This is required for cTWAS analysis, where TWAS and SNP data within each region are combined for joint inference to select the variables (genes or SNPs) that are likely to be directly associated with the phenotype of interest, rather than associated only through correlation with the true causal variables.
chrom start end block_id
1 1000 5000 block1
2 2000 6000 block2
3 3000 7000 block3
Output#
I. A table with the following contents
gwas_study, chrom, start, end, block, gene, TSS, context, is_imputable, method, is_selected_method, rsq_adj_cv, pval_cv, twas_z, twas_pval
where, if `twas_z` is `NA`, the context is not imputable for the method of choice.
II. A list of refined_twas_weights_data per LD block, in RDS format, with this structure:
$ region_id
$ gene
$ context
$ selected_model
$ is_imputable
$ selected_top_variants
$ selected_model_weights
This will only contain imputable genes. It should come with a meta-data file like this:
chrom start end block_id refined_twas_db
1 1000 5000 block1 block1.rds
2 2000 6000 block2 block2.rds
3 3000 7000 block3 block3.rds
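To inspect one of these per-block files interactively, a minimal R sketch (using the hypothetical block1.rds from the meta-data example above; the fields are those listed in the structure):

# load the refined TWAS weights for one block and inspect its structure
refined_twas_weights_data <- readRDS("block1.rds")
str(refined_twas_weights_data, max.level = 3)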
Example#
sos run xqtl-protocol/code/pecotmr_integration/twas.ipynb twas \
--cwd /output/ --name test \
--gwas_meta_data /mnt/vast/hpc/csg/cl4215/mrmash/workflow/GWAS/gwas_meta.tsv \
--ld_meta_data /mnt/vast/hpc/csg/data_public/20240409_ADSP_LD_matrix/ld_meta_file.tsv \
--regions /mnt/vast/hpc/csg/cl4215/mrmash/workflow/pipeline_data/picalm_EUR_LD_blocks.bed \
--xqtl_meta_data /mnt/vast/hpc/csg/cl4215/mrmash/workflow/pipeline_data/picalm_region_gene_meta_data_test.tsv \
--xqtl_type_table /mnt/vast/hpc/csg/cl4215/mrmash/workflow/pipeline_data/data_type_table.txt \
--p_value_cutoff 0.05 --rsq_threshold 0.01
sos run xqtl-protocol/code/pecotmr_integration/twas.ipynb ctwas \
--cwd /output/ --name test \
--gwas_meta_data /mnt/vast/hpc/csg/cl4215/mrmash/workflow/GWAS/gwas_meta.tsv \
--ld_meta_data /mnt/vast/hpc/csg/data_public/20240409_ADSP_LD_matrix/ld_meta_file.tsv \
--regions /mnt/vast/hpc/csg/cl4215/mrmash/workflow/pipeline_data/picalm_EUR_LD_blocks.bed \
--xqtl_meta_data /mnt/vast/hpc/csg/cl4215/mrmash/workflow/pipeline_data/picalm_region_gene_meta_data_test.tsv \
--twas_weight_cutoff 1e-5
[global]
parameter: cwd = path("output/")
parameter: gwas_meta_data = path()
parameter: xqtl_meta_data = path()
parameter: ld_meta_data = path()
parameter: xqtl_type_table = ''
parameter: gwas_name = []
parameter: gwas_data = []
parameter: column_mapping = []
parameter: regions = path()
parameter: name = f"{xqtl_meta_data:bn}.{gwas_meta_data:bn}"
parameter: container = ''
import re
parameter: entrypoint= ('micromamba run -a "" -n' + ' ' + re.sub(r'(_apptainer:latest|_docker:latest|\.sif)$', '', container.split('/')[-1])) if container else ""
parameter: job_size = 100
parameter: walltime = "5m"
parameter: mem = "8G"
parameter: numThreads = 1
import os
import pandas as pd
def adapt_file_path(file_path, reference_file):
"""
Adapt a single file path based on its existence and a reference file's path.
Args:
- file_path (str): The file path to adapt.
- reference_file (str): File path to use as a reference for adaptation.
Returns:
- str: Adapted file path.
Raises:
- FileNotFoundError: If no valid file path is found.
"""
reference_path = os.path.dirname(reference_file)
# Check if the file exists
if os.path.isfile(file_path):
return file_path
# Check file name without path
file_name = os.path.basename(file_path)
if os.path.isfile(file_name):
return file_name
# Check file name in reference file's directory
file_in_ref_dir = os.path.join(reference_path, file_name)
if os.path.isfile(file_in_ref_dir):
return file_in_ref_dir
# Check original file path prefixed with reference file's directory
file_prefixed = os.path.join(reference_path, file_path)
if os.path.isfile(file_prefixed):
return file_prefixed
# If all checks fail, raise an error
raise FileNotFoundError(f"No valid path found for file: {file_path}")
def group_by_region(lst, partition):
    # `partition` is precomputed upstream: each element lists the xQTL weight
    # files for one region, so it already defines the grouping of `lst`.
return partition
[get_analysis_regions: shared = ["filtered_region_info", "filtered_regional_xqtl_files", "regional_data"]]
from collections import OrderedDict
def check_required_columns(df, required_columns):
"""Check if the required columns are present in the dataframe."""
missing_columns = [col for col in required_columns if col not in list(df.columns)]
if missing_columns:
raise ValueError(f"Missing required columns: {', '.join(missing_columns)}")
def extract_regional_data(gwas_meta_data, xqtl_meta_data, regions, gwas_name, gwas_data, column_mapping):
"""
Extracts data from GWAS and xQTL metadata files and additional GWAS data provided.
Args:
- gwas_meta_data (str): File path to the GWAS metadata file.
- xqtl_meta_data (str): File path to the xQTL weight metadata file.
- gwas_name (list): vector of GWAS study names.
- gwas_data (list): vector of GWAS data.
- column_mapping (list, optional): vector of column mapping files.
Returns:
- Tuple of two dictionaries:
- GWAS Dictionary: Maps study IDs to a list containing chromosome number,
GWAS file path, and optional column mapping file path.
- xQTL Dictionary: Nested dictionary with region IDs as keys.
Raises:
- FileNotFoundError: If any specified file path does not exist.
- ValueError: If required columns are missing in the input files or vector lengths mismatch.
"""
# Check vector lengths
if len(gwas_name) != len(gwas_data):
raise ValueError("gwas_name and gwas_data must be of equal length")
if len(column_mapping)>0 and len(column_mapping) != len(gwas_name):
raise ValueError("If column_mapping is provided, it must be of the same length as gwas_name and gwas_data")
# Required columns for each file type
required_gwas_columns = ['study_id', 'chrom', 'file_path']
required_xqtl_columns = ['region_id', '#chr', 'start', 'end', "TSS", 'original_data'] #region_id here is gene name
required_ld_columns = ['chr', 'start', 'stop']
# Reading the GWAS metadata file
gwas_df = pd.read_csv(gwas_meta_data, sep="\t")
check_required_columns(gwas_df, required_gwas_columns)
gwas_dict = OrderedDict()
# Reading LD regions info
regions_df = pd.read_csv(regions, sep="\t",skipinitialspace=True)
regions_df.columns = [col.strip() for col in regions_df.columns] # Strip spaces from column names before
regions_df['chr'] = regions_df['chr'].str.strip()
check_required_columns(regions_df, required_ld_columns)
regions_dict = OrderedDict()
# Reading the xQTL weight metadata file
xqtl_df = pd.read_csv(xqtl_meta_data, sep=" ")
check_required_columns(xqtl_df, required_xqtl_columns)
xqtl_dict = OrderedDict()
# Process additional GWAS data from R vectors
for name, data, mapping in zip(gwas_name, gwas_data, column_mapping or [None]*len(gwas_name)):
gwas_dict[name] = {0: [data, mapping]}
for _, row in gwas_df.iterrows():
file_path = row['file_path']
mapping_file = row.get('column_mapping_file')
# Adjust paths if necessary
file_path = adapt_file_path(file_path, gwas_meta_data)
if mapping_file:
mapping_file = adapt_file_path(mapping_file, gwas_meta_data)
# Create or update the entry for the study_id
if row['study_id'] not in gwas_dict:
gwas_dict[row['study_id']] = {}
# Expand chrom 0 to chrom 1-22 or use the specified chrom
chrom_range = range(1, 23) if row['chrom'] == 0 else [row['chrom']]
for chrom in chrom_range:
if f'chr{chrom}' in gwas_dict[row['study_id']]:
existing_entry = gwas_dict[row['study_id']][f'chr{chrom}']
raise ValueError(f"Duplicate chromosome specification for study_id {row['study_id']}, chrom {chrom}. "
f"Conflicting entries: {existing_entry} and {[file_path, mapping_file]}")
gwas_dict[row['study_id']][f'chr{chrom}'] = [file_path, mapping_file]
for _, row in regions_df.iterrows():
LD_region_id = f"{row['chr']}_{row['start']}_{row['stop']}"
overlapping_xqtls = xqtl_df[(xqtl_df['#chr'] == row['chr']) &
(xqtl_df['TSS'] <= row['stop']) &
(xqtl_df['TSS'] >= (row['start']))]
file_paths = []
mapped_genes = []
# Collect file paths for xQTLs overlapping this region
for _, xqtl_row in overlapping_xqtls.iterrows():
original_data = xqtl_row['original_data']
file_list = original_data.split(',') if ',' in original_data else [original_data]
file_paths.extend([adapt_file_path(fp.strip(), xqtl_meta_data) for fp in file_list])
mapped_genes.extend([xqtl_row['region_id']] * len(file_list))
# Store metadata and files in the dictionary
regions_dict[LD_region_id] = {
"meta_info": [row['chr'], row['start'], row['stop'], LD_region_id, mapped_genes],
"files": file_paths
}
for _, row in xqtl_df.iterrows():
file_paths = [adapt_file_path(fp.strip(), xqtl_meta_data) for fp in row['original_data'].split(',')] # Splitting and stripping file paths
xqtl_dict[row['region_id']] = {"meta_info": [row['#chr'], row['start'], row['end'], row['region_id'], row['contexts']],
"files": file_paths}
return gwas_dict, xqtl_dict, regions_dict
gwas_dict, xqtl_dict, regions_dict = extract_regional_data(gwas_meta_data, xqtl_meta_data,regions,gwas_name, gwas_data, column_mapping)
regional_data = dict([("GWAS", gwas_dict), ("xQTL", xqtl_dict), ("Regions", regions_dict)])
# get regions data
region_info = [x["meta_info"] for x in regional_data['Regions'].values()]
regional_xqtl_files = [x["files"] for x in regional_data['Regions'].values()]
# Filter out empty xQTL file paths
filtered_region_info = []
filtered_regional_xqtl_files = []
skipped_regions =[]
for region, files in zip(region_info, regional_xqtl_files):
if files:
filtered_region_info.append(region)
filtered_regional_xqtl_files.append(files)
else:
skipped_regions.append(region)
print(f"Skipping {len(skipped_regions)} out of {len(regional_xqtl_files)} regions, no overlapping xQTL weights found. ")
[twas]
depends: sos_variable("filtered_regional_xqtl_files")
parameter: coverage = "cs_coverage_0.95"
# Thresholds on CV r-squared and p-value used to determine whether a model is imputable
parameter: p_value_cutoff = 0.05
parameter: rsq_threshold = 0.01
parameter: mr_pval_cutoff = 0.05
parameter: save_ctwas_data = True
parameter: save_mr_result = True
input: filtered_regional_xqtl_files, group_by = lambda x: group_by_region(x, filtered_regional_xqtl_files), group_with = "filtered_region_info"
output_files = [f'{cwd:a}/{step_name}/{name}.{_filtered_region_info[3]}.twas.tsv.gz']
if save_ctwas_data:
output_files.append(f'{cwd:a}/{step_name}/{name}.{_filtered_region_info[3]}.twas_data.rds')
if save_mr_result:
output_files.append(f'{cwd:a}/{step_name}/{name}.{_filtered_region_info[3]}.mr_result.tsv.gz')
output: output_files
task: trunk_workers = 1, trunk_size = job_size, walltime = walltime, mem = mem, cores = numThreads, tags = f'{step_name}_{_output[0]:bn}'
R: expand = '${ }', stdout = f"{_output[0]:n}.stdout", stderr = f"{_output[0]:n}.stderr", container = container, entrypoint = entrypoint
library(dplyr)
library(data.table)
library(pecotmr)
library(readr)
# get xQTL weight information
xqtl_meta_df <- fread("${xqtl_meta_data}", data.table=FALSE) # Get related gene information from the xqtl_meta data table
xqtl_type_table <- if (isTRUE(file.exists("${xqtl_type_table}"))) fread("${xqtl_type_table}") else NULL
gwas_studies = c(${paths(regional_data["GWAS"].keys()):r,})
gwas_files = c(${paths([v[_filtered_region_info[0]] for k, v in regional_data["GWAS"].items()]):r,})
gene_list <- c(${', '.join([f"'{gene}'" for gene in _filtered_region_info[4]])})
# Process weights
twas_weights_results <- list()
weight_db_list <- c(${_input:r,})
names(weight_db_list) <- gene_list
weight_db_list <- split(weight_db_list, names(weight_db_list), c)
# remove weight db files that only contain messages: "No SNPs found in the specified region ..."
weight_db_list_update <- Filter(Negate(is.null), lapply(weight_db_list, function(file_list){do.call(c,lapply(file_list, function(file) if (file.size(file) > 200) file))}))
if(length(weight_db_list_update) <= 0) stop(paste0("No weight information available from ", paste0(weight_db_list, collapse=","), ". "))
# Step 1: Load TWAS weights and generate twas weight db, output export_twas_weights_db
for (gene_db in names(weight_db_list_update)) {
weight_dbs <- weight_db_list_update[[gene_db]]
twas_weights_results[[gene_db]] = load_twas_weights(weight_dbs, variable_name_obj="variant_names",
susie_obj="susie_weights_intermediate",
twas_weights_table = "twas_weights")
twas_weights_results[[gene_db]]$data_type <- setNames(lapply(names(twas_weights_results[[gene_db]]$weights), function(context) xqtl_type_table$type[sapply(xqtl_type_table$context,
function(x) grepl(x, context))]), names(twas_weights_results[[gene_db]]$weights))
}
# Step 2: twas analysis for imputable genes across contexts
twas_results_db <- twas_pipeline(twas_weights_results, "${ld_meta_data}", "${gwas_meta_data}", region_block="${_filtered_region_info[3]}",
min_rsq_threshold = ${rsq_threshold}, p_val_cutoff = ${p_value_cutoff}, mr_pval_cutoff = ${mr_pval_cutoff},
output_twas_data = ${"TRUE" if save_ctwas_data else "FALSE"})
twas_results_db$twas_result <- merge( twas_results_db$twas_result, xqtl_meta_df[, c("region_id","TSS","start","end")], by.x="molecular_id", by.y="region_id")
fwrite(twas_results_db$twas_result[, c(2,1,14:16,3:13)], file = ${_output[0]:r}, sep = "\t", compress = "gzip")
# Step 3: reformat for follow up cTWAS analysis
if (${"TRUE" if save_ctwas_data else "FALSE"}) {
saveRDS(twas_results_db$twas_data, "${_output[0]:nnn}.twas_data.rds", compress='xz')
}
if (${"TRUE" if save_mr_result else "FALSE"}) {
fwrite(twas_results_db$mr_result, file = "${_output[0]:nnn}.mr_result.tsv.gz", sep = "\t", compress = "gzip")
}
[ctwas]
depends: sos_variable("filtered_region_info")
parameter: twas_weight_cutoff = 1e-5
parameter: max_num_variants=2000
region_blocks = ', '.join([f"'{region_info[3]}'" for region_info in filtered_region_info])
twas_output_files = [f'{cwd:a}/twas/{name}.{region_info[3]}.twas_data.rds' for region_info in filtered_region_info]
input: twas_output_files, group_by = "all"
output: f"{cwd:a}/{step_name}/{name}.ctwas_results.rds"
task: trunk_workers = 1, trunk_size = job_size, walltime = walltime, mem = mem, cores = numThreads, tags = f'{step_name}_{_output:bn}'
R: expand = '${ }', stdout = f"{_output:n}.stdout", stderr = f"{_output:n}.stderr", container = container, entrypoint = entrypoint
library(data.table)
library(ctwas) # multigroup_ctwas
library(pecotmr)
# load genome-wide data across all regions
regions_data <- get_ctwas_meta_data("${ld_meta_data}", "${regions}", "${xqtl_meta_data}")
gwas_studies <- unique(fread("${gwas_meta_data}",data.table=FALSE, select = "study_id"))[,1]
data <- lapply(c(${_input:r,}), readRDS)
names(data) <- c(${region_blocks})
# compile z_snp, z_gene, and snp_info across multiple regions
snp_info <- do.call(c, lapply(unname(data), function(x) x$snp_info))
snp_info <- snp_info[!duplicated(names(snp_info))]
z_gene_dat <- do.call(c, lapply(unname(data), function(x)x$z_gene))
z_snp_dat <- do.call(c, lapply(unname(data), function(x)x$z_snp))
z_gene <- setNames(lapply(gwas_studies, function(study) do.call(rbind, z_gene_dat[names(z_gene_dat) == study])), gwas_studies)
z_gene <- lapply(z_gene, function(x) x[!is.na(x$z),])
z_snp <- setNames(lapply(gwas_studies, function(study) {
df <- do.call(rbind, z_snp_dat[names(z_snp_dat) == study])
df[!duplicated(df$id), ]
}), gwas_studies)
# select weight variants based on minimum TWAS weights, CS, and PIP
weights <- do.call(c, lapply(names(data), function(region_block){
trim_ctwas_variants(data[[region_block]], twas_weight_cutoff=${twas_weight_cutoff}, cs_min_cor=0.8,
min_pip_cutoff=0.1, max_num_variants=${max_num_variants})
}))
# ctwas main function for all studies
ctwas_res <- do.call(c, lapply(gwas_studies, function(study){
st <- proc.time()
res <- ctwas_sumstats(z_snp[[study]],
weights,
region_info=regions_data$region_info,
LD_map=regions_data$LD_info,
snp_map=snp_info,
z_gene = z_gene[[study]],
thin = 0.1,
ncore = ${numThreads},
outputdir = ${_output[0]:dr},
outname = "${name}",
logfile = file.path(${_output[0]:dr}, paste0("${name}", ".ctwas_sumstats.log")),
verbose = FALSE,
cor_dir = NULL,
save_cor = FALSE,
group_prior_var_structure = "shared_nonSNP",
LD_format="custom",
LD_loader_fun = ctwas_ld_loader,
snpinfo_loader_fun = ctwas_bimfile_loader)
res$total_time_elapsed <- proc.time() - st
return(res)
}))
names(ctwas_res) <- gwas_studies
saveRDS(ctwas_res, ${_output:r})