Category: Bioinformatics

Finding Files in My Folders

Managing disk space efficiently is essential, especially when working on systems with strict file quotas. Recently, I encountered a situation where I had exceeded my file limit and needed a quick way to determine which folders contained the most files. To analyze my storage usage, I used the following command:

for d in .* *; do [ -d "$d" ] && [ "$d" != "." ] && [ "$d" != ".." ] && echo "$d: $(find "$d" -type f | wc -l)"; done | sort -nr -k2

Breaking Down the Command

This one-liner efficiently counts files across all directories in the current location, including hidden ones. Here’s how it works:

  • for d in .* * – Loops through all files and directories, including hidden ones.
  • [ -d "$d" ] – Ensures that only directories are processed; the two extra tests skip the special entries "." and "..".
  • find "$d" -type f | wc -l – Counts all files (not directories) inside each folder, including subdirectories.
  • sort -nr -k2 – Sorts the results in descending order based on the number of files.

Why This is Useful

With this command, I quickly identified the directories consuming the most inodes and was able to take action, such as cleaning up unnecessary files. It’s an efficient method for understanding file distribution and managing storage limits effectively.

Alternative Approaches

If you only want to count files directly inside each folder (without subdirectories), you can modify the command like this:

for d in .* *; do [ -d "$d" ] && [ "$d" != "." ] && [ "$d" != ".." ] && echo "$d: $(find "$d" -maxdepth 1 -type f | wc -l)"; done | sort -nr -k2

This variation is useful when you need a more localized view of file distribution.
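
As a side note, if GNU coreutils (version 8.22 or newer, which is an assumption about your system) is available, du can report inode usage directly and serves as a quick cross-check. Unlike the loop above, it counts directories themselves as inodes, not only files:

# Inode usage per immediate subdirectory, largest first (requires GNU du)
du --inodes -d 1 . | sort -nr | head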

Introducing the Fluidigm R Package

Our Fluidigm R package was just released on CRAN. The package is designed to streamline the process of analyzing genotyping data from Fluidigm machines. It offers a suite of tools for data handling and analysis, making it easier for researchers to work with their data. Here are the key functions provided by the package:

  1. fluidigm2PLINK(...): Converts Fluidigm data to the format used by PLINK, creating a ped/map-file pair from the CSV output received from the Fluidigm machine.
  2. estimateErrors(...): Estimates errors in the genotyping data.
  3. calculatePairwiseSimilarities(...): Calculates pairwise similarities between samples.
  4. getPairwiseSimilarityLoci(...): Determines pairwise similarity loci.
  5. similarityMatrix(...): Generates a similarity matrix.

Users can choose to run these functions individually or execute them all at once using the convenient fluidigmAnalysisWrapper(...) wrapper function.
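
Since the package is on CRAN, installation should be straightforward. A minimal sketch, run from the shell and assuming that R is on your PATH and that the package name on CRAN is fluidigm, could look like this:

# Install the package from CRAN and load it to check that it works
Rscript -e 'install.packages("fluidigm", repos = "https://cloud.r-project.org"); library(fluidigm)'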

Finding the Closest Variants to Specific Genomic Locations

In the field of genomics, we often need to find the closest variants (e.g., SNPs, indels) to a set of genomic locations of interest. This task can be accomplished using various bioinformatics tools such as bedtools. In this blog post, we will walk through a step-by-step guide on how to achieve this.

Prerequisites

Before we start, make sure you have the following files:

  1. A BED file with your locations of interest. In this example, we’ll use locations_of_interest.bed
  2. A VCF file with your variants. In this example, we’ll use FinalSetVariants_referenceGenome.vcf

Step 1: Sorting the VCF File

The first issue we encountered was that the VCF file was not sorted. bedtools requires its input files to be sorted by chromosome (lexicographically) and by position (numerically). We can sort the VCF file using the following command:

(grep '^#' FinalSetVariants_referenceGenome.vcf; grep -v '^#' FinalSetVariants_referenceGenome.vcf | sort -k1,1 -k2,2n) > sorted_FinalSetVariants_referenceGenome.vcf

This command separates the header lines (those starting with #) from the data lines, sorts the data lines, and then concatenates the header and sorted data into a new file sorted_FinalSetVariants_referenceGenome.vcf.
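
bedtools expects the BED file with the locations of interest to follow the same sort order. If locations_of_interest.bed is not sorted yet, the same approach works for it (the commands below assume it already is, since they keep using the original file name):

# Sort the BED file by chromosome (lexicographically) and start position (numerically)
sort -k1,1 -k2,2n locations_of_interest.bed > sorted_locations_of_interest.bed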

Step 2: Converting VCF to BED and Finding the Closest Variants

The next step is to find the closest variants to our locations of interest. However, by default, bedtools closest outputs the entire VCF entry, which might be more information than we need. To limit the output, we can convert the VCF file to a BED format on-the-fly and assign an additional feature, the marker name, as chr_bpLocation (which is the convention we use for naming our markers). We can also add the -d option to get the distance between the location of interest and the closest variant. Here is the command:

awk 'BEGIN {OFS="\t"} {if (!/^#/) {print $1,$2-1,$2,$4"/"$5,"+",$1"_"$2}}' sorted_FinalSetVariants_referenceGenome.vcf | bedtools closest -a locations_of_interest.bed -b stdin -d

This command uses awk to read the VCF data, convert it to BED format, and write the result to the standard output. The pipe (|) then feeds this output directly into bedtools closest as the -b file. The keyword stdin is used to tell bedtools to read from the standard input.
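
To illustrate what the awk conversion produces, here is a made-up VCF data line together with the BED-like line the command above would generate from it (coordinates are purely illustrative):

# Hypothetical VCF data line (columns: CHROM  POS  ID  REF  ALT  ...):
#   chr1    12345    .    A    G
# Resulting BED-like line (chrom, 0-based start, end, REF/ALT, placeholder, marker name):
#   chr1    12344    12345    A/G    +    chr1_12345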

Conclusion

With these two steps, we can efficiently find the closest variants to a set of genomic locations of interest. This approach is flexible and can be adapted to different datasets and requirements.

Couldn’t delete a failed R package installation

Today I ran into a strange problem while trying to install an update for one of my R packages. As usual, I installed the latest version of it like this:

library("devtools")
install_github("fischuu/GenomicTools")

But the installation failed, and when I tried to reinstall it, I got this error message:

Installing package into ‘/homeappl/home/<user>/R/x86_64-redhat-linux-gnu-library/4.1’
(as ‘lib’ is unspecified)
ERROR: failed to lock directory ‘/homeappl/home/<user>/R/x86_64-redhat-linux-gnu-library/4.1’ for modifying
Try removing ‘/homeappl/home/<user>/R/x86_64-redhat-linux-gnu-library/4.1/00LOCK-GenomicTools’
Warning message:
In i.p(...) :
  installation of package ‘/tmp/Rtmp<...>Lv/file3c36<...>34/GenomicTools_0.2.11.tar.gz’ had non-zero exit status

So I went into said directory and tried to delete the folder manually, but there I received another error:

rm: cannot remove '00LOCK-GenomicTools/GenomicTools/libs/.nfs00000001002e2<...>d': Device or resource busy

I tried this and that, but nothing helped to delete the folder; it kept telling me that the device was busy. Eventually, simply renaming the folder did the trick:

mv 00LOCK-GenomicTools/ 00LOCK-GenomicTools-deleteThis/

The renamed folder is still hanging around, but I was able to reinstall the R package. I will revisit the folder in a few days and check whether the device is still busy or whether I can finally delete it…
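
For the record, such a busy .nfs file is usually NFS keeping a deleted file alive while some process still has it open. Assuming lsof or fuser is installed on the system, something along these lines should reveal the process that is holding on to it:

# List processes that still have files open under the renamed lock directory
lsof +D 00LOCK-GenomicTools-deleteThis/
# Alternatively, ask fuser about the stubborn .nfs file directly
fuser -v 00LOCK-GenomicTools-deleteThis/GenomicTools/libs/.nfs*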

Take a random sample of size k from paired-end FASTQ

Today I wrote a bash script that creates a random subset of a paired-end FASTQ file pair. It takes the names of the two FASTQ files as input, together with the number of read pairs the sample should contain (or, alternatively, a fraction between 0 and 1).

The script is mainly based on this blog post. It is rather rough code and could be more user-friendly and allow for more options, but in its current form it does what I need it to do.

#!/bin/bash

round() {
    printf "%.2f" "$1"
}

file1=$1    # first mate FASTQ file
file2=$2    # second mate FASTQ file
sample=$3   # number of read pairs to sample, or a fraction <= 1
# Input test
if ! [[ $sample =~ ^-?[0-9]+([.][0-9]+)?$ ]]; then
  >&2 echo "$sample is not a number"; exit 1
fi

extension1="${file1##*.}"
extension2="${file2##*.}"
filename1="${file1%.*}"
filename2="${file2%.*}"

fn1=$filename1"_"$sample".fastq"
fn2=$filename2"_"$sample".fastq"

# If the input files are gzipped, decompress them first and adjust the
# derived file names accordingly
if [ $extension1 == "gz" ]; then
  gunzip $file1;
  file1=$filename1;
  filename1="${file1%.*}"
  fn1=$filename1"_"$sample".fastq"
fi
if [ $extension2 == "gz" ]; then
  gunzip $file2;
  file2=$filename2;
  filename2="${file2%.*}"
  fn2=$filename2"_"$sample".fastq"
fi

# Total number of lines in the first FASTQ file (each read uses 4 lines)
lines=$(wc -l < $file1)
echo $lines
echo $sample

# If the sample size was given as a fraction (<= 1), convert it into an
# absolute number of read pairs (4 lines per read pair in each file)
if (( $(awk -v s="$sample" 'BEGIN {print (s <= 1)}') )); then
  sample=$(awk -v s="$sample" -v l="$lines" 'BEGIN {printf("%.0f", s * l / 4)}')
fi

echo $sample

# Merge the two FASTQ files line by line, collapse each read pair (4 lines per
# file) into one tab-separated record, draw a reservoir sample of k records
# (the srand() seeding via PROCINFO assumes gawk), and finally split the
# fields back into the two output files
paste $file1 $file2 | \
awk '{ printf("%s",$0); n++; if(n%4==0) { printf("\n");} else { printf("\t");} }' | \
awk -v k=$sample 'BEGIN{srand(systime() + PROCINFO["pid"]); }{ s=x++<k?x-1:int(rand()*x);
                  if(s<k)R[s]=$0}END{for(i in R)print R[i]}' | \
awk -F"\t" -v file1=$fn1 -v file2=$fn2 '{print $1"\n"$3"\n"$5"\n"$7 > file1;\
                                         print $2"\n"$4"\n"$6"\n"$8 > file2}'

# Re-compress the outputs (and the original inputs) if they were gzipped
if [ $extension1 == "gz" ]; then
  gzip $fn1;
  gzip $file1;
fi
if [ $extension2 == "gz" ]; then
  gzip $fn2;
  gzip $file2;
fi
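
Assuming the script were saved as, say, subsampleFastq.sh (the file name is made up) and made executable, calling it could look like this; the FASTQ file names are placeholders as well:

# Sample 100000 read pairs from a gzipped FASTQ pair
./subsampleFastq.sh reads_R1.fastq.gz reads_R2.fastq.gz 100000
# Or take roughly 10% of all read pairs
./subsampleFastq.sh reads_R1.fastq reads_R2.fastq 0.1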