Haplotype networks in R

Haplotype networks are pretty useful for genomic analyses, especially when you’re looking at phylogeography (Shannon et al. 2015) or at a region of special interest (Huerta-Sánchez et al. 2014). A quick Google search will turn up a number of blog posts that tell you how to use pegas to create a haplotype network, but many of them use the provided wood mouse dataset, which is already nicely formatted. If you’re using whole genome data (e.g. Illumina), it turns out to be a little more complicated than just importing a fasta file. In this post, I’ll show you how to get from a whole genome VCF to a haplotype network.

Pegas requires a fasta file (there are other input methods, but I found fasta to be the most straightforward) containing your locus of interest (e.g. the mitochondria), and it requires all individuals to have the same sequence length. The data I’m starting with is a whole-genome, all-sites VCF file. In order to extract the mitochondria, I wrote a python script available here. To use it, you supply a VCF file (gzipped or not is fine), an output filename, and a chromosome of interest:

python vcf2fasta.py -v individuals.vcf.gz -g -o mitochondria.fasta -c mitochondria

This script will go through the VCF file and append the appropriate reference or alternate bases for each individual and output a valid fasta file.

In theory, we should be able to plug it into pegas now: it’s a fasta file with all individuals, and each individual should have the same sequence length in the mitochondria, especially since everything has already been aligned to the reference. In practice, however, this is not the case. Some individuals will be missing bases, and sometimes the reference has weird issues where sites show up twice, etc. To fix this, I use Clustal to align the fasta file again. Either Clustal Omega or ClustalW should work, but Clustal Omega is supposed to be faster. After running Clustal, you should end up with a fasta file where every individual has the same number of bases. Once this is done, we can finally use pegas to build a network.
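For reference, the realignment step with Clustal Omega looks something like this (the file names follow the example above; check the flags against your installed version):

clustalo -i mitochondria.fasta -o mitochondria_aligned.fasta --outfmt=fasta

The realigned fasta is then what gets read into R below.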

input <- "MChloroplast.fasta"
d <- ape::read.dna(input, format='fasta')
e <- dist.dna(d)
h <- pegas::haplotype(d)
h <- sort(h, what = "label")
(net <- pegas::haploNet(h))
ind.hap<-with(
stack(setNames(attr(h, "index"), rownames(h))),
table(hap=ind, pop=rownames(d)[values])
)
plot(net, size=attr(net, "freq"), scale.ratio=0.2, pie=ind.hap)
legend(-8, 0, colnames(ind.hap), col=rainbow(ncol(ind.hap)), pch=19, ncol=2)

If everything goes right, you should get something like this:

[Figure: the resulting haplotype network]


The effect of excess reads and uneven coverage on viral sequence assembly

tl;dr: it makes it harder. Use IDBA-UD for assembly.

Code: https://github.com/arundurvasula/coverage-depth-assembly

One of the coolest results of RNA biology has been the ability to reconstruct full viral genomes from the sequencing of siRNAs found in eukaryotes. The Wu et al. 2010 paper in PNAS (http://www.pnas.org/content/107/4/1606.full.pdf) demonstrated that this was possible in fruit flies, mosquitoes, and nematodes. One of the things I’ve been working on has been applying this to virus discovery in crop plants. This is a cool method, but one of the problems with it is that most assemblers weren’t built with this kind of data in mind. That is to say, most assemblers expect a sample with relatively even coverage across a genome.

When we try and sequence the siRNAs, we get massive amounts of coverage in some areas and really low levels of coverage in other areas. This makes it difficult for most assemblers to reconstruct a full genome. In order to get around this, one of the things I tried was subsampling and normalizing my reads. The idea here was to even out the coverage to a lower level so that our data is more in line with what assemblers expect.

However, this didn’t go at all like I expected. What I did in the code above was subsample or normalize my reads down to a specific coverage level and map the reads to a virus that I know is in the sample. Below you can see what happened:

[Figure: coverage after subsampling to 50x]

[Figure: coverage after normalization to 50x]

[Figure: coverage after subsampling to 5x]

[Figure: coverage after normalization to 5x]

As you can see from the graphs, normalization produces a much smoother coverage profile, but it doesn’t actually even out the coverage, so it doesn’t help assembly. Subsampling performs even worse and makes coverage look pretty terrible (especially at the 5x level). The code to do this is located in the repository under normalization.sh and subsampling.sh; normalization was done with bbmap and subsampling was done with bioawk.
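The scripts in the repository are the authoritative versions, but the commands look roughly like this (file names and the sampling fraction are placeholders):

# normalize to a target coverage of 50x with BBNorm (part of the bbmap package)
bbnorm.sh in=reads.fastq out=reads.norm50.fastq target=50
# randomly subsample reads with bioawk; keep ~10% of reads, adjusting the fraction to hit the coverage you want
bioawk -c fastx 'BEGIN{srand()} rand() < 0.1 {print "@"$name; print $seq; print "+"; print $qual}' reads.fastq > reads.sub.fastq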

As it turns out, the best way to deal with this is to use an assembler that can handle uneven coverage, and that’s where IDBA-UD comes in. This assembler is awesome and was designed with uneven coverage in mind. From the homepage:

IDBA-UD is a iterative De Bruijn Graph De Novo Assembler for Short Reads Sequencing data with Highly Uneven Sequencing Depth. It is an extension of IDBA algorithm. IDBA-UD also iterates from small k to a large k. In each iteration, short and low-depth contigs are removed iteratively with cutoff threshold from low to high to reduce the errors in low-depth and high-depth regions. Paired-end reads are aligned to contigs and assembled locally to generate some missing k-mers in low-depth regions. With these technologies, IDBA-UD can iterate k value of de Bruijn graph to a very large value with less gaps and less branches to form long contigs in both low-depth and high-depth regions.

This assembler has performed much better than most of the alternatives (Velvet, ABySS, CLCBio, SOAP, etc.) and I’d highly recommend it if you’ve got uneven coverage in your samples.
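For reference, a typical IDBA-UD run looks something like this (it wants FASTA input, so paired fastq files are first merged with the bundled fq2fa tool; file names and thread count are placeholders):

fq2fa --merge reads_1.fastq reads_2.fastq reads.fa
idba_ud -r reads.fa -o idba_out --num_threads 8

The assembled contigs should end up in something like idba_out/contig.fa.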

Interpreting Tajima’s D

Introduction

Tajima’s D is a statistic that compares the average number of pairwise differences with the number of segregating sites. It’s an important statistic that is widely used in population genetics. However, it must be carefully analyzed because population demography changes how Tajima’s D can be interpreted. In this post, we will look at why Tajima’s D works and how demography (especially population bottlenecks) can cloud signals of Tajima’s D.

Figure 1. A population bottleneck with a new rare allele (orange) arising after the bottleneck.

First, let’s look at the two major components of Tajima’s D. \hat{\theta}_T is the average number of pairwise differences (Tajima’s estimator, or \pi) and is given by:

\hat{\theta}_T=\frac{\sum\limits_{i<j} d_{ij}}{n(n-1)/2}

where d_{ij} is the number of differences between two sequences, i and j, and n is the number of sequences in the sample. \hat{\theta}_W is Watterson’s estimator, based on the number of segregating sites S:

\hat{\theta}_W=\frac{S}{\sum\limits_{i=1}^{n-1} 1/i}

where S is the number of sites that segregate in the sample (that is, the number of sites that are variable). Now that we have these definitions, we can take a look at their differences. Under neutrality, the expectation of both estimators is \theta, the population-scaled mutation rate. In a real sample, though, the two estimates can differ, because \hat{\theta}_T gives very little weight to mutations that are rare in the population. Comparing the two estimates of \theta therefore gives us a test statistic, Tajima’s D:

D=\frac{\hat{\theta}_T - \hat{\theta}_W}{\sqrt{\hat{V}(\hat{\theta}_T - \hat{\theta}_W)}}

Rare variants contribute little to \hat{\theta}_T

In the examples below, sequences are represented as dashes and asterisks. The sequences are aligned: a dash is a site that is the same across sequences, and an asterisk represents a mutation.

Given some sequences, what are the values for \hat{\theta}_T and \hat{\theta}_W?

Example 1

---*---*------
-------*---*--
-------*------
-----------*--

\hat{\theta}_T=\frac{2+1+3+1+1+2}{6}=1.67 and \hat{\theta}_W=\frac{3}{1+\frac{1}{2}+\frac{1}{3}}=1.64

So in this case, \hat{\theta}_T gives us a similar estimate compared to \hat{\theta}_W (thanks to Marie-Julie Favé and her students for pointing out an error here). However, this will change once we introduce more rare mutations. The numerator in \hat{\theta}_T is the number of differences summed over every pair of sequences. For example, if you compare the first 2 sequences, there are 2 differences. The first and third have 1 difference, etc.

Example 2

-*------------
----*---------
-------*------
-----------*--

\hat{\theta}_T=\frac{2+2+2+2+2+2}{6}=2 and \hat{\theta}_W=\frac{4}{1+\frac{1}{2}+\frac{1}{3}}=2.2
Here, because each mutation is a rare variant, the estimate of \theta is lower in \hat{\theta}_T than in \hat{\theta}_W. This effect is not very pronounced in this example because n is so small, but if we increase n to 100 and make every mutation a rare one (i.e. at a frequency of 0.01), we can see the effect more clearly.

Example 3

-*--------------------------------------------------------------------------------------------------
--*-------------------------------------------------------------------------------------------------
---*------------------------------------------------------------------------------------------------
----*-----------------------------------------------------------------------------------------------
...
--------------------------------------------------------------------------------------------------*-

\hat{\theta}_T=\frac{2 \times 4950}{4950}=2 (each of the 100 singleton mutations creates a difference in 99 of the 100 \times 99/2 = 4950 pairs, so the numerator is 9900) and \hat{\theta}_W=\frac{100}{1+\frac{1}{2}+...+\frac{1}{99}}=19.31
Woah that’s a huge difference! Because of this difference, we can compare the two estimators and learn something about what kind of variants we have in the population.

Tajima’s D

Tajima’s D is the comparison between the average number of pairwise differences and the number of segregating sites in a sample. We expect positive selection (or selective sweeps) to give us a negative Tajima’s D in a population that doesn’t have any demographic changes going on (population expansion/contraction, migration, etc). This is because \hat{\theta}_W will be greater than \hat{\theta}_T. Why? Because after a selective sweep, most of the haplotypes in a population will be the same. Therefore, when mutations occur they will be rare. When you have a lot of rare mutations, \hat{\theta}_T underestimates \theta compared to \hat{\theta}_W and you get a negative Tajima’s D.

In the case of balancing selection, alleles are kept at intermediate frequencies. This produces a positive Tajima’s D because intermediate-frequency alleles inflate the average number of pairwise differences relative to the number of segregating sites.

The effect of bottlenecks

Under demographic scenarios, Tajima’s D can exhibit signals that look like selection even in neutral simulations. The figures below show how Tajima’s D behaves after a bottleneck. The effect of population expansion (recovery) on Tajima’s D is staggering. As soon as the population expands, Tajima’s D drops to about -0.45. It continues to drop and then begins to recover, but stays negative. This effect is common even when the bottleneck size changes. First, here is what happens to Tajima’s D after a bottleneck when the timing of the recovery from the bottleneck event varies (10% of the population survives the bottleneck):

Figure 2. Tajima's D is negative after a bottleneck (10% of the population survives).

Why does this happen? A bottleneck removes rare alleles through sampling, and alleles that were at intermediate frequencies can end up rare; as the population recovers, new mutations also enter at low frequencies. On average, there will be more rare variants than intermediate variants and Tajima’s D will be negative. This makes it hard to distinguish signals of selection from Tajima’s D when a population has gone through a bottleneck: a neutral population with no selection will look the same as a population that has recently undergone a selective sweep. Below is a stronger bottleneck (1% of the population survives):

Figure 3. A stronger bottleneck (1% of the population survives).

and here is a weaker bottleneck (20% of population survives):

Figure 4. A weaker bottleneck (20% of the population survives).

In all cases, Tajima’s D drops to negative values after a bottleneck. The 1% bottleneck has a much different shape than either of the other bottlenecks. This is because when such an extreme bottleneck occurs, every allele will be rare and Tajima’s D will start out negative. It recovers from the bottleneck in a similar way towards 0. However, in all of these simulations, Tajima’s D never converges to 0 (note that these are neutral simulations). This is because when we have a finite number of samples, the resting point of Tajima’s D is not 0, but a small negative number close to 0.

As we can see from these simulations, Tajima’s D is a complicated statistic and care must be taken when using it to analyze populations. For more on the effect of bottlenecks on populations, check out this presentation: http://www.slideshare.net/jrossibarra/bottlenecks-some-ramblings-and-a-bit-of-data-from-maize

Acknowledgements: Thanks to Jeff Ross-Ibarra for discussions and help with the simulations.

R code for simulations:

options(scipen=999)
library(ggplot2)
sims = 1000000
bneck_end = runif(sims, 0, 2)      # end of the bottleneck comes first, backward in time
bneck_start = bneck_end + 0.005    # start comes second, so it is the bigger number
tbsfile <- cbind(bneck_end, bneck_start)
write.table(file="tbsfile", tbsfile, col.names=F, row.names=F, quote=F)
# run with recombination and theta = 10, i.e. N ~ 80,000
# the bottleneck then lasts 0.005 * 4N = 0.005 * 320,000 = 1,600 generations
f = pipe(paste("~/Downloads/msdir/ms 10 ", sims, " -t 10 -r 10 1000 -eN tbs 0.1 -eN tbs 1 < tbsfile | ~/Downloads/msdir/sample_stats | cut -f 6", sep=""))
taj_d = scan(f)
qplot(bneck_end, taj_d, geom="smooth", xlab="Bottleneck end (4N generations)", ylab="Tajima's D", main="Effect of recovery time on Tajima's D")

PCA with ANGSD and ngsCovar

As a follow up to the last post (PCA with samtools and plink), I’m going to go over how to do PCA using genotype likelihoods from ANGSD and ngsCovar. This method isn’t integrated into angsd-wrapper yet, but we are looking to add it.

The documentation is a little light for ngsCovar, but there is a tutorial here (shout out to Peter Fields for linking me to that).

PCA with ANGSD/ngsCovar is pretty straightforward once you know which files you need to generate. ngsCovar takes in a binary genotype likelihood file (.geno). This file is distinct from the .glf files that are also produced by ANGSD. To generate the .geno file, you need to supply these options:

angsd/angsd -bam data/all_samples.txt -GL 1 -out results/all.test -doMaf 2 -doMajorMinor 1 -doGeno 32 -doPost 1 -nind 21 -P 8 -r 1:

Specifically, you need -doGeno 32 and -doPost 1. These should generate a .geno.gz file. Once you have this, you need to unzip your .geno.gz file (ngsCovar won’t accept your zipped .geno file):

gunzip all.test.geno.gz

Now we can run ngsCovar and supply the genotype file. The other options will depend on your data, but the command should look something like this:

ngsPopGen/ngsCovar -probfile results/all.test.geno -outfile pop.covar -nind 21 -nsites 100000 -call 0
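The -nsites value is the number of sites in the genotype file (or however many sites you want ngsCovar to read). Since the ANGSD command above also writes a .mafs.gz file with one header line plus one line per site, one rough way to count the sites is:

NSITES=$(($(zcat results/all.test.mafs.gz | wc -l) - 1))
echo $NSITES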

Then, following the graphing part of the tutorial we can get a nice graph of our population structure. In order to get what we need from the graph, we have to change the script a little bit:

Rscript -e 'write.table(cbind(seq(1,21),rep(1,21),c(rep("Indica",12),rep("Allopatric",4),rep("Sympatric",4),rep("BC1", 1))), row.names=F, sep=" ", col.names=c("FID","IID","CLUSTER"), file="test.pops.clst", quote=F)'

Specifically, we have to change the part that labels the samples to match what our samples actually are: c(rep("Indica",12),rep("Allopatric",4),rep("Sympatric",4),rep("BC1", 1)) (and the FID/IID columns need one row per individual, 21 in this case).

Then we can run the plotPCA.R script provided just like in the example:

Rscript scripts/plotPCA.R -i pop.covar -c 1-2 -a test.pops.clst -o test.pca.pdf

And then you get a pretty graph that looks something like this:

[Figure: PCA of the samples from ngsCovar]

PS: Following up from last time, we have included an individual that was supposed to be a backcross (BC1). This individual clusters very closely with the allopatric and sympatric populations, and the same sample as before (og276) is an outlier. This suggests 1) ANGSD/ngsCovar and samtools/plink give pretty similar results (at least for this data) and 2) we have switched labels! The outlier should be the BC1 individual and the BC1 individual should be part of the sympatric population.

Population clustering with samtools SNP calling and Plink MDS

I was looking for a guide on how to do population clustering using samtools and plink but couldn’t find a concise description of exactly what I needed to do. So, after some hacking around, I figured it out and decided to write it down. In order to do MDS on bam files, you need to go through several steps. The general idea is:

bwa mem ref.fasta sample[0-9].fastq | samtools view -bS - | samtools sort - sample[0-9]   # gives sorted sample[0-9].bam
samtools mpileup -uf ref.fasta sample[0-9].bam | bcftools view -bvcg - > samples.raw.bcf
bcftools view samples.raw.bcf > samples.raw.vcf
vcftools --vcf samples.raw.vcf --plink --out samples
plink --file samples --genome
plink --file samples --read-genome samples.genome --cluster --mds-plot 2

Then you can graph your result using R or whatever plotting software you fancy.

Mpileup: Mpileup will create a bcf file which is piped to bcftools, where the -c option will call SNPs. This file is written to disk and then converted to vcf files because vcftools doesn’t like the bcf output of samtools.

[Note: some bam samples removed to reduce complexity]

samtools mpileup -r 1: -uf students/Oryza_indica.ASM465v1.24.dna.genome.fa.gz students/og276.sorted.bam students/og278.sorted.bam | bcftools view -bvcg - > results/all.raw.bcf
bcftools view all.raw.bcf > all.raw.vcf

Convert vcf to ped: This step will take the SNP called vcf file and convert it to a format usable by plink (ped and map).

vcftools --vcf all.raw.vcf --plink --out all.raw

Make a .genome file: In order to do MDS, plink needs a .genome file of your samples. This can be created with something like this:

plink --file all.raw --genome --noweb --allow-no-sex --out all.raw

Do some multidimensional scaling:

plink --file all.raw --read-genome all.raw.genome --cluster --mds-plot 2 --noweb

Plot:
d <- read.table("results/plink.mds", h=T)
d$pop = factor(c(rep("indica", 12), rep("allopatric", 4), rep("sympatric", 4)))
plot(d$C1, d$C2, col=as.integer(d$pop), pch=19, xlab="PC 1", ylab="PC 2", main = "MDS of Oryza (samtools/plink)")
legend("topright", c("Indica", "Allopatric", "Sympatric"), pch=19, col=c(2,1,3))
text(d$C1, d$C2, labels=c(rep(NA, 18), "og276", NA), pos=1)

[Figure: MDS plot of the Oryza samples from samtools/plink]

If you’re wondering what’s going on with og276, I have no idea yet! It should cluster with the other sympatric samples, but it didn’t.

Searching for selection in the human genome with angsd-wrapper

The Nielsen lab has been working on a sweet program called ANGSD (Analysis of Next Generation Sequencing Data). It allows the calculation of many population genetic statistics from .bam files (along with a reference and ancestral genome). In order to make this program easier to use and to streamline analysis, I have been working on a set of wrapper scripts called angsd-wrapper. One feature of this wrapper that I’m excited about is the ability to graph the output using the awesome R package Shiny. Here, I’m going to run through the software using widely available data from the 1000 Genomes Project.

Download angsd-wrapper

$ git clone https://github.com/arundurvasula/angsd-wrapper.git 1000genomes

This code clones the angsd-wrapper repository into a folder called 1000genomes. There’s a readme in the repository, but a lot of the documentation is actually on the Github wiki page.

Download data:

Next, we can download some data into the data folder:

Bams:

$ wget ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/data/HG00096/alignment/HG00096.mapped.ILLUMINA.bwa.GBR.low_coverage.20130415.bam
$ wget ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/data/HG00097/alignment/HG00097.mapped.ILLUMINA.bwa.GBR.low_coverage.20130415.bam
$ wget ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/data/HG00099/alignment/HG00099.mapped.ILLUMINA.bwa.GBR.low_coverage.20130415.bam
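ANGSD will also want indexed bams, particularly when restricting to a region, so if you don’t grab the .bai files from the FTP site as well, index them yourself:

$ for b in *.bam; do samtools index $b; done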

Human reference:

$ wget ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/technical/reference/phase2_reference_assembly_sequence/hs37d5.fa.gz

Chimp reference:

$ wget http://hgdownload.soe.ucsc.edu/goldenPath/panTro4/bigZips/panTro4.fa.gz

We’re almost done grabbing the data. Now we just need to create a file that tells ANGSD where the bams are located:

EUR_samples.txt

/home/adurvasu/1000genomes/data/HG00096.mapped.ILLUMINA.bwa.GBR.low_coverage.20120522.bam
/home/adurvasu/1000genomes/data/HG00097.mapped.ILLUMINA.bwa.GBR.low_coverage.20130415.bam
/home/adurvasu/1000genomes/data/HG00099.mapped.ILLUMINA.bwa.GBR.low_coverage.20130415.bam

Cool! And then we create a file that holds the inbreeding coefficients, one per individual (assumed to be 0 for this case, but they really should be estimated):

EUR.F.txt

0
0
0

Now we’re really done.

Run the software

Okay, now we get to run the software! This is where angsd-wrapper attempts to make things easier. We eventually want to calculate theta and some neutrality test statistics. First, we need to create an SFS and then we can run the thetas part of ANGSD.

In order to do this, we modify the sfs_example.conf configuration file to point to our data and do a few other things:

sfs_example.conf

UNIX_USER="adurvasu"
PROJECT_DIR=/home/$UNIX_USER/1000genomes/
ANGSD_DIR=${PROJECT_DIR}/angsd

ANC_SEQ=./data/hs37d5.fa.gz
REF_SEQ=./data/hs37d5.fa.gz
TAXON=EUR
REGIONS="1:"
DO_SAF=1
OVERRIDE=false
N_CPUS=16

Regions is set to “1:” so that we only look at chromosome 1 for now. Also, the ancestral sequence points to the human reference because ANGSD did not like how the chimp and human genomes aligned. We can run this with the following line of code:

$ bash scripts/ANGSD_SFS.sh scripts/sfs_example.conf

Once that’s finished, we should have a nice new file called EUR_DerivedSFS in the results folder. This file contains the site frequency spectrum. Now that that’s over, we can run the theta calculation workflow. Similar to above, the thetas_example.conf file is modified:

thetas_example.conf

UNIX_USER="adurvasu"
PROJECT_DIR=/home/$UNIX_USER/1000genomes/
ANGSD_DIR=${PROJECT_DIR}/angsd

ANC_SEQ=./data/hs37d5.fa.gz
REF_SEQ=./data/hs37d5.fa.gz
TAXON=EUR
REGIONS="1:"
PEST=results/${TAXON}_DerivedSFS
OVERRIDE=true
SLIDING_WINDOW=true
WIN=1000
STEP=500
DO_SAF=1

This file is pretty similar to the one above, but there are some options for pointing to the SFS and for the sliding window. This can be run in the same way as the SFS calculation:

$ bash scripts/ANGSD_Thetas.sh scripts/thetas_example.conf

Cool! Now we get to look at the data using the interactive Shiny graph!

Download data locally and download GFF and view in shiny

If you are using a cluster to run ANGSD, you will need to download the data to your local computer because Shiny is easier to deal with using a GUI. The data can be scp’d over. Once you have it, you can navigate to the scripts folder in the terminal and run the Shiny application:

$ cd scripts
$ R
> library(shiny)
> runApp("shiny")

Then, load the data in the web app and wait for the graph (it’s a lot of data!). Hopefully, you get something like this:

[Figures: the interactive Shiny plots of the output]

At this zoom it doesn’t really tell us anything, but because the graph is interactive, we can restrict the viewing window to any region we want.

Later on we are going to localize our search to interesting regions of the genome.

Forward simulation pipelines

A lot of my work recently has focused on using forward simulations to test demographic models. Oftentimes, this requires thousands (or more) of simulations. Here, I’d like to talk about my approach to running these simulations and keeping track of everything.

We use a cluster with the Slurm job queue system and while each cluster has different quirks, the basic ideas are mostly the same. First, we use SLiM to simulate our populations. The basic pipeline is:

slim | msstats > sim.stats.$JOBID.txt

SLiM uses a configuration file to create its models. This complicates things a little bit. In order to maintain flexibility of our model (e.g. drawing the mutation rate from a distribution or from data and setting up complex population scenarios), I have created an R script that writes a configuration file for SLiM to read (it can be read here).

This R script has a few features. First, it records the job ID from the queueing system so that we can keep track of the file names. Next, each numerical value is set independently, allowing us to draw any of them from a distribution.

It also has a function to shape how the population grows over time. For example, in addition to choosing the initial and final population sizes, we can choose how it gets to that final population size, whether it is linear or exponential growth.

After the configuration file is written and saved to disk, slim is called from within R and the simulation starts. The simulation is piped to an awk script that prepares the output for msstats, and the output of msstats is saved with the same job ID.
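As a rough sketch of how one replicate gets wired together as a Slurm array task (the script names write_config.R and slim2ms.awk are placeholders, and in the actual pipeline the R script writes the configuration and launches slim itself):

#!/bin/bash
#SBATCH --job-name=slim_sims
#SBATCH --array=1-1000
JOBID=$SLURM_ARRAY_TASK_ID
# write a SLiM configuration for this replicate, run it, reshape the output for msstats, and save the summary
Rscript write_config.R $JOBID slim_config.$JOBID.txt
slim slim_config.$JOBID.txt | awk -f slim2ms.awk | msstats > sim.stats.$JOBID.txt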

This approach allows for flexible simulations and makes it easy to perform thousands of them. The pipeline is available on Github (extensive documentation pending): https://github.com/arundurvasula/domestication_sims

Data management software proposal

I’m thinking of writing software to solve a problem. Before that, I wanted to write a proposal and find out what other people think about it. If you have any comments, please leave them at: https://gist.github.com/arundurvasula/bc9ea4506270f555d1b3 or as a comment here.


tl;dr: data management software that logs access to files.

1. The problem:

Modern data analysis relies on many sophisticated tools that perform a wide range of calculations on data. While software continues to evolve along with methods, data management still remains a complicated problem.

First, data analysis involves a lot of trial and error. One method may work well on one dataset, but it may not work as well on another. By its nature, data analysis must be done many times to arrive at the best solution (if there is one). This process of trial and error, however, is costly in time and organization. While solutions exist to mitigate these problems (for example, software that runs other software for you), these solutions are not complete.

Specifically, organization is difficult because there is no obvious and systematic way to keep track of what has been done to data. For example, when assembling sequence data, many assemblers must be run with different options to find the optimal method and settings. While it’s possible to keep track of what one has done in a script, this approach will not capture any data analysis done outside of the script.

Second, over time, it becomes difficult to remember what analyses have been done on data. This can be addressed by appending descriptors to the filename (e.g. sample1.trimmed.qual.mapped.bam). However, this quickly becomes unwieldy and fails to capture exactly what has happened to the file (including program options).

2. The solution:

The project suggested here is a daemon that watches data directories using the inotify API. It records which processes and users read or modify data and stores this information in a hidden JSON log file in that directory. It will also support arbitrary metadata used to describe the data in the same JSON log file. For example, it can store information about how and when the data was collected. This metadata need not be present for all data, which provides flexibility in describing the data.
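As a very rough sketch of the core idea, the inotifywait tool from inotify-tools can already log file events to a hidden file (one JSON object per line); the proposed daemon would add the process and user information that inotify alone does not provide:

inotifywait -m -r -e access -e modify --timefmt '%Y-%m-%dT%H:%M:%S' --format '{"time": "%T", "file": "%w%f", "event": "%e"}' data/ >> data/.data_log.json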

Second, this project will provide a local webserver to access and modify the logs in a user friendly manner. Because the log format is standard json, we can build upon previous web applications to quickly build a web front end, similar to CouchDB’s Futon (http://docs.couchdb.org/en/1.6.1/intro/futon.html).

Using synthetic lethality for targeted cell death

A lethal mutation is one that spells certain death for an organism. These mutations can be induced by a researcher, or occur spontaneously between generations. Sometimes, a mutation isn’t lethal unless it is combined with another nonlethal mutation – a condition called synthetic lethality. This kind of interaction is important because it can cause a daughter cell to die unexpectedly. One example of this kind of interaction is a mutation in the BRCA1 or BRCA2 gene and a mutation in the Poly-ADP Ribose Polymerase (PARP) gene, leading to a loss of function. Independently, mutations in either one of these genes would not cause cell death. However, when both genes are mutated, a synthetic lethal interaction occurs, and the cells can die. Additionally, mutations do not have to occur for a synthetic lethal interaction to occur. If the action of PARP is inhibited (for example, by a small molecule), it is possible that the synthetic lethal result can still occur.

This type of interaction can be taken advantage of when cell death is desired, namely, cancer cell death. Scientists have observed that most cancer cells have mutations in cohesin related genes, which are genes that create proteins that oversee how chromosomes separate during cell replication.

The goal of a study by Jessica McLellan et al. was to see if PARP inhibitors could lead to cell death in cancerous cells with mutations in their cohesin-related genes. They approached this by looking at model organisms representative of human systems, namely yeast (Saccharomyces cerevisiae) and a nematode worm (Caenorhabditis elegans). These organisms have very similar replication systems, and as the authors hypothesized, would be representative of how human systems work. The authors looked for, and found, processes that were required for survival after cohesin mutations in yeast. They then checked for these same systems in C. elegans. They hypothesized that if these systems are conserved between the two species, they are probably conserved in humans as well. Using this information, the authors found that when cohesin is mutated, several proteins related to replication fork progression and stability were required. This led them to hypothesize that proteins not found in yeast but found in C. elegans would be required in other higher eukaryotes as well. These proteins included the aforementioned PARP. After testing human cells, the authors found that cells with cohesin mutations were less likely to survive when treated with a PARP inhibitor. This is significant because cancerous cells are the ones that likely have cohesin mutations, and these cells can be made less viable by treatment with PARP inhibitors.

Cancer treatments that target PARP inhibition are already on the way and are in phase II clinical trials. This research supports their effectiveness with evidence from model organisms that have very similar systems when compared with humans. While the exact mechanism that contributes to the lethality is not yet fully understood, this work shows that PARP inhibition is a viable strategy for tumors with cohesin mutations, which represent a significant percentage of colorectal, ovarian, and breast cancers.

 

References:

McLellan JL, O’Neil NJ, Barrett I, Ferree E, van Pel DM, et al. (2012) Synthetic Lethality of Cohesins with PARPs and Replication Fork Mediators. PLoS Genet 8(3): e1002574. doi:10.1371/journal.pgen.1002574

Remote IPython notebook with Raspberry Pi

It recently struck me that I’d really like to do some data analysis on my iPhone. This is pretty impractical because it’s a tiny screen and the keyboard often autocorrects and capitalizes things I don’t want it to, but I still wanted to do it because 1) I can, and 2) why not?

To do this, I connected my Raspberry Pi up to the router, created a static IP address, set my router to port forward to that IP address, installed IPython and its friends, and set up a remote notebook. Pretty simple stuff!

Static IP

To set up the static IP, I basically followed these directions.

Port forwarding

I have a Netgear router so I just logged in at routerlogin.net, went to port forwarding, and forwarded to my Raspberry Pi’s static IP address at port 9999.

Installing IPython and friends

Since I’m using Raspbian, it’s as easy as:

$ sudo apt-get -y install ipython-notebook
$ sudo apt-get -y install python-matplotlib python-scipy \
python-pandas python-sympy python-nose

(copied from the IPython install guide)

Setting up a remote notebook

Setting up a remote notebook involved following these instructions.

1. Set up a profile for the server:

$ ipython profile create nbserver

2. Create a password hash in python:

from IPython.lib import passwd
passwd()

3. Configure the notebook server (change stuff in /home/user/.ipython/profile_nbserver/ipython_notebook_config.py):

c = get_config()

# Kernel config
c.IPKernelApp.pylab = 'inline'

# Notebook config
c.NotebookApp.ip = '*'
c.NotebookApp.open_browser = False
c.NotebookApp.password = u'hash'
c.NotebookApp.port = 9999

4. Then to start the server:

$ ipython notebook --profile=nbserver

The directions are explained much better at the link. But now, I just point my iPhone’s browser at my IP address, and BOOM! I’m ready to be distracted in class.

I also added the line that starts up the notebook server to /etc/rc.local so that it starts up each time the Pi boots.
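A line along these lines works, placed before the final exit 0 in /etc/rc.local (the user and profile name are whatever you set up above):

su pi -c "ipython notebook --profile=nbserver" &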
Now scientific computing on a 700 MHz CPU is not ideal, so next I’m going to figure out how to create a cluster of Raspberry Pis, but that will likely wait until summer.