A review of parentage analysis

I recently had a review paper come out in Molecular Ecology called “The future of parentage analysis: from microsatellites to SNPs and beyond”. It is the fastest paper I’ve ever had published: from inception to acceptance took only about 6 or 7 months.

This review emerged from my desire to do a parentage analysis on one of my datasets (additional papers emerging from analytical roadblocks is a theme for some of my recent papers, as I describe in these previous posts). I found that, although I could run the traditional parentage analysis program CERVUS on subsets of my large single-nucleotide polymorphism (SNP) dataset, a number of new parentage programs might work better for SNP datasets — especially since CERVUS was designed for analyzing microsatellite data. Microsatellites are large genetic markers (usually several hundred base pairs) that typically have a large number of variants (called alleles), whereas a SNP is a single base pair that varies among individuals but commonly has only two alleles (i.e., it is biallelic).

To determine which analytical program I should use with my dataset, my PhD advisor Adam Jones and I read up on all of the programs we could find, and I generated some simulated data that I ran through all of them. In doing this, we realized that there were enough new programs that it might be time for an updated review of parentage analysis programs (Adam has published several reviews in the past, before SNP-based datasets began dominating the fields of evolution and molecular ecology). The simulated data helped us get a general sense of which programs are the easiest to run and how accurate they are — although we did not end up analyzing the simulation results closely enough to include them in the review itself.

Why might people care about parentage analysis methods? The basic concept is that if you sample offspring and potential parents and genotype them at some set of markers, you can deduce which offspring belong to which parents (assuming you’ve sampled robustly enough). Parentage analysis is central to understanding mating systems, guiding breeding programs for livestock, and tagging fishery-reared individuals in modern fisheries applications, among others. This is a straightforward concept, but it is made more challenging by sampling error (e.g., sampling siblings, which will lead to erroneous parentage calls), genotyping error, and the background allele frequency at each locus. Different parentage analysis programs deal with these issues in different ways, and therefore comparing the programs is an important task.
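To make the underlying logic concrete, here is a minimal sketch (in Python, with made-up genotypes) of strict exclusion-based parentage for biallelic SNPs: a candidate parent is ruled out if it shares no allele with the offspring at more than an allowed number of loci. This is a toy illustration only; real programs like CERVUS use likelihood-based approaches that account for allele frequencies and genotyping error.

```python
# Toy exclusion-based parentage check. Each genotype is a list of (allele, allele)
# tuples, one per biallelic SNP (alleles coded 0/1). All data are hypothetical.

def compatible(offspring, candidate, max_mismatches=1):
    """A candidate is a plausible parent if it shares at least one allele with
    the offspring at all but `max_mismatches` loci (a crude allowance for
    genotyping error)."""
    mismatches = sum(
        1 for off, cand in zip(offspring, candidate)
        if not set(off) & set(cand)       # no shared allele at this locus
    )
    return mismatches <= max_mismatches

offspring  = [(0, 1), (1, 1), (0, 0)]
candidate1 = [(1, 1), (0, 1), (0, 1)]     # shares an allele at every locus
candidate2 = [(0, 0), (0, 0), (1, 1)]     # incompatible at the last two loci

print(compatible(offspring, candidate1))  # True
print(compatible(offspring, candidate2))  # False
```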

The focus of our review paper ended up being on the role that SNPs are likely to play in the future of parentage analysis. We found that SNPs generated by certain methods that sample specific, small portions of the genome are highly accurate for parentage analysis. Encouragingly, the studies we reviewed all agreed that a small number of SNPs (~200, though it depends on the organism) is sufficient for assigning parentage — meaning that the best existing programs will run with the necessary datasets and that sequencing efforts can be dedicated to sampling more individuals rather than genotyping thousands of markers.

Now I am armed with the knowledge necessary to move forward with my analyses — and hopefully many others will be able to as well!


Looking back at 2018 and the transition from postdoc to faculty

In 2018 I:

  • got married
  • moved to New Zealand
  • started a job as a lecturer at University of Canterbury
  • had 2 papers published
  • had 1 book chapter accepted
  • submitted 3 papers and have had 1 accepted
  • submitted 3 grant proposals and received funding for 1
  • wrote 7 new lectures
  • gave 2 invited talks and 1 poster presentation
  • reviewed 7 papers and 1 grant proposal
  • supervised 1 undergraduate student
  • read 263 papers in depth (not just skimming them)
  • applied to 0 jobs (as opposed to the 60 I applied to in 2017)
  • drove 10,000 miles across the Great Basin in the US with my husband doing fieldwork
  • caught 9 Stigmatopora nigra pipefish in New Zealand

Throughout the year, I tracked the hours I spent on each task. These hours do not include time spent traveling to and from field sites or to and from work, and importantly do not include the time I spend casually chatting with colleagues over lunch, during unscheduled coffee breaks, at happy hours, or in the hallways. Most weeks I spend 40 hours or more actually in the office or doing work-related tasks (e.g., making coffee), but those untracked minutes add up to lost hours.

To re-set my priorities for 2019, I want to analyze the time I spent on various tasks. Over the year, I spent 72 days not doing any work tasks (these days include weekends). Considering that there are 104 weekend days during the year, this goes to show that I spend most of the year working. That being said, I worked an average of 1.47 hours on weekend days, and there were 41 days on which I worked less than 2 hours (but more than 0). I’ve plotted trends across the year in graphical form:

 

[Figure: hours worked per day across 2018]

My daily hours have a pretty wide variance (for days on which I spent some time doing work): 6.47, but this is mostly driven by the days when I worked a small number of hours (usually weekends). The median number of hours I spent working per day was 5.34. If I only consider workdays (Mon-Fri) that I did not take off, the median number of hours I worked was 5.98 — much closer to what I would have guessed. Some of you may still be shocked by this number (it’s not the 8-10 hours that academics are supposed to work!), but it makes perfect sense to me. I am generally in the office from 8am until 5pm, but I spend 30-60 minutes at lunch and a fair amount of time walking to and from meetings, chatting with colleagues and students, and making myself coffee — not to mention answering emails (which is not counted in these tasks)! Additionally, for the months of July and August I was in the field, and a lot of time was spent driving from site to site and setting up camp — tasks which were not counted here. In sum, I spent 1460.55 hours actively working in 2018.
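For anyone curious about the mechanics, the summaries above boil down to a few lines of code. This is a minimal sketch assuming a hypothetical CSV time log with "date" and "hours" columns; my actual spreadsheet and scripts differ.

```python
import csv
import statistics
from datetime import date

# Summarise a (hypothetical) daily time log: one row per day, columns "date", "hours".
days = []
with open("hours_2018.csv") as f:
    for row in csv.DictReader(f):
        days.append((date.fromisoformat(row["date"]), float(row["hours"])))

worked = [(d, h) for d, h in days if h > 0]                 # days with any work
weekday_hours = [h for d, h in worked if d.weekday() < 5]   # Mon-Fri only

print("total hours:", sum(h for _, h in days))
print("median hours (days worked):", statistics.median(h for _, h in worked))
print("variance (days worked):", statistics.variance(h for _, h in worked))
print("median hours (weekdays worked):", statistics.median(weekday_hours))
```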

Because I switched from being a postdoc to being faculty in the middle of 2018, I can analyze how my time is spent differently as a Lecturer (Assistant Professor) as compared to a postdoc.

[Figure: hours spent per task per week across 2018]

The major switch happened after week 35, which is the week I moved to New Zealand. What this graph shows is that I’ve drastically increased the amount of time I spend in meetings, teaching, and performing administrative tasks as a Lecturer, at the cost of time spent on data collection and analysis. This is unsurprising (just ask any faculty member you know) but in 2019 one of my major goals is to prioritize my research time by thinking really carefully about which meetings and training sessions are necessary to attend. I’m also going to start tracking how much time I spend on email, as I think that will be enlightening.

Additionally, this year I diligently tracked the papers I read, with the goal of reading one per day (#365papers). At the beginning of the year, I tried to post a summary of each paper on Twitter with the hashtag #365papers, and I tried to find the author handles whenever possible. However, by paper 144, which I read on 15 June, I was exhausted by this process. In many cases, I would read a paper while disconnected from the internet (in the field, or simply away from the computer to minimize distractions) and then have to follow up the next time I went online. After that, I stopped trying to post the papers to Twitter, and I found myself much less hesitant to pick up another paper.

I read a total of 263 papers in 2018, which is just about one per working day. I’m really happy with that outcome. I read on average 0.719 papers per day, but somewhat incredibly I did not read a paper on 227 days in 2018. On 73 days I read 1 paper and on 36 days I read 2 papers. My reading was fairly constant over the year, with a few major dips that correspond to vacations and times when I did a lot of fieldwork:

[Figure: papers read per week across 2018]

I am quite satisfied with my #365papers achievement for 2018, and I will continue with it in 2019. Not only does it help me have a reading goal (even if I don’t achieve it most days), but my spreadsheet also helps me keep track of which papers I’ve read and the thoughts I had while reading those papers.

2018 was a year of change, and I’m hopeful that 2019 will be a year to hone and improve my productivity, as well as maintain my work-life balance — I’m excited to continue exploring New Zealand with my new husband!^

 

(on a non-academic note, I also surpassed my goal of reading 50 books and completed 61 books in 2018 — thanks, Goodreads, for helping me keep track!)

^I want to highlight a great piece by my friend Katie Wedemeyer-Strombel about how grad school (and academic culture more generally) can be tough on marriages.

How can modern genetic tools be used in conservation assessment and monitoring?

Species of conservation concern usually face declining population sizes (often due to negative interactions with humans), fragmentation of habitat that isolates existing populations from each other, loss of habitat, and interactions with new species or diseases. For species to persist despite these threats, they need to have and maintain genetic variation, especially if some of that variation is adaptive (for instance, variation that helps individuals survive at higher temperatures). Conservation scientists want to be able to measure how much variation threatened populations have and to monitor changes in variation over time. Monitoring changes over time becomes especially important if scientists intervene in some way to increase diversity in a population.

Last year*, the National Institute for Mathematical and Biological Synthesis (NIMBioS), where I’m a postdoctoral fellow, hosted a workshop called Next Generation Genetic Monitoring, which brought together more than 30 researchers** from around the world to discuss topics related to using recent*** sequencing technology to improve monitoring of species of conservation concern. At the end of 2.5 days, the group decided to publish our discussions in a special issue of the journal Evolutionary Applications in a series of at least 12 papers, six of which came directly from the workshop.


The participants of the Next Generation Genetic Monitoring workshop at NIMBioS. Reproduced from http://www.nimbios.org/workshops/WS_nextgen

We split into several sub-groups to focus on different sub-topics, and I was in a group discussing population-level variation. My group decided to write a guide for conservation biologists, who may not be familiar with the sequencing methods, to help them design and implement an effective assessment and monitoring program for genetic variation in populations. In our paper, we highlight the key decisions researchers need to make while designing studies and provide guidelines for interpreting results to help inform conservation actions. I am also part of another paper that discusses assumptions and misinterpretations of some commonly used metrics of genetic diversity. I encourage you to check out the entire Special Issue for some really great looks into different scales of genetic monitoring!

I learned so much from my colleagues during the workshop and while working on these papers. It was exciting to take what I know about population genetics, selection, and sequencing methodologies and apply that knowledge to conservation issues. That’s one of the great things about these types of collaborations – you can gain new insight into your own topics by applying your knowledge to new questions and seeing them from another perspective. This was an excellent experience, and I hope to participate in more workshops like this in the future!

*November 7-9, 2016

**I was one of those researchers!

***If a little over 10 years old is recent

Flanagan SP, Forester BR, Latch EK, Aitken SN, Hoban S. 2017. Guidelines for planning genomic assessment and monitoring of locally adaptive variation to inform species conservation. Evolutionary Applications 00:1–18. https://doi.org/10.1111/eva.12569

RAD-seq in pipefish: a cautionary tale

At one point during my PhD my advisor joked that my dissertation could at least be titled, “RAD-seq in pipefish: a cautionary tale”. Luckily, that didn’t end up being the case*, but my recently-published paper Substantial differences in bias between single-digest and double-digest RAD-seq: a case study1 comes pretty close.

This paper summarizes some major differences in the genomic data derived from two different methods of sampling the variation that exists in the genome. Those two methods are both types of restriction site-associated DNA sequencing (RAD-seq), which primarily differ in the way they cut up the genome (one is called single-digest and the other double-digest, based on the number of restriction enzymes used to chop up the genome). People have identified various sources of bias that result from the different ways of fragmenting the genome and have used those to debate the benefits of single-digest versus double-digest2,3,4. My paper shows how out-of-whack the results of a typical analysis can become when data derived from the two different methods are analyzed together.

The origin story of this dataset is why I was reassured** that I could at least publish something about a “cautionary tale”. As a new graduate student, back in 2011, I wanted to find a link between animal behaviors and the genome. One way to do this was to compare the frequency of different genetic variants in successfully mating and non-successfully mating females in a natural population of pipefish (a species in which sexual selection acts strongly on females). I described this approach in more detail in a previous post. So I collected fish from a population near Corpus Christi, TX, and set out to do the original RAD-seq method, the single-digest approach5. After about a year of troubleshooting every step of the method, from DNA extraction to the final amplification step, I finally had a library with DNA from 60 barcoded individuals ready to sequence (a library is one test tube that contains the pooled DNA from a bunch of different individuals, and is what eventually gets sequenced). I sequenced it and the data that came back seemed to be pretty decent quality. I breathed a sigh of relief – it worked! – and went to prep the next library.

This is where I ran into problems. The single-digest step required me to use a piece of equipment (a sonicator) in another lab, and when I prepped the next library, the sonication step returned different results than what it had given when I prepared the first library! Uh oh. Because I wasn’t actually the one running the sonicator, I struggled to troubleshoot why I was getting different results. So I decided to switch to the double-digest protocol6, where I would have total control over every step, using similar enzymes to recover at least some of the same genomic regions. Unfortunately, I then spent another year troubleshooting that method.*** Finally I got the double-digest method to work (yay!) and eventually I processed my samples and sent them off for sequencing (a total of 4 double-digest libraries).

Fast forward to 2015: I finally had my DNA sequencing data, and because of the overlap between the single-digest and double-digest markers, I analyzed the two sets of data together. When I set about comparing individuals within the population for a selection components analysis, I got an incredibly puzzling result:

Fig. 1

My original comparison of males and females from a single population, using the merged single-digest and double-digest RAD-seq datasets. The colored points were deemed “outliers” based on their extreme values. Notice how there are basically two bands of points in the male-female comparison. These differences went away when only the double-digest dataset was analyzed.

See how the points form two separate bands? That’s because the single-digest and double-digest had so much bias that they were producing datasets with incredibly different allele frequencies!**** To continue with my selection components analysis, I focused on the double-digest dataset7 because I needed to finish my dissertation. Focusing only on the double-digest dataset, those two bands disappeared:

Figure 2.

Selection components analysis using only the double-digest RAD-seq dataset. Published in Flanagan, S. P. and Jones, A. G. (2017), Genome-wide selection components analysis in a fish with male pregnancy. Evolution, 71: 1096–1105. doi:10.1111/evo.13173

However, when I started my postdoc, I returned to the datasets and tried to figure out the major sources of the differences between the two datasets.

By analyzing various aspects of the datasets, re-analyzing them in a variety of different ways, and by modeling the different sources of bias with an in silico digestion of the genome (basically, taking the genome sequence and using the computer to mock up what the results should look like), I was able to identify a few major sources of bias: polymorphic restriction sites (where the enzymes cut the genome can be variable, too, leading to skewed results), PCR duplicates (extra copies of particular sequences due to random chance in one of the molecular biology steps), what the ‘actual’ frequency of the variant is, and the fact that I had skewed sample sizes (60 individuals sampled with single-digest and 384 with double-digest). To ameliorate the problems, a few steps can be taken: (1) analyze the datasets separately and then find overlapping loci, rather than doing the entire analysis together; (2) focus on loci with similar coverage levels in different datasets; (3) be aware of the different sources of bias and check to see if they’re impacting your dataset.
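To give a sense of what the in silico digestion involves, here is a minimal sketch of a toy double digest in Python. The recognition motifs shown (SbfI and EcoRI) are real restriction sites, but the sequence, the size-selection window, and everything else are illustrative, much simpler than the actual analysis, and not necessarily the enzymes I used.

```python
import re

# Toy in-silico double digest: find cut sites for two enzymes and keep only
# fragments flanked by one site of each type within a size-selection window --
# roughly how double-digest RAD loci are recovered. Illustrative only.

ENZYMES = {"SbfI": "CCTGCAGG", "EcoRI": "GAATTC"}

def cut_sites(seq, enzymes):
    """Return (position, enzyme) pairs, sorted along the sequence."""
    sites = []
    for name, motif in enzymes.items():
        sites += [(m.start(), name) for m in re.finditer(motif, seq)]
    return sorted(sites)

def ddrad_fragments(seq, enzymes, min_len=200, max_len=500):
    """Fragments between adjacent cut sites of *different* enzymes that fall
    inside the size-selection window."""
    sites = cut_sites(seq, enzymes)
    return [
        seq[p1:p2]
        for (p1, e1), (p2, e2) in zip(sites, sites[1:])
        if e1 != e2 and min_len <= p2 - p1 <= max_len
    ]

genome = "ACGT" * 50 + "CCTGCAGG" + "ACGT" * 75 + "GAATTC" + "ACGT" * 50
print(len(ddrad_fragments(genome, ENZYMES)))  # 1 fragment (~308 bp)
```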

So, from unexpected (and very frustrating) bumps-in-the-road, I was able to compare two different commonly-used methods. Of course, this was not an ideal dataset for a comparison (better would have been to have the same individuals sequenced using both methods), but I was still able to provide some guidelines and insight into the issues facing researchers trying to make sense of multiple sources of data.

*For those who care, my dissertation title was “Elucidating the genomic signatures of selection using theoretical and empirical approaches”

**I wasn’t very reassured.

***One of the key breakthroughs was buying a Qubit, which is a much more accurate way of quantifying DNA than a Nanodrop. Another breakthrough was starting with many more pooled samples, even for troubleshooting – more DNA in meant more DNA out, which helped tremendously. For those who care.

****Also, I wasn’t stringent enough about pruning out low-quality points, and I analyzed the datasets together at every step of the analysis. In the published paper, those bands don’t show up, but the differences in allele frequencies between the two datasets are really extreme.

References

1Flanagan SP, Jones AG. 2017. Substantial differences in bias between single-digest and double-digest RAD-seq libraries: a case study. Molecular Ecology Resources 00:1–17. https://doi.org/10.1111/1755-0998.12734

2Andrews, Kimberly R., et al. 2016. Harnessing the power of RADseq for ecological and evolutionary genomics. Nature Reviews Genetics 17: 81-92. https://www.nature.com/articles/nrg.2015.28

3Andrews, Kimberly R., and Gordon Luikart. 2014. Recent novel approaches for population genomics data analysis. Molecular Ecology. 23: 1661-1667. http://onlinelibrary.wiley.com/doi/10.1111/mec.12686/full

4Puritz, Jonathan B., et al. 2014. Demystifying the RAD fad. Molecular Ecology 23: 5937-5942. http://onlinelibrary.wiley.com/doi/10.1111/mec.12965/full

5Baird, Nathan A., et al. 2008. Rapid SNP discovery and genetic mapping using sequenced RAD markers. PLoS One. 3: e3376. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0003376

6Peterson, B. K., Weber, J. N., Kay, E. H., Fisher, H. S., & Hoekstra, H. E. 2012. Double digest RADseq: an inexpensive method for de novo SNP discovery and genotyping in model and non-model species. PLoS One. 7: e37135. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0037135

7Flanagan SP, Jones AG. 2017. Genome-wide selection components analysis in a fish with male pregnancy. Evolution 71: 1096-1105. http://onlinelibrary.wiley.com/doi/10.1111/evo.13173/full

Finding limitations with common analysis methods: my new paper

A common goal in evolutionary biology is to understand how selection acts on traits and how genetic variants associated with those traits are affected by selection. The effect of selection on the genome is particularly interesting because there are situations where we know that populations are likely under different selection pressures (for example, one population of fish lives in freshwater and the other lives in saltwater), but the exact traits that selection is acting on may not be known or measurable. In the freshwater-saltwater fish example, the relevant trait experiencing selection pressure may be related to the ability of the gills to extract oxygen from the water – but measuring that might be tricky. So, researchers turn to the genome to attempt to understand how selection is acting on populations.

A basic distinction can be made between directional and balancing selection – is selection favoring one particular trait within the population (directional selection) or is it favoring a mix of traits (balancing selection)? To return to the freshwater-saltwater fish example, you might think that directional selection is most likely to be involved, because the freshwater and saltwater environments are incredibly different. But what if the freshwater environment is really a brackish environment that experiences fluctuations in salinity? Then perhaps the population will maintain variation among individuals in their ability to extract oxygen from the water because of variation in the micro-climate or temporally fluctuating conditions.

At the genetic level, the difference between directional and balancing selection can be thought of in this way: under directional selection, the populations will likely diverge, so the loci experiencing the effects of selection will have different allele frequencies (high FST between populations). However, directional selection will also erode genetic diversity (each population will tend towards only having one allele). With balancing selection, genetic diversity will be maintained (there will be many alleles in the populations) so the populations won’t diverge very much (low FST between populations).
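To make that contrast concrete, here is a minimal sketch (in Python, with made-up allele frequencies) of how expected heterozygosity and a simple Wright/Nei-style FST estimator behave in the two scenarios. Real analyses use sample-size-corrected estimators; this is just the intuition.

```python
# FST and expected heterozygosity for a biallelic locus (toy example).

def expected_het(p):
    """Expected heterozygosity for a biallelic locus with allele frequency p."""
    return 2 * p * (1 - p)

def fst_biallelic(pop_freqs):
    """FST = (HT - HS) / HT, where HS is the mean within-population expected
    heterozygosity and HT is the heterozygosity at the pooled mean frequency."""
    p_bar = sum(pop_freqs) / len(pop_freqs)
    h_s = sum(expected_het(p) for p in pop_freqs) / len(pop_freqs)
    h_t = expected_het(p_bar)
    return 0.0 if h_t == 0 else (h_t - h_s) / h_t

# Directional selection: populations diverge, little diversity within each.
print(fst_biallelic([0.95, 0.05]))  # ~0.81 -> high FST, low within-population He
# Balancing selection: diversity maintained, populations stay similar.
print(fst_biallelic([0.50, 0.55]))  # ~0.0025 -> low FST, high within-population He
```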

A common approach to detecting these differences was proposed by Beaumont and Nichols in 1996; it essentially identifies loci that have extreme FST values relative to their expected heterozygosity (a way of measuring their genetic diversity) by comparing the actual data to a simulated dataset with similar sampling parameters. The method then classifies loci as being under directional versus balancing selection by comparing their FST values to the range expected given each locus's genetic diversity. The simulations used to identify which loci are more extreme than expected (and therefore likely to be experiencing selection) are based on the infinite island model, a model of migration that assumes an infinite number of islands from which migrants arise. Although this is an abstraction from reality, Beaumont and Nichols showed that as long as a large number of independent populations are sampled (>10), the abstraction doesn’t skew the results very much. The Beaumont and Nichols (1996) approach has been widely used, especially since it has been developed into a user-friendly program called LOSITAN (Antao et al. 2008).

However, when I was conducting my population genomics study, I ran my data in LOSITAN and found some surprising results. I had sampled 12 populations, so I thought I should have enough samples, but I ended up with this graph:


My pipefish genomic data analyzed by LOSITAN. The light grey area in the middle background is the region that is supposedly full of neutral loci, and the darker grey areas represent areas under balancing selection (bottom – darkest grey) and under directional selection (top – medium grey).

This graph was surprising because it identified hundreds of loci as being under selection, and it looked disturbingly skewed. For comparison, the figure below is from a study of lamprey populations by Hess and colleagues (2013), and shows what an expected distribution should look like:


Genetic data from lamprey. Figure from Hess et al. (2013), published in Molecular Ecology – not my own! Copyright held by Hess et al. (2013)

My PhD advisor (Adam Jones) and I decided to investigate whether this skewed pattern was a symptom of a larger problem in our dataset or whether it was a common pattern in the literature. We found that the majority of studies reporting figures from LOSITAN analyses have unexpected patterns. Using simulations, we found that these patterns are caused by the relationship between FST and expected heterozygosity (FST is calculated using the expected heterozygosity), and that skewed patterns like the one I found occur primarily when few independent populations are sampled, especially when migration rates between them are low. The skewed patterns are not a problem, per se, as they do result from a mathematical constraint between FST and heterozygosity. However, the confidence intervals used to identify putatively selected loci do not align with the actual patterns, leading to an excess of outlier loci – and therefore those outliers are not as reliable as candidate genes of interest. The results of these analyses have just been published in the Journal of Heredity.

But wait, you might be thinking, didn’t you sample 12 populations? Good memory! Yes, I did. However, those populations grouped into larger clusters due to isolation by distance, suggesting that they may not be truly independent. Therefore, the FST-heterozygosity distribution of my data more closely reflects the distribution of a sample from only 3 or 4 populations.


Genetic groupings: the populations sort into 3-4 groups (Flanagan et al. 2016)

So what do my recent results mean for researchers? First, be aware of the assumptions underlying the analysis methods you’re using! I was incredibly surprised by the number of studies that reported an odd or skewed pattern but also didn’t meet the specified requirements (>10 populations). Second, if your study doesn’t fit the assumptions of the models you’re using, it may be best not to use that model! I was also amazed that no other researchers had mentioned the skewed FST-heterozygosity relationship in their papers! Of the 112 papers presenting LOSITAN figures, 87 of them likely have an excess of outlier loci. This will affect inferences regarding the signature of selection as well as the future use of those loci as potential candidate regions for targeted studies. If people really want to use the FST-heterozygosity comparison, especially if their dataset is only a little skewed, I have developed an R package called fsthet that will allow you to identify loci using quantiles drawn from the distribution of your data (rather than from simulations with model assumptions). This has its own drawbacks but might be useful for some people. Finally, using multiple approaches may help identify when an analysis isn’t right for your dataset – one of the reasons the LOSITAN results stood out to me was that LOSITAN identified so many more ‘significant’ loci than the other analyses I did. To summarize: think critically about your data, your analyses, and your results.
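The empirical-quantile idea is simple enough to sketch. The snippet below is not fsthet’s actual interface (fsthet is an R package and does considerably more); it is just a toy Python illustration, with hypothetical per-locus arrays, of binning loci by heterozygosity and flagging loci whose FST falls outside the empirical quantiles of their bin.

```python
import numpy as np

def quantile_outliers(het, fst, n_bins=20, lower=0.025, upper=0.975):
    """Flag loci whose FST lies outside the empirical quantiles of their
    heterozygosity bin. `het` and `fst` are per-locus arrays (hypothetical)."""
    het, fst = np.asarray(het), np.asarray(fst)
    edges = np.quantile(het, np.linspace(0, 1, n_bins + 1))      # equal-count bins
    which_bin = np.clip(np.digitize(het, edges) - 1, 0, n_bins - 1)
    outlier = np.zeros(len(fst), dtype=bool)
    for b in range(n_bins):
        in_bin = which_bin == b
        if in_bin.sum() < 20:          # too few loci for stable quantiles
            continue
        lo, hi = np.quantile(fst[in_bin], [lower, upper])
        outlier[in_bin] = (fst[in_bin] < lo) | (fst[in_bin] > hi)
    return outlier
```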

References (with links)

Antao T, Lopes A, Lopes RJ, Beja-Pereira A, and Luikart G. 2008. LOSITAN: a workbench to detect molecular adaptation based on a FST-outlier method. BMC Bioinformatics. 9:323.

Beaumont MA and Nichols RA. 1996. Evaluating loci for use in the genetic analysis of population structure. Proceedings of the Royal Society of London B. 263:1619–1626.

Flanagan SP, Rose E, and Jones AG. 2016. Population genomics reveals multiple drivers of population differentiation in a sex-role-reversed pipefish. Molecular Ecology. 25(20): 5043-5072. doi: 10.1111/mec.13794

Flanagan SP, and Jones AG. 2017. Constraints on the FST-heterozygosity outlier approach. Journal of Heredity. esx048. doi: 10.1093/jhered/esx048

Hess JE, Campbell NR, Close DA, Docker, MF, and Narum SR. 2013. Population genomics of Pacific lamprey: adaptive variation in a highly dispersive species. Molecular Ecology. 22:2898-2916.

Why I Marched

On Saturday, April 22, 2017, an unprecedented number of scientists and science enthusiasts turned out around the country to rally and march for science.

I showed up to march (and to help administer a social/political science survey – I helped do science at the science march!) for many reasons. Most importantly, the current political climate has demonstrated how the country has in many ways devalued science. This devaluation of science is reflected in the proposed budget cuts, but it has been evident for many years in the numerous ways in which scientific consensus has been met with unwarranted skepticism.

This current anti-science (“post-truth”) social climate is not separate from the world scientists live in — we all live on the same planet. Society has gotten to where it is because scientists haven’t been vocal, have (generally) avoided politics, and have not taken responsibility for communicating our findings to the general public in a way they can understand. We scientists are partly to blame for the current political climate, and I believe that we need to make up for lost time and start defending what it is we do!

Another important message I hope the March for Science sent is the value of science to society. The programming at the March for Science in Washington, DC did a good job of highlighting the importance of basic science: it has led to many discoveries of economic and public good, all of which would have been impossible to predict. Supporting these basic science research programs is an important part of what has made the US a leader in science. Even though supporting basic research may seem in some ways like a waste of money (because it has no obvious direct benefits), the real benefit of basic research is that it can yield unforeseen and inconceivably transformative results. SCIENCE MATTERS!


A snapshot of the diversity of signs at the march

The march was inspiring because so many people turned up to show their support for science and science-based policy. Despite the rain, despite concerns about potential backlash for becoming politically engaged, people showed up! And everyone was optimistic and hopeful and excited to be there. I know the job isn’t done, and there is still much to be done to promote science in our society. But the March for Science was an excellent start.


Before the march, people completely covered the National Mall near the Washington Monument

Pipefish pairing

In my recent paper published in Behavioral Ecology and Sociobiology, I described the results of some of the work I did while in Sweden (which I’ve written about previously 1,2,3). I discovered that individual quality (both male quality and female quality) and timing of reproduction impact reproductive success in the broad-nosed pipefish, Syngnathus typhle. This is an important finding because it highlights the complex dynamics of mating systems. The results are covered in a press release, and I wrote about my experiences for Biosphere Magazine, an online nature magazine. My story in Biosphere just came out (Issue 23) and you can read it here.