Ted's Blog

    Interesting useR 2017 Talks


    Since I didn’t get to go to useR 2017, I’m compiling a list of the talks I found interesting. This is an ongoing list.

    • https://user2017.sched.com/event/AxqM/automatically-archiving-reproducible-studies-with-docker
    • https://user2017.sched.com/event/Axq4/clouds-containers-and-r-towards-a-global-hub-for-reproducible-and-collaborative-data-science
    • https://user2017.sched.com/event/Axq9/scraping-data-with-rvest-and-purrr
    • https://user2017.sched.com/event/Axq1/using-the-alphabetr-package-to-determine-paired-t-cell-receptor-sequences
    • https://user2017.sched.com/event/AxqG/show-me-the-errors-you-didnt-look-for
    • https://user2017.sched.com/event/AxqR/community-based-learning-and-knowledge-sharing
    • https://user2017.sched.com/event/AxqT/r-based-computing-with-big-data-on-disk
    • https://user2017.sched.com/event/AxqA/codebookr-codebooks-in-r

    How to Not Be Afraid of Your Data


    I’m going to be giving a talk for the PDX RLang Meetup on July 11 called “How to Not Be Afraid of Your Data: Teaching EDA using Shiny”. Abstract below.

    Many graduate students in the basic sciences are afraid of data exploration and cleaning, which can greatly impact their downstream analysis results. By using a synthetic dataset, some simple dplyr commands, and a shiny dashboard, we teach graduate students how to explore their data and how to handle issues that can arise (such as missing values or differences in units). For this talk, we’ll run through a simple EDA example (combining two weight-loss datasets) with a general data explorer in shiny that can be easily customized to teach specific EDA concepts.
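    To give a flavor of the example, here is a minimal dplyr sketch of combining two weight-loss datasets recorded in different units and tallying the missing values each one brings along. The data frames, column names, and conversion factor below are invented for illustration and are not the actual workshop dataset.

        library(dplyr)

        # two hypothetical weight-loss datasets, one recorded in kg, one in lbs
        clinic_a <- data.frame(id = 1:3, weight = c(80.5, NA, 95.2), units = "kg")
        clinic_b <- data.frame(id = 4:6, weight = c(180, 150, NA),   units = "lbs")

        combined <- bind_rows(clinic_a, clinic_b) %>%
          # harmonize units: convert pounds to kilograms
          mutate(weight_kg = if_else(units == "lbs", weight * 0.4536, weight))

        # how many missing values did each source contribute?
        combined %>%
          group_by(units) %>%
          summarize(n = n(), n_missing = sum(is.na(weight_kg)))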

    Some Lessons We Learned Running Cascadia-R


    Well, the first Cascadia R Conference has come and gone. I have to say that it was super fun, and well attended (over 190 people!). I had a blast meeting and chatting with everyone. Hopefully, we showed newbies that R is learnable and others that there are lots more things to learn about R.

    The following is my attempt to document what we learned from organizing Cascadia-R. It’s not complete; I may add and subtract from it as I think of more things to say about the planning process.

    Decide the tone. Our goals with Cascadia-R were modest. We wanted to get a diverse group of R users together in a safe and encouraging environment. We wanted our workshops to be accessible even to beginners, and to encourage them in the use of R.

    Part of meeting these goals is setting the tone. We really wanted to encourage all levels of R users to attend. All of our flyers, emails, and promotional tweets encouraged beginners to come. We got help with making a Code of Conduct for the conference. Part of creating a supportive environment is encouraging diversity in both speakers and attendees. We did our best to reach out to existing groups that encourage diversity, such as Women in Science Portland and R-Ladies Global.

    We also offered diversity scholarships to encourage people from diverse backgrounds to attend, and made diversity part of our criteria for selecting talks.

    Start planning early. As junior faculty at OHSU, I’m lucky enough to be able to book facilities here, including the large learning studios where we held the conference. Having the venue secured early on made the remaining logistics of the conference much easier.

    Much like wedding planning, there are plenty of conference planning services out there that would be happy to take over aspects of your conference, for a fee. You can spend however much you want on these things. However, I believe that such an approach is not financially responsible. I also feel that taking a more DIY/bespoke approach can make a conference more engaging (see csvconf). We tried to do most things ourselves (including design, promotion, talk submission, workshops, and registration/logistics).

    Iterate your budget. Think of a conference as a project with lots of linked dependencies. Your first plan is probably not going to be your final plan. Start a plan, iterate, realize that things are going to shift, and have a backup plan. What if registration is not going to pay for the venue rental fee? Talking to simpatico sponsors can take away much of the financial stress. In our case, the RStudio foundation and rOpenSci stepped up to contribute some money as a cushion.

    Remember, there are fixed costs (such as venue rental, and recording/streaming costs) and variable costs that scale with the number of attendees (food, badges, alcohol). Separate these out. When possible, pay off the fixed costs first, so that it’s easier to manage the variable costs.
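    As a toy illustration of why separating them helps, here is a back-of-the-envelope calculation in R. Every dollar amount below is made up for the example and is not our actual budget.

        # hypothetical conference budget numbers
        fixed_costs   <- 5000   # venue rental, recording/streaming
        cost_per_head <- 40     # food, badges, etc. per attendee
        attendees     <- 150
        sponsorship   <- 2000   # covers part of the fixed costs up front

        total_cost     <- fixed_costs + cost_per_head * attendees
        break_even_fee <- (total_cost - sponsorship) / attendees
        break_even_fee  # registration fee per attendee needed to break even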

    Again, who is your desired audience, and can they afford your conference? We decided to make our conference as affordable as possible to encourage as many different kinds of people to attend. We initially wanted to make attendance free for students. The problem with free is that, literally, it’s free: it carries no value in the mind of a person who accepts free admission. So we decided to charge students a small fee, just to emphasize that the conference has value.

    Talk with others who have done it. We were pretty clueless about much of the logistics side at OHSU. I managed to get through by talking with a number of people (including Robin Champieux and Shannon McWeeney) who have run conferences here at OHSU. Thank you so much for your invaluable advice.

    Encourage each other and delegate. No one of us could have done all of the conference planning alone. Each of us took on various aspects of conference organization and brought in the others as support as needed. Some of us selected talks, some of us did design, and we all pitched in to get registration working as efficiently and quickly as possible.

    Our Slack channel on pdxdata.slack.com is full of our decisions. Slack was so useful as a planning mechanism that we met online via Google Hangouts only a few times, and had only two in-person planning sessions.

    Be Willing to Make Mistakes. Lord knows I made a bunch of mistakes when I made announcements and hosted the lightning sessions. However, I owned up to these mistakes, shrugged, and moved on. Improvising in the moment can be just as important as planning.

    Think about the future. What should the next Cascadia-R look like? I know it just happened, but we’re trying to envision what it would look like. Based on the feedback we’ve gotten so far, people really want more workshops!

    In a following post, I’m also going to talk about lessons I learned when Chester and I put on our tidyverse workshop.

    On Breadth and Depth in Your Academic Career


    I was talking with a student who complained that, at conferences, when they tried to bring other topics of interest (such as cooking) into discussions with colleagues, they were looked at as “not a serious scientist”. There’s an expectation that a scientist must be all depth, only talking and thinking about their sub-field.

    As a cross-disciplinarian, I have to say that is hogwash. The genesis of so many creative ideas in science has happened because of cross-pollination across disciplines. For example, microwave technology might never have been invented without the intersection of disciplines. We know that the Arts Foster Scientific Success: a large number of Nobel laureates and National Academy members do art in some form or other. Bernstein et al. theorize that

    “there exist functional connections between scientific talent and arts, crafts, and communications talents so that inheriting or developing one fosters the other.”

    Having breadth and depth enables you to make connections that no one else has. It is the hallmark of a curious and creative person. These kinds of people are desperately needed to push science in new directions.

    I have a parallel career in performance and improvisational music. Music, for me, is endlessly inspiring and has forced me out of my introverted shell. One of the reasons I took up cello is that I can play many roles: accompanist, rhythm, soloist. This flexibility in playing music has translated into flexibility in collaboration. Being able to adjust to new circumstances and improvise new ideas to explore is a critical component of being a responsible scientist. My background in improvisation has helped me pivot ideas, and I have become less attached to dogmatic ones. Many of my good ideas come from idle wondering about data that has captured my imagination. This is part of the reason why I teach students how to explore their data.

    So, the next time another scientist looks down at you for being a polymath, pity them. Their world and their ideas are not as rich as yours.


    Fostering a Peer Mentoring Culture


    I realize that it has been an embarrassingly long time since I updated this blog. I had all sorts of grandiose plans for it, and I think my problem was that I was thinking too broadly, too pie-in-the-sky. I’m going to try to focus on short and informative blog posts.

    One of the things that I have been thinking about regarding graduate school is the idea of building a Peer Mentoring culture in our department. I believe that students should help and support each other, and we need to provide a forum to do that: not just assigning mentors, but providing a time and a place for mentoring to happen.

    We try to foster a mentoring culture within our student group, BioData-Club. Students are free to talk about issues that concern them, especially about datasets, and are encouraged to share their experiences of software that they’ve used. I believe that we try to give students a psychologically safe place to talk about their issues with data. We try to make people feel like they’re not alone, and coach beginners so they can get over the hump.

    We’re now embarking on an experiment to reach even more people at OHSU, because we know there are lots of students who struggle with practical skills in data analysis. Our group is growing, and that’s exciting.

    I’m going to try and get everyone in our group to write a paper about Peer Mentoring Culture and how to encourage it in other schools.

    Surrogate Oncogene Paper is Published


    My dissertation paper, A Network-Based Model of Oncogenic Collaboration for Prediction of Drug Sensitivity is now published! Here’s a lay summary:

    One outstanding issue in analyzing genomics in the context of personalized medicine is the incorporation of rare or infrequent genetic alterations (copy number alterations and somatic mutations) that are observed in individual patients. We hypothesize that these mutations may actually ‘collaborate’ with known oncogenes in the genesis of tumors through their interactions. In order to show this effect, we assess whether these interacting rare mutations cluster around known oncogenes and assess these mutational clusters, which we term surrogate oncogenes. We assess their statistical significance using a simple model of mutation. We show that surrogate oncogenes are predictive of drug sensitivity in breast cancer cell lines. Additionally, they are prevalent in three different cancer cohorts (Breast, Glioblastoma, and Bladder Cancer) from The Cancer Genome Atlas. Within the Breast Cancer and Bladder Cancer populations, surrogate oncogenes are predictive of overall patient survival. The chief strength of the surrogate oncogene approach is that it can be run at a single-patient level in comparison to other methods of assessing mutational significance.
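    As a purely conceptual illustration of the clustering idea (this is not the method from the paper or the Bioconductor package, and every number below is invented), you can ask whether a patient’s rare mutations land in a known oncogene’s interaction neighborhood more often than a simple uniform model of mutation would predict:

        # toy test: do rare mutations cluster around a known oncogene?
        neighborhood_size <- 25     # genes interacting with the oncogene
        genome_size       <- 20000  # genes considered in total
        patient_mutations <- 60     # rare mutations observed in one patient
        observed_in_hood  <- 7      # of those, how many fall in the neighborhood

        # under a uniform model, each mutation hits the neighborhood
        # with probability neighborhood_size / genome_size
        binom.test(observed_in_hood, patient_mutations,
                   p = neighborhood_size / genome_size,
                   alternative = "greater")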

    If you’re interested in learning more, you can check out the Surrogate Oncogene Explorer in order to understand the nature of surrogate oncogenes, and my R/Bioconductor Package on GitHub if you’d like to try out the analysis.

    There’s a follow-up paper that I’m working on that I’m very excited about. More news soon.

    Why Short-Order Bioinformatics Doesn't Work


    Unfortunately, many researchers look at computational biology and bioinformatics as a black-box: you put in data, and you get results out. The bioinformaticians and computational biologists are seen as mostly computer operators who push the button and not as true collaborators. One of my co-workers calls this “short-order” bioinformatics.

    There is great danger in simply pushing a button to get results. One type of analysis, Gene Set Enrichment Analysis (GSEA), is highly dependent on how mutations are incorporated into a gene set. If done carelessly, the results can be spurious. One paper that depended on GSEA, Identification of gene ontologies linked to prefrontal–hippocampal functional coupling in the human brain by Dixson et al., was retracted: a single SNP was assigned to 8 genes and was thus over-counted, making their GSEA result of “synapse organization and biogenesis” spurious.
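    To see how careless SNP-to-gene assignment can manufacture an enrichment signal, here is a toy hypergeometric calculation in base R. The counts are invented for illustration and are not taken from the retracted paper.

        universe <- 20000  # genes in the background
        set_size <- 50     # genes annotated to the gene set
        hits     <- 10     # genes genuinely implicated in the sample
        in_set   <- 3      # of those, how many fall in the gene set

        # correct counting: each gene (and each SNP) counted once
        phyper(in_set - 1, set_size, universe - set_size, hits, lower.tail = FALSE)

        # careless counting: one SNP assigned to 8 overlapping genes in the set
        # inflates both the hit list and the overlap, and the p-value collapses
        phyper(in_set + 7 - 1, set_size, universe - set_size, hits + 7,
               lower.tail = FALSE)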

    There is a lot of impatience from collaborators when results are not immediate. Understandably, much of this work is done to support a grant and there are always looming deadlines. However, there is a lot of work between a request and well-executed computational results. Potential collaborators need to be aware of these steps.

    A well-executed workflow is thus essential for the computational results to be valid. This may include the following steps.

    • Mapping of identifiers for entities on each platform to the appropriate gene construct. In the case of the SNP paper, this means appropriate assignment of SNPs to genes. However, with systems biology approaches that integrate multiple omics types, it can include mapping protein isoforms to mRNA transcripts if one is interested in the impact of alternative splicing. A clear strategy must be decided on and then executed.
    • Data management. Oftentimes, we need to work with the experimentalists who are executing the research in order to understand and identify potential confounders in the data. We do this by collecting and integrating metadata into our analysis, that is, data about how the experiments were executed. We need to identify technical issues such as batch effects, and scheduling time with the experimentalists is our best way of identifying these potential issues.
    • Flagging of potentially spurious samples. This part of the process requires exploratory data analysis of gross measurements from the high-throughput platforms. For example, we may visualize boxplots of expression for each sample to see whether the expression levels are comparable (a minimal sketch follows this list).
    • Selection of the appropriate statistical protocol given the experimental design. This may require a couple of back and forths between the computational biologist and the researcher. A good computational biologist never assumes anything about the data or design.
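    For the sample-flagging step above, here is a minimal sketch of the kind of check I mean, using a simulated expression matrix with one deliberately shifted sample (everything here is invented for illustration):

        set.seed(42)

        # simulated expression matrix: 1,000 genes x 6 samples
        expr <- matrix(rnorm(6000, mean = 8, sd = 1), nrow = 1000,
                       dimnames = list(NULL, paste0("sample_", 1:6)))

        # pretend one sample had a normalization or labeling problem
        expr[, "sample_4"] <- expr[, "sample_4"] + 3

        # per-sample boxplots make the shifted sample obvious at a glance
        boxplot(expr, las = 2, ylab = "expression",
                main = "Per-sample expression distributions")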

    Without a well-mapped strategy of data cleaning, the results from any bioinformatics analysis may be suspect. A good bioinformatics collaborator will ask these questions and will not take no for an answer. Any information that you withhold from your collaborator will affect their analysis.

    In short, treating computational biology as a black-box is done at the researcher’s peril. Instead, a collaboration should be fostered. The best level of collaboration with computational biologists is to include them from the beginning, as part of the experimental design. This is obviously a greater level of commitment and time than simply considering them as a service core. However, the benefits and rewards are much greater at this level of collaboration.

    Interesting interview with the developer of statcheck


    Due to the usual postdoc busy-ness, I haven’t had the energy to update this blog as much as I would like, but I found this interview on Retraction Watch with Michèle B. Nuijten, the developer of the R package statcheck, fascinating. Her package essentially automates the checking of p-values reported in published papers: it converts the papers from PDF to text, extracts the reported test statistics, and recalculates the p-values to see whether they are correct. There was a lot of trial and error in parsing the known formats for p-values, but the package is now available.
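    For anyone curious what this looks like in practice, here is a minimal sketch of how I understand the package is used. The file path and the reported statistic are hypothetical, and checkPDF()/statcheck() are the entry points as I recall them from the package documentation, so check the docs before relying on this.

        library(statcheck)

        # extract and re-check every reported test statistic in an article PDF
        # (hypothetical path)
        res_pdf <- checkPDF("papers/some_article.pdf")

        # or run the checker directly on text containing an APA-style result
        res_txt <- statcheck("The effect was significant, t(28) = 2.20, p = .04.")
        res_txt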

    I see a potentially really interesting master’s thesis in forensic bioinformatics: using the package to assess the reproducibility of results across a field. The student probably wouldn’t make any friends in high places, but it would be a potentially high-impact thesis.

    Somatic Mutations in Skin Paper


    This paper, High burden and pervasive positive selection of somatic mutations in normal human skin, is fascinating. It suggests that the mutational burden in skin cells is much higher than we expected, due to UV exposure. In addition, subclones exist in the skin that carry positively selected mutations in oncogenes.

    It also makes me want to stock up on sunscreen.

    High Impact Factor Journals Have Higher Retraction Rates


    Very interesting New York Times article about the rise of fraud and retractions in high-impact-factor journals. The retraction rates for high-IF journals (such as Science, Cell, and Nature) are much higher than those for lower-IF journals.

    From the article:

    Journals with higher impact factors retract papers more often than those with lower impact factors. It’s not clear why. It could be that these prominent periodicals have more, and more careful, readers, who notice mistakes. But there’s another explanation: Scientists view high-profile journals as the pinnacle of success — and they’ll cut corners, or worse, for a shot at glory.

    I would say that this is sad, but this is a consequence of the currently terrible funding climate and unreasonable expectations of study sections. If study sections dismiss grant writers because of an unreasonable expectation of past productivity, then it shouldn’t be surprising that the drive to make oneself look productive actively encourages fraud to get ahead.
