I'm an epidemiologist turned data scientist, currently working as a Staff Data Scientist at One Medical. My work has included analytic and data science support: program/intervention evaluation, identifying data-driven insights, and building machine learning models. Often, this involves being a "data thought partner" for stakeholders, helping break down nebulous problems into a series of actionable experiments. While my work history has primarily been in healthcare, my interests span virtually all applications of data in consumer technology. In my free time, I enjoy volunteering with marine mammal rescue and rehabilitation.

Below you'll find some examples of my work. If you're interested in connecting, you can find me on LinkedIn or we can connect via email at john@schrom.io.


Predicting Reason for Visit (Peer Reviewed, Python, Machine Learning, Graph Theory)

Using an adaptation of PageRank we can dramatically improve our predictions of why a patient is seeking an appointment.

Patients seek care for hundreds of different reasons; being able to predict why a patient might book an appointment in the near future has valuable implications for both visit forecasting and design of a patient self-booking flow. However, this problem is quite difficult: data is incredibly sparse and naturally limited, and there are hundreds of classes we’re attempting to predict.

Rather than use traditional classification methods like random forests or regression, I created a network based on historical appointment data and used a version of PageRank to identify likely reasons for visit. This approach was successful about one-third of the time, which was substantially better than other attempted approaches.
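
To make the approach concrete, here's a minimal sketch of the idea using networkx. The appointment data, column names, and graph construction below are simplified stand-ins for illustration, not the production model.

```python
# Sketch: build a patient-reason graph from historical appointments and use
# personalized PageRank to rank likely reasons for one patient's next visit.
# The data and column names here are invented for illustration.
import networkx as nx
import pandas as pd

appointments = pd.DataFrame({
    "patient_id": [1, 1, 2, 2, 3],
    "reason": ["annual physical", "back pain", "back pain", "insomnia", "annual physical"],
})

G = nx.Graph()
for _, row in appointments.iterrows():
    p, r = f"patient:{row['patient_id']}", f"reason:{row['reason']}"
    # Edge weights count how often a patient booked that reason.
    w = G[p][r]["weight"] + 1 if G.has_edge(p, r) else 1
    G.add_edge(p, r, weight=w)

# Restart the random walk at the patient we want predictions for.
scores = nx.pagerank(G, personalization={"patient:1": 1.0}, weight="weight")
reasons = {n: s for n, s in scores.items() if n.startswith("reason:")}
print(sorted(reasons, key=reasons.get, reverse=True)[:3])  # top predicted reasons
```

Personalizing the random walk to restart at a single patient means the ranking favors reasons close to that patient's own history while still borrowing signal from similar patients elsewhere in the network.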

More Info: Abstract, Poster

Effective Data Communication (Peer Reviewed, Python, Experimental Design, Regression)

Changing how we framed clinical data led to significant improvements in depression screening rates

We started out just wanting to improve depression screening rates in our clinics, and intuitively understood the importance of clinics using data to drive that improvement. However, as I built out the reports, it was clear there were two different beliefs on our team: one group believed trended rates were the most valuable, while the other believed that raw numbers of patients would be more impactful.

So, in response, I built two versions of the reports, designed a randomized trial to test their effectiveness, and launched and analyzed the results. We found that keeping reporting at the patient level led to roughly a 20% increase in the odds of patients being screened for depression. These results were presented at the 2020 Virtual AMIA Informatics Summit.
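
For flavor, here's a hedged sketch of the kind of analysis behind a figure like that: a logistic regression of screening status on trial arm, reported on the odds-ratio scale. The data below is synthetic and the variable names are my own stand-ins, not the study data.

```python
# Synthetic example only: estimate the odds ratio of being screened for the
# patient-level report arm vs. the trended-rate arm.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
arm = rng.choice(["trend", "patient_level"], size=n)      # randomized report version
p = np.where(arm == "patient_level", 0.55, 0.50)          # hypothetical screening rates
df = pd.DataFrame({"arm": arm, "screened": rng.binomial(1, p)})

model = smf.logit("screened ~ C(arm, Treatment('trend'))", data=df).fit(disp=0)
print(np.exp(model.params))      # odds ratios for the patient-level arm
print(np.exp(model.conf_int()))  # 95% CI on the odds-ratio scale
```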

More Info: Abstract, Slides, Video

Changing Search Results (Peer Reviewed, Python, Program Evaluation, Observational Design, Statistics)

By making a subtle change in how clinicians see medication search results, we were able to change prescribing behavior by up to 1300%.

Our technology team implemented a new feature that allowed for the association of synonyms within the medication search results. We added branded medication names as synonyms to their corresponding generic medication; this way, when a provider searches for a brand (e.g., “Lipitor”) they will first be shown the corresponding generic medication (e.g., “atorvastatin”). Generic medications tend to be substantially less expensive than their branded counterpart, so this can have direct financial impacts for patients, providers, and payors.

I evaluated the impact of this change using a pre-post observational design. The results varied by medication, but every medication studied showed an improvement in the percent prescribed as generic. We subsequently expanded this four-medication trial to all relevant medications.
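
As a rough illustration of the pre-post comparison (not the published analysis), a two-proportion test on the share of prescriptions written as generic before and after the change might look like this; the counts are invented.

```python
# Illustrative pre/post comparison for one medication.
from statsmodels.stats.proportion import proportions_ztest

generic_counts = [120, 480]  # generic prescriptions: [pre, post]
total_counts = [400, 500]    # all prescriptions for the medication: [pre, post]

stat, p_value = proportions_ztest(generic_counts, total_counts)
pre_rate, post_rate = (g / t for g, t in zip(generic_counts, total_counts))
print(f"pre: {pre_rate:.0%}, post: {post_rate:.0%}, p = {p_value:.4f}")
```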

More Info: Abstract, Poster

Predicting Blood Pressure Improvement (Python, Machine Learning, Neural Networks)

Using neural networks adjusted for confounding, we were able to prioritize hypertension patient outreach

One major problem with using electronic health record data for secondary purposes is that many relationships end up being confounded by patient complexity. That is, treatments are given to sicker patients, so you must disentangle the effect of the treatment from the fact that the patients receiving it were sicker to begin with. In this case, we wanted to identify patients whose high blood pressure was not improving in order to target them for additional outreach.

I built a deep neural network to predict a patient's hypertension status six months out. In addition to standard features, I included an additional layer of nodes for treatment propensity. These individually trained features were able to account for the confounded relationship between complexity and treatment, and led to improvements in model performance.
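
A rough sketch of that shape of model in Keras is below. The layer sizes, features, and training details are my assumptions, not the exact architecture; the point is only the pattern of a separately trained propensity network whose output is fed into the outcome model alongside the raw features.

```python
# Sketch with simulated data: a propensity network plus an outcome network.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20)).astype("float32")               # patient covariates
treated = rng.integers(0, 2, size=(1000, 1)).astype("float32")  # received treatment?
outcome = rng.integers(0, 2, size=(1000, 1)).astype("float32")  # BP still high at 6 months?

# Step 1: a small propensity network, trained separately to predict treatment.
propensity_net = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
propensity_net.compile(optimizer="adam", loss="binary_crossentropy")
propensity_net.fit(X, treated, epochs=5, verbose=0)
propensity = propensity_net.predict(X, verbose=0)

# Step 2: the outcome network sees the raw features plus treatment and the
# learned propensity, so the treatment effect isn't silently absorbed by
# patient complexity.
features = np.concatenate([X, treated, propensity], axis=1)
outcome_net = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(features.shape[1],)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
outcome_net.compile(optimizer="adam", loss="binary_crossentropy",
                    metrics=[tf.keras.metrics.AUC()])
outcome_net.fit(features, outcome, epochs=5, verbose=0)
```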

More Info: Abstract

Improving Application Usability (Peer Reviewed, Python, Machine Learning, Association Rule Mining)

This proof-of-concept used data mining techniques to identify commonly completed patterns within the application, which could result in over a million fewer “clicks” for the same actions.

Application usability is a frequent concern, especially in the context of electronic health records and physician burnout. Part of why physicians burn out is the difficulty of doing the same activities over and over again. These activities quickly become predictable: if a patient is coming in for insomnia, there is a finite set of medications a provider is likely to prescribe. Why make them type those into a search box when you could just present them to begin with?

I did exactly that using association rule mining (sometimes called “market basket analysis”), looking at what reason for visit concepts were associated with different activities like lab orders, medications, referrals, or diagnoses. This approach is also plainly interpretable, which is useful in highlighting relevant information to providers to help make the decisions transparent (and thus avoid becoming a medical device under FDA rules). This won the “most geektastic” award at an internal hackathon, and was later presented at an American Medical Informatics Association meeting.
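
A small illustration of the mining step using mlxtend (not the original hackathon code): each "transaction" is one visit's reason plus the orders placed during it, and the resulting rules surface which orders commonly follow which reasons. The example visits are invented.

```python
# Toy association rule mining over invented visit "transactions".
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

visits = [
    ["reason:insomnia", "rx:trazodone", "rx:melatonin"],
    ["reason:insomnia", "rx:trazodone"],
    ["reason:uti", "lab:urinalysis", "rx:nitrofurantoin"],
    ["reason:uti", "lab:urinalysis"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(visits).transform(visits), columns=te.columns_)
itemsets = apriori(onehot, min_support=0.25, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.5)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```

Because each rule is just "reason X implies order Y" with a support and confidence, the output can be shown to providers directly, which is what keeps the suggestions transparent.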

More Info: Abstract, Poster

Evaluating Feature Adoption (Peer Reviewed, Python, Program Evaluation, Observational Design, Regression)

A new optional follow-up patient survey was found to have varying predictors of engagement for both providers and patients.

For many acute conditions (e.g., sprained ankle, cold), patients do not follow up unless their condition gets substantially worse. This leads to many unresolved problems cluttering electronic health records, as well as a lack of data about the efficacy of treatment for those conditions. The technology team implemented a new feature that would send automated follow-up check-ins to patients recently seen for acute problems. Providers could opt out of having these sent, and patients could simply not respond.

To better understand how this feature was being used, I built two penalized logistic regression models: one predicting whether a provider would opt out and another predicting whether a patient would respond. These models found multiple interesting associations, which led to adjustments in the roll-out of this feature.
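
A minimal sketch of one of those models with scikit-learn is below; the features and data are placeholders, and the patient-response model would follow the same pattern. The L1 penalty is what keeps the coefficient list short enough to interpret.

```python
# Sketch of the provider opt-out model with synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_provider = rng.normal(size=(300, 8))    # hypothetical provider-level features
y_optout = rng.integers(0, 2, size=300)   # did the provider opt out of the check-ins?

# The L1 penalty shrinks uninformative coefficients to zero.
optout_model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
)
optout_model.fit(X_provider, y_optout)
print(optout_model.named_steps["logisticregression"].coef_)
```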

More Info: Abstract

Routing Incoming Messages (Peer Reviewed, Python, Machine Learning, NLP, Naive Bayes)

Using natural language processing, we were able to move almost 10% of messages from clinicians’ inboxes to administrative queues for faster processing

Patients send thousands of messages to their doctors each year, and many of them are about non-clinical issues that end up being resolved by administrative staff. This is thought to contribute to provider burnout, and likely leads to increased response times for patients.

Colleagues of mine created a training dataset of messages and built an initial NLP model to identify messages that could be moved out of clinical inboxes. I did the performance evaluation and model tweaking, evaluated additional (non-naive Bayes) models, and provided the interpretations of these models. This problem proved to be a good use case for NLP, and versions of this model are currently in production.
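
For context, a stripped-down version of this kind of routing model: TF-IDF features feeding a multinomial naive Bayes classifier. The example messages are invented, and the production model is more involved than this.

```python
# Toy message router: TF-IDF + multinomial naive Bayes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Can I get a copy of my bill from last month?",
    "What time does the clinic open on Saturdays?",
    "My cough is getting worse and I now have a fever.",
    "Should I increase my lisinopril dose?",
]
labels = ["administrative", "administrative", "clinical", "clinical"]

router = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
router.fit(messages, labels)
print(router.predict(["Is there parking near the office?"]))
```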

More Info: Abstract, Poster

Group Visits for Anxiety (Peer Reviewed, Python, Program Evaluation, Observational Design, Statistics)

The evaluation of a new group visits program for patients with anxiety found a decrease in both utilization and anxiety symptoms.

Anxiety is a highly prevalent condition, and is often managed by medications or expensive one-on-one therapy. My collaborators developed a new mindfulness-based group visit program to help patients with anxiety. After running the program for over a year, they approached me asking for assistance in evaluating the program's effectiveness.

I conducted a pre-/post- statistical analysis of the program, finding significant improvements in both clinic utilization and the severity of anxiety symptoms. The results of this study were presented at the AcademyHealth conference.
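
A minimal sketch of the pre/post comparison with synthetic numbers: paired tests on anxiety scores before and after the program (the real analysis also looked at utilization, and the GAD-7-style scores here are my own placeholder).

```python
# Paired pre/post comparison on simulated anxiety scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
score_pre = rng.integers(8, 20, size=40)                 # hypothetical anxiety scores before
score_post = score_pre - rng.integers(0, 6, size=40)     # and after the program

t_stat, t_p = stats.ttest_rel(score_pre, score_post)     # paired t-test
w_stat, w_p = stats.wilcoxon(score_pre, score_post)      # nonparametric alternative
print(f"paired t-test p={t_p:.4f}; Wilcoxon p={w_p:.4f}")
```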

More Info: Abstract, Poster

Genes and Infections (R, Machine Learning, Support Vector Machine, Clustering)

Secondary gene expression data can be used to accurately identify infectious disease

Gene expression data can contain tens of thousands of data points indicating how much particular genes are being used. When someone becomes infected with a disease, say the common cold, their body kicks off a series of changes to fight that infection. I was curious whether those changes were detectable at the gene level.

Using published data from multiple studies, and using statistical approaches to correct for differences among those studies, I trained multiple support vector machines to predict the infectious agent a particular patient had. I then used recursive feature elimination to identify the specific genes that were most over- or under- expressed when a patient gained that infection.
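
The original analysis was done in R; here's a scikit-learn sketch of the same pattern, a linear SVM with recursive feature elimination, with simulated expression values standing in for the published datasets.

```python
# Sketch: linear SVM plus RFE to pick out the most informative genes.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(0)
expression = rng.normal(size=(120, 500))    # samples x genes (simulated)
pathogen = rng.integers(0, 3, size=120)     # encoded infectious agent

# A linear kernel lets RFE rank genes by the magnitude of their coefficients.
svm = SVC(kernel="linear", C=1.0)
selector = RFE(svm, n_features_to_select=20, step=0.1)
selector.fit(expression, pathogen)
print(np.where(selector.support_)[0])       # indices of the retained genes
```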

More Info: Poster, Blog

Signatures of Treatment Resistance (R, Machine Learning, Support Vector Machine)

Regions of HIV RNA susceptible to treatment resistance can be identified using machine learning

HIV treatment was revolutionized with the development of “highly active antiretroviral therapy” (HAART) in 1996: a three-drug cocktail targeting two distinct mechanisms unique to HIV. This proved to be incredibly effective at stopping the progression of HIV. While new drugs continue to be developed, HIV’s rapid mutation rate has led to the development of resistant strains. I sought to understand the impact of antiretroviral medications on the genomic evolution of HIV-1.

Using publicly available HIV genome data, I calculated nucleotide diversity measures and used those to calculate the rate of genomic evolution. I then trained a support vector machine to predict treatment resistance, and used recursive feature elimination to identify specific regions of the genome that appear to be driving resistance.
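
Again, the real analysis was in R, but the two steps can be sketched in Python: per-window nucleotide diversity as features, then a linear SVM with recursive feature elimination to flag the windows most associated with resistance. The sequences, window sizes, and labels below are simulated.

```python
# Sketch: per-window nucleotide diversity -> SVM + RFE over genome windows.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_isolates, n_windows, seqs_per_isolate, window_len = 80, 50, 10, 30
resistant = rng.integers(0, 2, size=n_isolates)   # simulated resistance labels

def window_diversity(seqs: np.ndarray) -> float:
    """Average pairwise differences per site (nucleotide diversity, pi)."""
    n = len(seqs)
    diffs = [np.mean(seqs[i] != seqs[j]) for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(diffs))

# Feature matrix: one diversity value per genome window per isolate.
diversity = np.array([
    [window_diversity(rng.integers(0, 4, size=(seqs_per_isolate, window_len)))
     for _ in range(n_windows)]
    for _ in range(n_isolates)
])

selector = RFE(SVC(kernel="linear"), n_features_to_select=5, step=1)
selector.fit(diversity, resistant)
print(np.where(selector.support_)[0])   # genome windows flagged as most informative
```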

More Info: Poster, Blog

Identifying Subtypes of Diseases (Peer Reviewed, R, Machine Learning, Association Rule Mining, Regression)

Using a combination of machine learning approaches, we can identify subtypes of diseases with heterogeneous treatment outcomes

As we’ve gained more data and information about disease processes, we’ve learned that diseases we once thought of as homogeneous (e.g., cancer) are actually made up of many different diseases that all present similarly. Now, with ubiquitous electronic health record data, it’s increasingly possible to detect this heterogeneity in ways that weren’t feasible before (informaticists call this “EHR-based clinical phenotyping”). The research group I worked for set out to do this for pre-diabetes.

I used association rule mining (ARM) to identify groups of patients that were likely to progress to diabetes. I then used penalized logistic regression to generate propensity scores for patients receiving statins, and used those scores to match patients within the ARM-defined groups. Through this process, three different groups of patients, with different risks of diabetes progression, were identified. This work was presented at an American Medical Informatics Association meeting.
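
The published analysis was in R; the matching step can be sketched in Python as below, with simulated data: a penalized logistic regression produces each patient's statin propensity score, and statin users are then greedily matched to non-users on that score within each ARM-defined group.

```python
# Sketch: propensity scores from penalized logistic regression, then greedy
# nearest-neighbor matching within each ARM-defined subgroup. Data is simulated.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
covariates = rng.normal(size=(n, 6))
on_statin = rng.integers(0, 2, size=n)
arm_group = rng.integers(0, 3, size=n)   # subgroup label from association rule mining

propensity_model = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
propensity = propensity_model.fit(covariates, on_statin).predict_proba(covariates)[:, 1]
df = pd.DataFrame({"group": arm_group, "statin": on_statin, "propensity": propensity})

def nearest_neighbor_match(group: pd.DataFrame) -> list[tuple[int, int]]:
    """Greedy 1:1 match of statin users to non-users on propensity score."""
    treated = group[group.statin == 1].sort_values("propensity")
    control = group[group.statin == 0].copy()
    pairs = []
    for idx, row in treated.iterrows():
        if control.empty:
            break
        j = (control.propensity - row.propensity).abs().idxmin()
        pairs.append((idx, j))
        control = control.drop(index=j)
    return pairs

matched = {g: nearest_neighbor_match(sub) for g, sub in df.groupby("group")}
print({g: len(p) for g, p in matched.items()})   # matched pairs per subgroup
```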

More Info: Paper

Drivers of Utilization (Peer Reviewed, R, Observational Design, Regression)

Social factors were identified as more predictive of care coordination time than medical complexity

Resourcing care coordinators has been an ongoing issue, made particularly prominent by the changing payment schemes in healthcare (e.g., patient-centered medical homes, accountable care organizations, value-based contracts, etc). One common approach is to pay clinics based on the medical complexity of their patients, reasoning that more complex patients likely take more time and resources from care coordinators. However, anecdotally, that was not the case, so this project set out to understand the actual drivers of care coordination time.

Two different data sources were identified: a database compiled by case managers and care coordinators (including utilization billed in 15-minute increments) and data from electronic health records. I combined these sources and used a linear model trained with stepwise regression to identify drivers of care coordination time. Social factors, like housing status, were associated with substantially more coordination time than medical complexity. This work was presented at an American Public Health Association meeting.
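
The original model was a stepwise regression in R; as a stand-in, here's a Python sketch using scikit-learn's forward sequential selection over a few invented features, with a simulated outcome in which housing status drives coordination time more than medical complexity.

```python
# Sketch: forward feature selection for a linear model of coordination minutes.
# Feature names, values, and effect sizes are invented for illustration.
import numpy as np
import pandas as pd
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 400
X = pd.DataFrame({
    "medical_complexity": rng.normal(size=n),
    "unstably_housed": rng.integers(0, 2, size=n),
    "age": rng.integers(18, 90, size=n),
    "num_chronic_conditions": rng.poisson(2, size=n),
})
# Simulated outcome: coordination minutes driven more by housing than complexity.
minutes = 30 + 45 * X.unstably_housed + 5 * X.medical_complexity + rng.normal(0, 10, size=n)

selector = SequentialFeatureSelector(LinearRegression(), n_features_to_select=2,
                                     direction="forward")
selector.fit(X, minutes)
print(list(X.columns[selector.get_support()]))   # features retained by forward selection
```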

More Info: Abstract, Slides