CHAPTER FOURTEEN: PRODUCTIVITY

One of the most common currencies for scientists is publications. There are, of course, many other measures like patents, grants or other funding, Investigational New Drug (IND) applications, (text)books, code repository forks/downloads, and many more. In general though, every measure of productivity breaks down into some condensed summarization of a ton of work, crafted for consumption by others.

Across almost all fields, not just science, people use resumes or curriculum vitae (a CV is basically just an extra long resume) to quickly and clearly communicate career productivity. Of course, resumes and CVs are imperfect, as they will never fully capture the nuance of someone’s life and experience, but that also highlights how important written communication is. There’s a whole rant here about scientific productivity being measured most commonly by written communication, while STEM majors never really get trained well in written communication, which is in part what launched my whole aspirational goal this month of writing 50k words. (Aside, to be clear, 50k words is not happening, as it’s 7PM on November 30 here and I have at the moment just over 15k words, so I don’t think it’s possible to crank out another 35k words in five hours. I’ll probably write tomorrow a conclusions or summary post that’ll continue from this one as a reflective retrospective of how I think this inaugural #NovemberWritingChallenge went. Spoiler: I’m of course disappointed that I didn’t even come close to 50k, but also I’ve learned a lot about what holds me back from writing.)

I’ve written previously about the COMMUNICATION aspect of publications and other avenues for dissemination of scientific work, but since the last chapter on MESS I’ve been thinking about it from the standpoint of repeatability and reproducibility in scientific literature. First, to clarify, repeatability typically refers to the same person repeating the same experiment with the same system and getting the same result, while reproducibility refers to a different person attempting the same experiment with the same or similar system and reaching the same conclusion. Anything published should really be repeated, because if you can’t get the same result you’re claiming in your publication, then you probably shouldn’t publish it. But reproducibility can be harder. Borrowing from statistical descriptions of missingness, in my mind there’s irreproducibility “at random” and irreproducibility “not at random”. The former is where biology is just hard and there’s some hidden aspect of the experimental system that is unknown, and this is where the scientist is not really at fault for irreproducibility. Irreproducibility “not at random” is where the scientist just did a terrible job of describing the methods, the system, or the analysis. I’m assuming laziness here and not a straight malicious lack of detail, although there are of course examples of malicious intent, like manipulated data or outright faked analyses.

Irreproducibility “not at random” is at least in part a problem of bad methods, speaking to the specific section of a scientific manuscript. Methods sections are the second easiest place for me to start writing a paper, aside from the results or figures, because I’m just describing what I did. Usually my methods are pretty generic and widely used in the field so they don’t need much detail, but sometimes there are specific twists that I’ve added. It’s not unlike cooking and having recipes. Most people have some idea of what goes into a chocolate chip cookie recipe, but some people might have a specific twist based on their personal taste preferences, like using brown butter instead of regular, or based on necessary accommodations, like adjustments to account for baking at high elevations. So the equivalent is that maybe the scientist behind the irreproducible work just doesn’t realize that their method works great for them because their experiment is being done in Denver, while a scientist at sea level in Boston needs a different recipe, i.e. protocol or method.

Not to get back into the whole artificial intelligence (AI) debate, but maybe AI would be helpful for reproducibility of analyses. I’d be shocked if a lot of the papers coming out now aren’t using analyses that were written, at least in part, by AI like ChatGPT, Claude, etc. If people are already relying on AI to write their data analyses (and therefore guide their conclusions), then it’s not a huge leap to use the same AI to take things one step further and capture the whole “chat” and publish that as a supplemental method. At the bare minimum, people should be capturing the code and publishing those scripts or notebooks alongside their papers for repeatability, but I know a lot of people put terrible code out there that can’t be rerun by anybody else due to hardcoded paths or missing dependencies, and many more people just never even make their figure-producing code available.
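To make that bare-minimum version concrete, here’s a hedged sketch of what a rerunnable analysis script can look like (the script, its file names, and its structure are all hypothetical illustrations, not taken from any particular paper): paths come in as command-line arguments instead of being hardcoded to someone’s laptop, and the logic lives in a plain function that anyone else can call.

```python
# Hypothetical example of a rerunnable analysis script.
# Nothing here is from a real paper; the point is the structure:
# no hardcoded paths, standard-library only, logic in a testable function.
import argparse
import csv
from pathlib import Path


def summarize(in_path: Path) -> dict:
    """Report the basic shape of a CSV so downstream figures have provenance."""
    with in_path.open(newline="") as fh:
        rows = list(csv.reader(fh))
    # Assumes a single header row; everything after it is data.
    return {"n_rows": len(rows) - 1, "n_cols": len(rows[0]) if rows else 0}


def main() -> None:
    # The input path is an argument, not /Users/someone/Desktop/data_final2.csv
    parser = argparse.ArgumentParser(description="Reproducible summary step")
    parser.add_argument("input_csv", type=Path, help="path to the raw CSV")
    args = parser.parse_args()
    print(summarize(args.input_csv))

# Would be invoked as, e.g.: python summarize.py results/raw_counts.csv
```

Pairing a script like this with a pinned dependency list (even just a requirements.txt) covers most of the “can’t be rerun by anybody else” failure modes above.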

Maybe AI could go one step further though? If we capture the protocols, alongside the data processing, and the result-producing post-processing analyses, that should be the most ideal reproducibility scenario. I don’t know exactly what that might look like in practice, but that’s something that would massively help me in implementing other people’s methods in my own lab. Upload someone else’s paper, and have the AI generate a shopping list based on the methods so that I can get all the supplies I need, and then spit out the protocols based again on the methods and a bit of the results, maybe. There’s probably also something like that out there, if only for cooking.

Anyway, all of this reproducibility ramble got me away from the initial thought, which was productivity as measured by writing. In management-speak, what goes into the resume or CV are the OKRs (Objectives and Key Results), and what helps get you to the OKRs are the KPIs (Key Performance Indicators). Setting goals, or even “resolutions” as we’re now coming up on the end of 2025, is usually at the OKR level, but without some KPI to help get you there, you’re probably never going to make your goal or resolution. When you set out to run a marathon, you don’t usually just walk up to the start line and then bang out 26 miles. Usually you decide that you want to run the marathon (OKR) and then break it down into a training plan with gradually increasing mileage each week (KPI). As another example, this whole writing challenge this month to write 50k words (OKR) came with some clear daily mini-goals like shooting for 2k words/day (KPI).

One place I think people struggle, both with methods for papers and just in general productivity measurement, is figuring out whether the KPI is really what the audience needs to know, or if the audience really just cares about the OKR. For the methods section, you really need to be specific and detailed, but for reproducibility, it’s enough to be shooting for the OKR – in fact, it’s probably even better for the scientific community and furthering human knowledge if we can reproduce the idea or conclusion by orthogonal means, rather than directly reproducing the exact experimental conditions which might be correlated but not causative to the conclusion being drawn by the original scientist.

Similarly with personal productivity, it’s easy to punch out a bunch of KPIs and make progress in ticking off to-do list boxes, but if you’re not keeping the OKR in mind, you may be doing a bunch of busy work without making meaningful progress on the real goal. “Not everything that can be counted counts, and not everything that counts can be counted”, as William Bruce Cameron is quoted as saying. A few weeks ago, I was talking about this with some other women at a professional event, where we were discussing the challenges of accommodating alternative paths. Specifically, the conversation turned to the subject of childcare stipends for conferences, and a couple young mothers emphatically supported the idea. To be clear, I love the concept of stipends to help people afford traveling to professional events and career advancement opportunities. That said, a couple other women, myself included, cautioned that bringing children to the conference may prevent you from getting the full value of the conference, which isn’t usually in the formal programming but rather in the informal networking that tends to happen in the evenings after the official programming ends. For me this is a personal observation, as when I’ve tried to bring my child to conferences, it just ends up with me doing a pretty terrible job both professionally and personally, and nobody gets my full attention because I can’t engage fully in either setting. While, yes, I hit the KPIs of “attend conference, give talk” and “conduct bedtime routine”, I didn’t really make progress on the OKRs of “advance career” or “be a present, available parent”. Still, I also recognize that I’m lucky to have the option of leaving my child with my partner when I go to conferences now, so that while I miss spending time with my family when I travel to events, it’s a choice that not everyone has the luxury of making.

I certainly wish there was a better system, both speaking specifically of conference networking and also more broadly of productivity measurements, but until scientific society at large changes, I think we’re stuck with the aforementioned productivity metrics and figuring out tools to manage them or otherwise cope.

CHAPTER TWELVE: COMMUNICATION

A brief break from the deep subject matter expertise posts, because it has me thinking broadly about scientific communication. Specifically, that formal training (i.e. undergrad and grad school) never prepared me for real scientific communication.

The focus in school was always communicating science to non-scientists, usually children from middle school through high school. While that focus got kids interested and excited about STEM subjects, it really did me a disservice in my professional life and probably even my personal life.

There are so many more facets to scientific communication that really don’t get addressed sufficiently in higher education circles. I’m thinking about how I have, professionally, had to figure out communicating very technical, jargon-heavy concepts to a variety of really smart people who have expertise outside of my hyper-focused niche. The way I might approach communicating what I do (mass spectrometry, proteomics, transcription factors, etc) completely depends on the audience. A venture capital investor, for example, is thinking about things from a financial perspective, while a pharmaceutical scientist is thinking about this from a drug discovery perspective, and an oncologist is thinking about the outcome and impact on patients in a clinical trial. Nobody is “wrong”, nobody is smarter than anybody else, everyone’s just thinking about the same thing from a different perspective and with a different lens. 

It follows, then, that communicating the same topic needs to be framed specifically for different audiences. It doesn’t mean that the core concepts are changing, just that the language needs to be different for the most effective communication. Maybe language is an interesting parallel, where translating the same message into different languages shouldn’t change the core message, but using the right language for the right audience is going to make communication much easier than forcing everyone to do the translations themselves, or even worse just zone out and not even listen to the message at all.

None of my undergraduate or graduate school outreach opportunities touched on this concept of science communication to adults, really. As far as I can recall, it always focused on making fun, hands-on “labs” or demonstrations for kids to learn scientific concepts. Then I got into the “real world” and suddenly cute little demonstrations aren’t really working anymore.

An obvious example of where scientific/STEM communication can go really right or insanely wrong is with policy. Policy makers might consult scientists and doctors and other professionals, piece together all of the best expert advice to write into laws or regulations or recommendations, but without effective communication, policies are relying solely on people following guidance based on an appeal to authority. Sometimes that works, but a lot of times it doesn’t. In my own work, appeal to authority has very rarely worked out. I don’t have much authority outside of my hyper-niche specialty, and so the communication (good or bad) is weighed much more heavily.

I think there’s a lot we could learn about communication, as a scientific community, from novelists and screen/scriptwriters. Crafting a story can help hook an audience into a message. This is another place where grad school trained me, but maybe trained me specifically to give scientific talks to scientific audiences; I’ve had to relearn how to build a “story” based on the sort of “plot” or pacing that communicates the message best. I think there are parallels between classic literature tropes (e.g. hero’s journey, tragedy, comedy) that could translate really well through the lens of telling a scientific story. There are some examples of nonfiction biographies or histories that have done this for biotech stories, like the books Living Medicine and The Billion-Dollar Molecule, which retell the history of bone marrow transplants and the founding of the pharmaceutical company Vertex, respectively, but they do it with a framing that helps tell the historical story and scientific journey in a way that is nice to read.

I’m sure with both of those books that the exact history isn’t perfectly captured, and in part can never be because the way the stories are told involves so many peoples’ specific memories and emotions and motivations, but the general approach is something I really admire from a communication standpoint.

I think the books work so well, in part, because the reader can pattern match the general story arc to other novels. There’s some backstory setting up the scene and the characters, there’s some tension or suspense that puts the main characters through a challenge, and then there’s a resolution by the end of the book. There’s some subplots along the way, maybe some romance or comedy. 

Lack of storytelling is why a lot of scientific communication falls so flat. I’ve sat through a lot of scientific presentations that are just a linear chronology of all the experiments that the presenter has ever done. The right story, though, is almost never the chronological story. Although there are some contexts where the chronological story is motivational to the audience, I think most audiences are thinking “Why should I care?”, so you need to directly say out loud why they should bother listening and paying attention to you. For startup pitch decks, usually the first slide either directly states the problem that the company proposes solving, or it presents the financial opportunity (market size); both directly tell the audience why they should care, because it’s either a problem that they themselves can recognize or an opportunity to make money. For a scientific presentation, usually an element of teaching goes a long way, so that the audience cares because you’ve taught them something new. (Put another way, making the audience feel smart/smarter is a good motivator for a scientific audience, who is probably always looking to learn more.)

In scientific manuscripts for peer reviewed journals, there’s definitely a certain pattern that I’ve come to expect from papers. For a basic four-figure paper, the first figure is some kind of method or overall experimental schematic. The second figure is a high-level visualization of the data, like a heatmap or a dimension reduction like principal component analysis (PCA), t-SNE, UMAP. The third figure is a deep dive into some slice of that big dataset, just visualized differently. And then the final figure is some orthogonal experiment to prove figure three correct, and/or a schematic of some biological mechanism that the data suggests. Pattern matching that template helps get through papers pretty quickly, because I can just flip to the figures and usually they follow some general flow like that.

Having some portion of the communication being predictable helps get the message across. If the message itself is unexpected or difficult, then having the medium be predictable or the presentation be predictable can help, I think. Predictability isn’t a bad thing. There’s something comfortable about knowing what to expect, and when it changes abruptly, it can be jarring. I’m thinking about things like when your favorite band has a particular style of music that they produce, but then there’s that one weird random song that doesn’t fit the vibe and sticks out badly. (Of course, some people are amazing across genres, and there’s some scientists like that, too, who can easily hold their own across multiple fields.)

Something I use almost always in my scientific presentations is a “three-act” structure. My talks almost always start with some brief introduction to set the “scene”, maybe 10-15% of the total talk. Then, I set up three main take-home messages for the audience; each of the three is about 25-30% of the talk, and they somewhat build on each other. Finally, with the remaining 10-15% of the talk, I have some “cliff-hanger” future work – not whatever the next obvious logical step would be based on what I’ve said, but some more distant future vision. It’s not something that just happens; it’s something that I intentionally do whenever I sit down to organize a talk. I usually start by deciding on the three main messages, set each of those up, then do a little scene-setting/exposition at the beginning that gives just enough context for the three messages, and then a little bit of forward-thinking “cliff-hanger” at the end. I don’t claim to be the best presenter or anything, but I’ve been invited to speak quite a bit so I figure something must be resonating.

None of this is meant to be an immediate solution to any scientific communication struggles, and again to be clear I don’t mean to imply that I’m a significantly better communicator than anybody else. In part, this is because I don’t think we scientists get enough training in communication, and in part because it’s just hard anyway, even if we did get trained. I’m inspired by writers, though, because I think if we structured scientific communication to lean on common literary techniques, like plot devices and story structure, we’d probably capture a wider audience and have more buy-in and support from policy makers, funding agencies, and even the general public.

CHAPTER NINE: LIMINAL

The purgatory of being in-between naive beginner excitement and rational experienced master is basically where I live for most skills, just right there in the Trough of Disillusionment on the Gartner hype cycle. It feels like there’s a ton of support for “getting started”, and a ton of support for highly niche specialization, but just not a lot that helps you get through the purgatory of “intermediate”. This goes for learning a new language, or how to code, or baking, or entrepreneurship, or writing, or managing, or whatever. There’s so much out there to support the zero-to-one initialization, and then there’s deep subject matter expertise, but the “messy middle” is really hard, seemingly endlessly wandering through a liminal space.

I don’t really feel like I’m hitting “enlightenment” in anything, just maybe finding how much farther down the “disillusionment” goes, but I’m also not anywhere near “inflated expectations” for any of my skills so that seems to put me pretty solidly in the “intermediate” range. I love picking up new things – I mentioned before that I challenged myself to learn how to bake macarons, and that I wanted to learn how to code so I joined a machine learning lab – and it’s so frustrating to get the basics down then just have zero resources to get through “intermediate” to “fluent” or “advanced”. There’s the 10,000 hours rule, which says that it takes 10,000 hours to master a skill, but that seems to emphasize the point that there’s always tons of resources to help you with the first 10-100 hours, but after that it’s just supposed to be grinding until you reach near-mastery and can get into the ultra-deep niche groups, I guess.

So how do you get through “intermediate” to be considered a “master”? Although it’s called the 10,000 hour rule, realistically that works out to about five years of fairly dedicated training, so about the average American science PhD program. The first year is structured classes like high school or college, with more in-depth materials, but after the first year or two it’s all unstructured research, largely self-guided with some input from your PhD advisors and thesis committee members. In the end, when you defend and get the “PhD” letters after your name, society generally recognizes you as a “master” in that subject, which is itself a hilariously sub-sub-sub-field specific niche, a tiny drop in the vast vast ocean of human knowledge.

In the things where I’m “intermediate”, I don’t feel like I make a lot of progress after those first 1-2 years of structured learning. Maybe because the rest all needs to be self-guided? I wish there was more structure out there for intermediate anything, to at least learn more about what I need to learn. 

I probably just need to learn to embrace the journey that is being “intermediate” and find ways to enjoy the process more.

CHAPTER EIGHT: ACCOUNTABILITY

Yesterday’s confession had me thinking about the “looking busy” aspects of building or working “in the open”. The thought had crossed my mind that I could just avoid posting anything until I’d built up a ton of writing that I would push all on one day in order to make up for all the days I’d missed, then make it seem as though I had been perfectly productive the whole time. I’m not sure why I thought of that. Maybe the shame of missing days, but then I’d never actually committed to a specific number of words per day, just the overall monthly goal of 50k, which in theory could have been 50k all at once.

I’ve mentioned before that I’m more of a “point me in a direction and then let me work on it independently” type, so having this daily-posting accountability feels kind of weird, even if I’m the only one holding myself accountable. And this working style is great for some of my work, where it’s almost entirely dependent on just me to get it done, but a bigger proportion of my work is collaborative and can’t be done by myself. For those things, I definitely envy people who are more collaborative and do best with the social aspects of working in a group. I just get kind of antsy when I’m in a group project and I’m waiting around for someone else to hand something off to me, because I think about all the other things I could be doing while I’m waiting for other people to get around to my handoff.

There’s probably also a toxic side to “building in the open” or otherwise actively involving people in touch points that they can’t really influence themselves. It strikes me as a sort of “performative productivity” where people make their to-do list into something that needs to be witnessed by others while they’re actively doing it. If there’s more than 2-3 people at the table, but it’s only two people talking to each other the whole time, I feel like it becomes “performative” in the sense that all the other people are really just an audience to the two people having the discussion. Even worse is when the meeting is just one person talking at everyone else, with no discussion. There are some edge cases there, like presentations or lectures that are designed to be seminar-style dissemination of information, rather than discussion, but I think we all know the type of meeting I’m referring to, where you can go the entire meeting without ever saying anything.

There are some caveats to that too, where even if the meeting is designed to be discussion focused, it turns into just a few people dominating the conversation due to power dynamics or personality traits. Sometimes the meeting is just too many people and there’s no chance for everyone to weigh in on every topic. I think a thoughtful agenda can mitigate some of that, and I’m a fan of the “Inform, Discuss, Decide” format where everyone gets a set of “pre-read” information to help them prepare for the meeting or orient to the material, then a handful of discussion points with discrete decisions that have to be made.

I have a lot of opinions about meetings because I spend probably the majority of my week in some kind of meeting, whether for my full-time job work or committees that I volunteer to sit on or for my professional society involvements or for my own career development mentorship. There’s some I look forward to and some I dread. 

I will say – and this probably marks me as a “manager” – that there are some interesting tradeoffs with in-person meetings versus virtual meetings. I think virtual meetings, overall, tend to be more productive because there’s a layer of impersonal-ness to them, being through a screen or a phone. In-person meetings, while less productive, are more emotional (for better or worse) due to the interpersonal-ness of seeing or sensing people’s body language. Both are good, but for different reasons. I actually prefer 1-1 meetings to be in-person, but meetings with more than 3-4 people to be virtual – a 1-1 is more about the interpersonal relationship while the larger meetings are more about getting something decided. Well, usually the big meetings are about getting something done, but not always; there are certainly some larger meetings that are also about interpersonal relationships or general vibe checking, which is best done in person versus remote.

On one hand, then, I understand the “return to office” argument of morale and culture building. I do think hybrid teams end up being more productive overall, having a mix of both, although I’m not using any statistical data to back that up, just my own vibes and my own bias from having a scientific background, where at least some work must be done in person at the bench and some work, like computational analyses, can be done remotely.

Working remotely gives me space and time to think a little deeper. I also prefer the “give me a direction and then let me work” management style, so it makes sense that I wouldn’t want to be constantly “building in the open” with collaborative group-project style work; I’d rather get to some meaningful milestone before I share what I’ve been working on.

While that’s what fits my work style best, I also completely understand it’s not always practical or responsible to go near radio silence for stretches of time while I’m working things out (or, as yesterday proves, not doing anything…); there needs to be some way to measure my progress and my productivity, and ensure I’m meeting expectations and not stuck or blocked without asking for help. Sometimes that can be as simple as keeping a running document with analyses or outcomes that can be accessed by all the stakeholders – for example, when I’m writing a paper with multiple coauthors, using a shared document where they can see my progress (or lack thereof) and adjust their expectations or reach out with questions and comments accordingly. Sometimes, it’s just shooting a quick email saying “hey, this and that to-do item is on my list, I haven’t forgotten, I just got stuck in step X”, like when someone is waiting on a figure or a dataset. Having some way for my managers or my mentors to get visibility into what I’m doing (while giving me the space to do it) requires that I understand whether my manager or mentor is okay with that approach.

Maybe that’s why the CONFESSION of not having made progress was hard for me to own up to. There’s no mentor or manager here, just me holding myself accountable, and that was tough.

CHAPTER SEVEN: CONFESSION

Well, clearly I stalled out during the conference. In my defense, I did actually stay pretty busy and made the most of the in-person face-to-face meetings, so I’ll give myself some space for grace. Admittedly, I actually sat down a couple of times to churn out some words, but the combination of the blank screen and the shame of having missed days had me deleting whatever I wrote and laying in bed instead. I’ll have a lot of chapters and words to make up if I’m going to hit the 50k goal by the end of the month, but there’s still time! There’s still a chance! 

In part, I probably just need to make this easier. I’m already noticing I’m holding myself up to a certain quality standard, when the whole point is to break that habit and just get myself into a quantity mindset where editing can happen later, and the main goal is to just get words onto the screen, even if they’re deleted later. It’s that “editing as I go” process that holds me back in my grant writing and manuscript preparations, where I have some ideal story in my head that I never end up getting out onto the screen because I’m mentally in the revising stage before I even have anything to revise. If that doesn’t make sense, you’re probably better at grinding out that first draft than I am!

Funny enough, I’ve also had a lull in my leisure reading, so I wonder if there’s also some correlation between consuming content and generating it? It might also just be a spurious correlation, where I read for leisure when I have “spare” time, and I also write more when I have spare time; the basis is having spare time for both, but they end up looking correlated from dependence through that shared variable.
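That “shared variable” story is easy to sanity-check with a toy simulation (all numbers here are made up for illustration): let “spare time” drive both reading and writing independently, and the two come out correlated even though neither influences the other.

```python
import random

random.seed(0)

# "Spare time" is the shared driver. Reading and writing each depend on it
# plus their own independent noise -- never on each other.
spare = [random.random() for _ in range(10_000)]
reading = [s + random.gauss(0, 0.2) for s in spare]
writing = [s + random.gauss(0, 0.2) for s in spare]


def corr(x, y):
    """Pearson correlation, written out longhand with the standard library."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    var_x = sum((a - mx) ** 2 for a in x) / n
    var_y = sum((b - my) ** 2 for b in y) / n
    return cov / (var_x * var_y) ** 0.5


# Clearly positive, despite zero direct causal link between the two.
print(round(corr(reading, writing), 2))
```

Conditioning on the shared variable – comparing reading and writing only across days with similar spare time – would make most of that correlation disappear, which is one way to tell the spurious kind from the real kind.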

I’m also just openly admitting that I’m not writing as much as I hoped. Earlier today, I confessed to not making any progress on a manuscript for which I’d promised to have some updates by now, and while it wasn’t surprising or really blocking anybody’s work from getting done, it felt good to just admit that I’d dropped the ball. It doesn’t feel good to not have made progress, but it felt somewhat relieving to just admit that I’m struggling.

So hopefully this can be the day that I get myself back into the swing of things, and make a renewed effort to build up the writing habit!

CHAPTER ONE: PEDAGOGY

Top of my mind today was getting my lecture slides submitted for the ASMS Fall Workshop on Fundamentals of [Mass Spectrometer] Instrumentation. Although I first got my lecture topic assigned back in June (“Instrumentation for Quantitation of Large Numbers of Analytes”), I honestly didn’t think too deeply about it until early October when we were asked to submit a rough draft on October 14. The rabbit hole here is the theme of “procrastination”, but in an attempt to stay on topic, let’s explore my approach to building lectures.

First, a disclaimer that I’m not formally trained in education or teaching. I got a crash-course on putting together lesson plans and curricula, and general teaching/learning approaches, when I first arrived in South Korea for my Fulbright scholarship to teach English – specifically, English speaking and listening at a rural high school, although I also prepped a smaller group of high schoolers for the Test of English as a Foreign Language (TOEFL). (Hello out there, Chungnam Internet High School!) This was back in 2009, so all our materials were paper printouts and chalkboards, and unfortunately I don’t have anything besides my memory to pull from, but a few things did stick with me and I keep them in mind when I’m putting together lectures today.

Second, I never have to teach from a textbook or follow a prewritten curriculum of learning objectives, so I have a lot of freedom in what exactly I deem “important”. I assume that if I had to teach specific exam materials or follow a textbook, a lot of my approach would fall apart. I’m usually lucky to get even a general theme/topic to teach about, let alone specific objectives or test materials.

Third, these days I’m always teaching professionals who are for the most part self-motivated to learn the material, so classroom management isn’t much of a concern. When I was teaching the high schoolers in Korea, classroom management was much more of a challenge. Punishments were handled primarily by their “real” teacher, who accompanied them to my classroom, but it was still expected that I make an effort to maintain the students’ attention and discipline where/when needed. These days, I might teach some undergraduates here and there, but usually I’m working with professionals who are paying good money to take my courses, so my motivation is more my own desire to make sure they get their money’s worth of material.

Finally, all of this is predicated on and influenced by my own personal learning preferences. I think, in general, preferred teaching style complements preferred learning style; similarly, your preferred management style mirrors how you like to be managed, how you mentor others mirrors how you prefer to be mentored yourself, etc. More on that in a later post, I think.

With all that established, here’s how I approach putting together a lecture or workshop.

STEP 1. Decide on the 2-3 main things you want the participants or students to remember.

Generally, starting at the end is always a good idea with any kind of presentation or communication. If you set the objective before you get carried away in the details (or in recycling old material), you can keep that “North Star” in mind while building out the background, proof, and conclusions for each of the main points you need to make. (Note: The “North Star” as specific verbiage comes up a lot in the startup space, in the sense of keeping some guiding principle or vision and tying everything you do back to that singular goal.)

These 2-3 things become your “home slide” (a term I’m shamelessly stealing from the seminar course at the University of Washington’s Genome Sciences department), a slide that you’ll use as a recurring anchor point throughout your lecture or presentation. It’s a sort of agenda or syllabus that reinforces the points you’ve made, previews the points you’re going to make, and finally serves as a summary reminding the participants or audience what you want them to remember from your lecture. (This ties into “Step 3: Repeat everything three times”.) Sometimes I’ve seen this done so that each point on the home slide is a question that gets answered at the end of its section, which also works.

Structuring your lecture around key concepts sets up multiple break points for participants to check in with themselves and make sure they understand each point before you move on to the next. It also makes it crystal clear what you want the participants to learn.

STEP 2. Incorporate more than one learning style.

This whole entry is pretty poorly cited (actually, there aren’t really any references), but I recall some learning styles being visual, auditory, and hands-on. (There are probably real pedagogical terms for these, but again, I’m not formally trained.) Personally, I learn best with the “hands-on” approach: I can read a textbook until I have it memorized, or listen to hours of lecture, but I won’t really grok something until I have to use/apply/do it myself. Take learning a new experimental protocol or how to calibrate an instrument: I could read about the theory and watch people do it dozens of times, but it won’t really sink in until I’ve done it myself.

It’s almost always a given that a lecture will consist of a slide deck and someone presenting said slide deck, so arguably there are always at least two learning styles accommodated. However, some slide decks are entirely text, which I think goes against the spirit of visual learning. I try to keep text minimal on my slides and use more pictures, schematics, or even just icons to communicate the core vocabulary I’m using verbally.

And when building a workshop agenda or a lecture that’s more than 30 minutes long, I try to work in at least one “hands-on” or applied piece. A lot of workshops I teach involve software usage (specifically, how to use the Skyline software for mass spectrometry data analysis), so there’s a whole slew of step-by-step tutorials and demonstrations I’ve built up over the years. But for conceptual lectures or workshops, sometimes this can be done by posing a thought question that bridges into application. For example, when I’ve taught statistical design of scientific experiments, I’ve posed some sample sets and asked participants to either think for a minute or divide into groups to talk about how they might block and randomize those samples, given a particular experimental question. When I’ve taught day-long workshops, I’ve closed out the day with a good ol’ Kahoot quiz focusing on the 2-3 main points of each of the day’s lectures, to reinforce the most important things I wanted people to remember from the day.
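
For the curious, the blocking-and-randomization exercise above can be sketched in code. This is a minimal illustration of the general idea, not anything handed out in my workshops; the sample IDs and condition labels are hypothetical.

```python
import random


def block_and_randomize(samples, batch_size, seed=0):
    """Assign samples to batches so each condition is spread evenly
    across batches (blocking), then shuffle the run order within each
    batch (randomization).

    samples: list of (sample_id, condition) tuples
    """
    rng = random.Random(seed)

    # Group sample IDs by condition, e.g. "treated" vs "control"
    by_condition = {}
    for sample_id, condition in samples:
        by_condition.setdefault(condition, []).append(sample_id)

    # Shuffle within each condition so batch assignment isn't tied
    # to collection order
    for ids in by_condition.values():
        rng.shuffle(ids)

    # Deal samples round-robin across batches, condition by condition,
    # so each batch gets a balanced share of every condition
    n_batches = -(-len(samples) // batch_size)  # ceiling division
    batches = [[] for _ in range(n_batches)]
    i = 0
    for ids in by_condition.values():
        for sample_id in ids:
            batches[i % n_batches].append(sample_id)
            i += 1

    # Finally, randomize the run order within each batch
    for batch in batches:
        rng.shuffle(batch)
    return batches
```

With, say, six treated and six control samples run in batches of four, each batch ends up with two of each condition, in a shuffled order within the batch.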

As a super-niche technical aside, I think this is one of the hardest barriers for (almost) all the mass spectrometry and proteomics workshops out there. I can think of only three that include a wet-lab “hands-on” component: Cold Spring Harbor’s Proteomics course, the MRM Proteomics workshop, and Brett Phinney’s proteomics school at UCD. To my knowledge, there’s no undergraduate, masters, or PhD program specifically for mass spectrometry or proteomics, unlike genetics, so to break into the field the only real options are to start out in analytical chemistry or biology and pick up the mass spectrometry or proteomics through research experience.

STEP 3. Repeat everything three times.

We were drilled at Fulbright teacher training to repeat everything at least three times, ideally once in each learning style (Step 2). Thinking back, this is also how I was officially trained in assays during my internship at Wyeth Pharma (since acquired by Pfizer), with a “watch one, do one together, do one being observed” triplicate before I was signed off as trained. I still try to do training in the Talus lab with that in mind, and don’t expect independence on any task or protocol until we’ve done at least one iteration of “watch one, do one together, do one observed”. This also follows Step 2’s use of more than one learning style, with one being visual and one being hands-on.

As a personal aside, it also follows some of the parenting books I’ve read, like Hunt, Gather, Parent, which (although problematic in some ways, in my opinion) raises a good point that toddlers and young kids learn a lot by observing and then mimicking.

This is maybe something I took too much to heart, because some of the faculty feedback I recall receiving during my PhD training was that I repeated myself too often in my research report presentations. Habits die hard, I guess.

FAILURE MODE 1. Firehose of information.

A lot of really smart people seem to have a hard time placing lecture material in context and default to a tidal wave of information. I’m definitely guilty of the firehose approach, especially on super-condensed course timelines where I’m trying to cover as much ground as possible in limited time, hoping that my 2-3 main points per lecture (Step 1) will be an oasis for participants to hang onto while I barrage them with references, rabbit holes, and random factoids that might be jumping-off points for them to dive deeper in the future. Without those 2-3 main points to anchor the core take-home messages and reiterate the high-level objectives, a deck crammed full of paper citations and heavy content is disorienting and disheartening for participants, in my opinion.

FAILURE MODE 2. Overwhelming jargon.

Mass spectrometry and proteomics are hugely guilty of jargon overload. The jargon becomes almost a second language, and teaching in a “second language” makes it even harder for participants trying to learn core concepts. Lecture material should strip away as much jargon and as many acronyms as possible, and ideally provide a dictionary or quick reference for participants to keep handy as they navigate the content.

FAILURE MODE 3. Failing to connect to the audience.

The exact same lecture topic should be presented completely differently if the audience changes. A lecture on “figures of merit” given to a room full of analytical chemists looks completely different from one given to a room full of biologists. And even more so if the room goes science-adjacent, like operations or business development. This kind of reframing is hard to do without empathy: the audience isn’t more or less smart, they just come with a different set of prior knowledge and a different motivation for learning the material. The 2-3 main points that an analytical chemist should walk away with from a lecture on “figures of merit” should be different from the 2-3 main points a Vice President of Business Development walks away with. Starting from that very first step, then, the entire content shifts to match the learning objectives.

SUMMARY

I wouldn’t call myself an excellent teacher, but over the last 16 years (yeesh, that math hurt to realize) since I got that initial crash-course on teaching and pedagogy, I’ve continually refined my specific lectures and my overall approach, and I’m getting pretty good course reviews these days. I’ll always be iterating and incorporating new feedback, but I think these core concepts will probably always remain a foundation in how I put together teaching material.

EPILOGUE

On my run today, I let my mind wander and spiral across the idea of “pedagogy” and teaching. Below are some of the things that spun out of the main theme as I jogged through the park on a beautiful autumn afternoon.

I struggled quite a bit in college as I worked toward my bachelor’s in biochemistry and molecular biology. My degree required multiple chemistry courses: inorganic, organic, and physical (thermo and quantum); however, hilariously, I did not have to take analytical chemistry, which is the one I’ve basically landed in professionally. It also required multiple physics courses: mechanics, electricity and magnetism, and wave motion and quantum. My grades were, as the now-classic Chernobyl HBO series meme goes, “Not great, not terrible.” I started pretty strong, but picked up more and more C’s as I got deeper into the trenches. I wonder if part of it was that the lab portions got less and less “hands-on” and more abstract as the semesters went by. I loved the inorganic and organic chem labs, where you got to “make” things; the physical chem labs were more mathematical proofs and practice problems than the hands-on stuff that helped me learn. (Not that I’m suggesting I know of any hands-on labs for quantum that would have helped me on the exams; I’m just reflecting back on my own failures and wondering what I could learn from them.)

While having a “hands-on” preference might suggest that I don’t learn well by reading or listening, I do love to read, although I’m not sure I really fully read every word. Instead I kind of skim over the lines and pages and get the general vibe, which is great for leisure reading but pretty terrible for studying. When I applied for the Fulbright to Korea during my senior year of undergrad, I started studying the language with two semesters of Korean classes. Nothing stuck quite as much as when I moved there and was suddenly fully immersed: my home-stay family spoke some English, but obviously used Korean with each other, so I was surrounded at home, at work, and going about my day. I picked up so much more Korean with that kind of hands-on practical experience, even things I should have learned during undergrad Korean classes, like simply introducing myself.

It’s no surprise, then, that when I went to graduate school, on the advice of my excellent post-bacc PI to learn statistics and programming, I ended up with my PhD joint-advised in a machine learning laboratory, having only just learned the very first basics of coding ~6 months prior. (I did, technically, take some Visual BASIC in high school and then again in college, but that’s a poster for another day.) To learn the language of statistics and computational biology, there was no better method for me, personally, than to go “full immersion”, sink-or-swim, by throwing myself directly into the deep end. I wouldn’t say I ended up the strongest computational biologist that ever graduated from that machine learning lab, but I also wouldn’t say that I couldn’t hold my own. I still keep the physical print-outs of my performance reviews from that PI at my desk as a reminder to do hard things.

(PS – I may return to this topic, or branch off it at least, since I already have a few more thoughts about how it relates to science communication. Nevertheless, today I got to ~2300 words, about 3x more than yesterday! Whether it was the theme, or the run, I’ll take it.)