How long do you think is an appropriate time for students to commit to their PhD? If you ask around, the perceived time range varies quite a bit: 3-4 years, 4-6 years, or even double-digit years. If we can't agree on the length of a doctoral degree (unlike med, pharmacy, and law school, which have fixed timelines), there must be other cemented parameters that guide students to graduation? .... Right....?
How do you know when you are ready to graduate?
Most STEM doctoral students travel a similar path. They conduct research until their project is complete, then after writing a thesis and defending it, they are conferred the title of "Doctor." The question is, when does one actually "complete" a research project? Research is never done. One question is answered, which leads to another question, and another, and another.
So how does one judge if a graduate student's work is finished? I tweeted out this question some time ago and received various answers. Most responses cited the number of publications, completion of proposal aims, or the number of years in the program. A few less common answers were "vibe," "loss of funding," "up to the PI," and....
So let's break the most common answers down:
Number of publications: Many programs require 1-2 publications for a student to graduate. Sometimes this is a fair benchmark. Other times, it puts the student at a disadvantage. This parameter fails to account for the support systems within the lab. Some students have no lab techs, post-docs, or collaborators, meaning that publishing is a much more arduous task for them than for their counterparts. If a student is lucky, they receive a project that is low-risk or partially finished, while other students work on high-risk projects for years without a payout.
Completing proposal aims: On the surface, this seems to be an equitable stipulation for graduation. But students' proposals often end up off-topic: projects frequently evolve and take the student in a different direction than what they proposed, rendering this parameter unworkable. If they veer off path for the sake of scientific exploration, should students still be held to the same proposal they wrote years ago?
Years in program: Not all projects are created equal, and not all students put in the same effort over the same period of time. However, I argue that putting a time cap on the degree drives productivity, encourages streamlined research, and motivates PIs to support their students in finishing their projects.
Graduate student labor.
Graduate students are cheap labor, most making about $30,000 a year, and PIs are reluctant to let students graduate and thereby forfeit a valuable resource. Beyond their own thesis project, a grad student is also expected to train incoming lab members, maintain lab equipment, contribute to lab chores, and work on side projects.
We have to ask ourselves if more years in the same program with the same mentor is beneficial to a student's education and training. Is a seventh-year student still learning from their mentor, or are they an underpaid employee receiving typical on-the-job training? Furthermore, extending a student's PhD training can certainly set them back.
Time at school should not be taken for granted.
A long PhD can have detrimental effects on a student's life and career:
From the many student-PI conflicts I've seen, it's naive to believe the current system of arbitrary graduation guidelines is working. To give more protection to graduate students, I propose two policies: (1) A 4-year time cap for students with a Master's degree or a 5-year time cap without a Master's degree, and (2) salary raises for students throughout the program.
Providing a time cap.
With no time cap, graduate students are encouraged to work on non-thesis research and to tackle high-risk projects that are likely not to pan out. A time cap can motivate both the student and the mentor to come up with a practical project plan and remain focused on the proposed thesis work needed for graduation. Consider Parkinson's law, which states that work expands to fill the time allotted. Under this principle, a student will finish their dissertation research in four years if given a four-year time limit. Without one, a student will linger in the program until an external factor (funding, unhappiness, or a job offer) pushes them to wrap up their project.
However, there are plenty of reasons to fight against a time cap: variability between disciplines, discrepancies in work ethic, and the neurodiversity of students. This is why I recommend a 4-5 year time cap with an opportunity to extend. Extensions could be offered to students with disabilities, or when the student, PI, and thesis committee mutually agree that staying in the program is beneficial.
Demanding raises for long-standing students
Paying senior students more money is another way to ensure equity for graduate students. Senior students take on more responsibility and are [usually] more skilled. Granting raises rewards students for their time in the program and also removes the cheap-labor incentive.
It's not all the PI's fault.....
It's easy to blame the thesis advisor. But if PIs aren't given the proper tools and support system to keep a lab running smoothly, can we blame them for wanting to keep students on longer? Here are a few ways institutions can support PIs more:
I am aware that many European Universities have time limits. This post is written by an American graduate student whose program has a loose 7-year limit. I am a fourth-year student who intends to graduate during my fifth year. Even with three pubs (one published, one submitted, and one underway) and a Master's degree, I receive pushback that I should stay well into my 6th year. I'm convinced that more time in my PhD will not further my education or career prospects, but I am certain that it will affect my financial and mental health.
We’re a year into this pandemic, and although the numbers seem to be improving, video-chatting is here to stay. Even once we reach “normal,” the convenience and flexibility of virtual meetings likely means we have plenty of web-based interactions in our future.
So, when it’s time for you to plan your next zoom event, here are a few things to consider:
1. Schedule time for IT issues. Plan on time for connectivity issues and microphone checks for all speakers. And if using multiple speakers and breakout rooms, plan for some adjustment time. Don't expect all changes between speakers to happen smoothly and instantly.
2. Schedule breaks. Just because we are sitting at our laptops doesn't mean we don't need coffee, lunch, and bathroom breaks. During long meetings, your participants' brains need a break from dense content.
3. In-person format ≠ zoom format. I've participated in a few events where the organizer took the same schedule from previous in-person events and used it over webchat without modification. Without in-person socialization, audience members likely have shorter attention spans. So, for web-based events, less is more. Consider shortening your format. We don't want to stare at our computer screens for a full day!
4. Ask questions to improve engagement. You may have noticed there are fewer questions and less participation during webchat events. Utilize the poll functions and don't be afraid to try fun quiz questions with your audience.
5. Take advantage of breakout rooms (when necessary). Breakout rooms can be great for facilitating conversation, e.g., for panel discussions or answering discussion questions. But gauge your audience: will breaking out into smaller rooms facilitate more conversation? Or will it dilute the pool of participants likely to actively participate?
6. Say good-bye to weekends. Often, conferences may be held on a weekend due to availability of parking, hotels, and conference rooms. But with the internet, availability is endless. Please organizers, leave my weekends alone.
Although we complain about zoom-life, I love it! I can easily meet and talk with people around the world. COVID-19 brought about terrible suffering, but at least it acclimated us all to the lovely world of video-chatting.
When we think of the hard-working scientist, we picture someone who enters the lab early in the morning, works through the day fueled by multiple cups of coffee, and late at night can be seen writing on a whiteboard, making the prime discoveries in their field.
Scientists have come to romanticize workaholism. We believe that the person who works the longest hours and sacrifices the most for their work will be the most successful. This idea comes from "grind culture" or "hustle culture."
If we believe in the "grind culture," we believe a lie. The person who works the longest hours is really the person losing out on the enjoyable things in life.
Think about the last time you put in a long day. At the end of that day, did you think, "Oh, I just want to keep doing this forever"? Probably not. Instead, you probably thought about how much you wanted to go home and how little you wanted to come in tomorrow.
As scientists, we can truly love and enjoy our work, but too much of anything can be a bad thing. The grind culture leads scientists to burnout, neglect self-care, and actually become less productive.
When I started graduate school, I believed this lie. I tried to devote nearly every waking hour to my work in order to be successful. At the end of every week, I wouldn't get out of bed on Saturdays until the afternoon. Even if I woke up around 10 a.m., I would just lie there questioning what was wrong with me. Once my partner finally convinced me to get up, I would eat and then start doing more work.
Not only did I waste a large amount of my time, but I also dealt with very high anxiety and depression through this time. Any moment that I wasn't working, I felt guilty. At my core, I believed that not working was an expression that I wasn't serious about my work.
In reality, that is a fairly disturbing notion.
After seeking therapy, I realized that my long hours and constant work are not what made me successful. What made me successful was my determination, problem-solving skills, and ability to develop ideas. Yet, burnout decreases all of these abilities that lead to success.
How to Really be Successful
Therefore, instead of working more hours, you should focus on becoming more efficient in your work. We are all inefficient. In fact, a recent study found that typical workers do productive work for less than 3 hours of an 8-hour workday.
Think about your regular day. How much of your time did you truly spend working on things that move your science forward?
On a typical workday, I spend time socializing with my colleagues, checking out my social media, watching shows or YouTube, and staring at my screen, not wanting to do work. Yet, I would be at work for over 10 hours, saying I worked 10 hours that day.
If you give in to grind culture and think you should work all the time, then you lose your motivation to do work. If completing work doesn't allow you to leave work sooner, what motivation do you have to complete work?
There are two principles that can help you become more productive by working less: Pareto's principle and Parkinson's law.
Pareto's principle states that 80% of your success comes from 20% of your effort. Therefore, in a typical workday, only about 20% of your time creates 80% of your success in science.
Parkinson's law states that work will expand to fill the time that it is allotted, meaning that if you give yourself 10 hours in a day to complete a task, it will likely take all 10 hours, even if it only really requires 2 hours of work.
If you apply both of these principles to your approach to work, you can work less and accomplish more by becoming a more efficient worker.
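As a rough back-of-the-envelope sketch of how the two principles combine (all numbers here are hypothetical, chosen only to illustrate the idea):

```python
# Hypothetical illustration of Pareto's principle + Parkinson's law.
# If ~20% of your time at work produces ~80% of your output, an
# 8-hour day contains surprisingly few truly high-impact hours.
def high_impact_hours(hours_at_work, effective_fraction=0.2):
    """Estimate the hours that drive most of your progress."""
    return hours_at_work * effective_fraction

# Parkinson's law says work expands to fill the time allotted, so
# shrinking the day mostly squeezes out filler, not the core work.
print(high_impact_hours(8))  # 1.6
```

The point is not the exact numbers, which are illustrative, but that capping your hours trims mostly low-impact time.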
While you may be on board about becoming an efficient worker, you may still wonder how to become more efficient. So let's go step-by-step through a system that I created for myself, which has proven to make me more productive while decreasing burnout.
Set Work Hours
The very first step of becoming efficient is to set your work hours. You should set your work hours based on your lifestyle and work requirements.
Do you want to work 8 hours and be off in time to make an exercise class? Then set your work hours to complete your workday in time for your class.
However, you also need to take into account the needs of your work. What times do you have meetings? When does your boss or supervisor expect you to be around?
The benefit of setting your work hours is that you are already combating Parkinson's law. You now have fewer overall hours that work can expand to fill. Additionally, you can regain motivation because you know that you need to finish your task by a specific time so that you can leave work accomplished.
Make To-Do and Not-To-Do Lists
Once you have your work hours, you need to concentrate your efforts on the things that are bringing you success in your science. The best way to focus is to create to-do lists and not-to-do lists.
First, think about all of the things that you genuinely need to do to make progress. If you think you need to do everything, ask yourself, "If I could only work 2 hours a day, what would I do?" Suddenly, your brain will flood with the most important things that need to be done for you to be productive in science. Write these things down to make your to-do list.
Now, make a list of at least three things that you do that waste your time, such as tasks that make you feel productive but don't result in actual progress. For many graduate students, I believe that reading scientific papers for the sake of reading them should be on your list. Reading papers should be done for a specific reason, not simply so that you can feel productive or say you read so many papers that week.
Personally, my not-to-do list includes checking my social media, checking my email, and watching shows during my day. Place your to-do and not-to-do lists somewhere where you can see them regularly.
Block Out Your Time
The third part of becoming more efficient is to block out your time. The essence of this idea is to prevent you from task switching multiple times and wasting time as you move from one task to another.
There are two ways that I like to block out my time. The first is to theme my days, and the second is to create time blocks.
If you have specific themes to your work, then it is nice to theme your days. For example, if you are a graduate student, you may have coursework, research, and teaching. On a day that you teach, make it a teaching day. Take the time during the day to grade assignments and plan for the next week's lesson. On a day that you attend multiple classes, take the free time you have to study and do homework. On days that you have research meetings or primarily free days, focus that time on research-related activities.
Themed days help you plan your day, keeping you focused and allowing you to make progress on one task all day long.
Time blocks allow you to work on a single task for 45-90 minutes. Maybe this task is a meeting, class, or writing a paper, but after your set work time, you have the margin to move from one task to another. The way I prefer to do this is 90 minutes of work with a 30-minute margin. However, depending on your schedule, a 45-minute block with a 15-minute margin may work better.
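As a sketch, here is what a day of such blocks might look like if you laid it out programmatically (the 9:00 start time and four blocks are placeholders; the 90/30 split matches my preference above):

```python
from datetime import datetime, timedelta

def time_blocks(start="09:00", blocks=4, work_min=90, margin_min=30):
    """Sketch a day of focused work blocks separated by margins.
    All parameters are adjustable -- a 45/15 split works too."""
    t = datetime.strptime(start, "%H:%M")
    schedule = []
    for i in range(1, blocks + 1):
        end = t + timedelta(minutes=work_min)
        schedule.append(f"Block {i}: {t:%H:%M}-{end:%H:%M} focused work")
        t = end + timedelta(minutes=margin_min)  # margin before next block
    return schedule

for line in time_blocks():
    print(line)
```

Four 90-minute blocks with 30-minute margins fill a 9:00-16:30 day, which is a useful sanity check that the schedule actually fits your set work hours.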
Overall, the idea that you need to work longer hours to be successful is not only a lie but counterproductive. Instead, by increasing your motivation and the efficiency of your work, you can become more successful while maintaining your personal life. To become more efficient, I employ a 3-step system:
Operation Warp Speed, launched in early 2020, helped speed up the pace of vaccine innovation, turning the normally 10+ year clinical trials process into one that takes less than a year. To learn more about the vaccine development process, check out this post by Nidhi.
Although vaccine development has been significantly accelerated, it is essential to understand that vaccine development has not been rushed. In fact, despite operating in a public health emergency (the COVID-19 pandemic), vaccine research has been thriving. This is thanks to scientific collaboration, funding, and a quick and thorough review process, allowing scientists across the globe to develop the COVID-19 vaccines in under a year.
In this article, we will discuss Johnson & Johnson's COVID-19 vaccine. The Johnson & Johnson (J&J) vaccine will likely be approved by the US Food and Drug Administration (FDA) for use by late February or early March. The J&J vaccine is different from other COVID-19 vaccines in that it only requires one dose. As such, it may be the saving grace to the seemingly slow and clunky vaccination rollout in various countries, including the United States.
Why might the J&J vaccine be the pandemic saving grace?
The J&J vaccine may be the next to receive authorization, after the Moderna and Pfizer vaccines. There are various advantages and disadvantages to this vaccine.
The biggest drawback of the J&J vaccine is that it has lower efficacy than the Moderna and Pfizer vaccines. To understand the science of vaccine efficacy better, check out Sheeva's post. More specifically, the J&J vaccine has an efficacy of 72% in the United States, 66% in Latin America, and 57% in South Africa. By contrast, Moderna's COVID-19 vaccine has an efficacy of 94.5%, and the Pfizer COVID-19 vaccine has an efficacy of 95%.
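As a simplified sketch of what an efficacy percentage means: it is the relative reduction in disease risk among vaccinated trial participants compared with the placebo group. (Real trial analyses also account for person-time at risk and confidence intervals, and the participant counts below are made up for illustration.)

```python
def vaccine_efficacy(attack_rate_vaccinated, attack_rate_placebo):
    """Relative risk reduction: 1 - (risk in vaccinated / risk in placebo)."""
    return 1 - attack_rate_vaccinated / attack_rate_placebo

# Hypothetical example: 5 of 1,000 vaccinated participants fall ill
# versus 25 of 1,000 in the placebo group.
print(round(vaccine_efficacy(5 / 1000, 25 / 1000) * 100, 1))  # 80.0
```

So "72% efficacy" means vaccinated trial participants were 72% less likely to develop disease than the placebo group, not that 28% of recipients will get sick.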
Despite the lower efficacy rate, the J&J vaccine remains quite promising. The vaccine requires only a single dose, significantly simplifying the logistics for local health departments and clinics. Additionally, the vaccine is stable in a refrigerator (36°F-46°F, or 2°C-8°C) for several months. By contrast, the Moderna and Pfizer vaccines require freezing at significantly lower temperatures of -4°F (-20°C) and -94°F (-70°C), respectively.
Johnson & Johnson's COVID-19 vaccine and the AdVac technology
The Johnson & Johnson vaccine in development (which is now seeking FDA approval in the United States) goes by two names -- JNJ-78436735 or Ad26.CoV2-S. The vaccine is developed by J&J's pharmaceutical arm, Janssen, using Johnson & Johnson's AdVac technology.
According to the Janssen website, AdVac technology is "based on development and production of adenovirus vectors (gene carriers)." The AdVac technology enables effective development of an adenovirus-based vaccine in response to emerging diseases, such as COVID-19, in a cost-effective and large-scale manner.
What is an adenovirus-based vaccine?
To explain what an adenovirus-based vaccine is, we first have to talk about the basics of viral vector vaccines. The Oxford/AstraZeneca and J&J vaccines are both viral vector immunizations, meaning that a harmless, non-replicating virus is used as a shuttle to deliver genetic material from the target virus into our cells.
Think of a viral vector vaccine as a "cut-and-paste" vaccine. Parts of one virus are cut and pasted into another to create a viral vector vaccine. An adenovirus-based vaccine uses part of an adenovirus as a shell, and a gene encoding part of another virus (such as the novel coronavirus) is placed inside that shell.
In both the Oxford and J&J viral vector vaccines, the gene encoding the coronavirus spike protein is pasted into a "hollow" shell of an adenovirus. J&J specifically uses an adenovirus strain named adenovirus 26 (Ad26). When the vaccine (i.e., the Ad26 shell carrying the spike protein gene) is administered, it invokes an immune response in the body. (Learn more about the spike protein here.)
After vaccination, our body can respond to the virus more effectively, greatly reducing the risk of disease. This happens through the quick and effective recruitment of immune cells and antibodies that prevent the virus from inducing COVID-19. To learn more about J&J's COVID-19 vaccine, check out Nidhi's post here. You can also learn more about the other top COVID-19 vaccines here.
Johnson & Johnson's FDA Emergency Use Authorization
On February 24, 2021, the FDA stated that J&J's single-shot COVID-19 vaccine would receive formal Emergency Use Authorization (EUA). The EUA is based on the efficacy and safety data from the company's Phase 3 trials. The J&J vaccine will be a pivotal step towards ending the pandemic thanks to its single-dose regimen and normal refrigeration requirements, rather than super-cold storage.
Yes, there are more similarities between solar cells and pizza than you might think.
I'm a solar energy researcher working towards eliminating the defects in and improving the performance of industrial solar cells. A PhD is a long journey full of untimely experiments and countless sleepless nights, so I often find myself eating while working (definitely not in the labs!) and working while eating. One day, intrigued by how delicious a cheese pizza is, I realized how alike the pizza and the cell samples I work with are.
Build your own Solar Cell
A solar cell is a device that generates electricity when the sun shines on it. A combination of these cells linked in sequence makes a solar panel that can generate significant power, and that is what you see on people's rooftops, in solar-powered streetlights, and in calculators. As a good pizza starts with a perfect dough base, solar cells begin with a very pure silicon wafer (silicon is also the second most abundant element in the Earth's crust), scientifically called a 'base.' Small amounts of elements like boron or phosphorus are then added to this silicon base to make it more conductive.
Then come the toppings. Yes, both for the pizza and the cells! Pizzas are loaded with a bunch of toppings for various flavors; solar cells are likewise coated with some very thin layers that help enhance their performance. These layers are called 'dielectrics.' They reduce reflection off the surface to increase light absorption, passivate some surface defects, and offer some hidden benefits for the base. The most common dielectric used in the industry, silicon nitride, is also responsible for the blue color you see on most solar panels (a silicon wafer is otherwise grey!).
We all know that pizza does not taste great after sitting in the fridge for a week. A similar degradation occurs in most solar cells. Once the panels are installed and are out in the sun, their performance degrades over the first few years (anywhere from 2-10% in relative terms). And while losing a slice or two of a pizza might not make a dent in your pocket, this degradation is responsible for losses of billions of dollars every year. We call it 'light-induced degradation' (or LID).
Light-Induced degradation (Staleness)
LID is a family of defects that occur in the presence of light; technically speaking, though, the resulting charge carriers are responsible, not the light itself. This degradation is not a new phenomenon, and researchers have been working on understanding and solving it for years. The good news is that one of the most common defects responsible for LID has now been nearly solved in most panels worldwide. Unfortunately, we now have a new variant of LID in all kinds of panels, one that only occurs at high temperatures under light: 'light- and elevated-temperature-induced degradation,' or LeTID. More sunlight is essential for higher electricity generation from solar panels, but higher temperatures are detrimental (we only need the light, not the heat, for solar electricity generation).
This new kind of degradation is a focus of numerous researchers globally, including me. In my research, I work on mitigating this degradation by simply playing with the dielectrics (after all, it is all about the toppings, right?).
Firstly, we have found that reducing the thickness of the dielectrics can significantly mitigate this degradation (1). You can imagine how applying less tomato sauce can prevent the pizza from going soggy. Reducing the thickness means using less material and thus lower costs. However, there is a threshold beyond which reducing the thickness might lead to other kinds of losses.
Secondly, we found that the placement of the dielectrics plays a vital role in the extent of potential degradation (2). By studying multiple industrial cells, we observed that adding a very thin layer of a second dielectric can strongly modulate the degradation: it creates a barrier layer between the first dielectric (silicon nitride) and the silicon base. A third solution relates to the silicon wafer thickness (3): thinning the wafers markedly reduced the degradation. Similar to how a thin-crust pizza can help prevent you from gaining extra calories if you are on a diet!
These three solutions effectively alleviate the degradation in current solar cells without increasing their manufacturing cost. With solar installations progressing at record levels each year, the mitigation of these defects will accelerate the transition to a cleaner world. So, we can leave the next generations with tastier pizzas and a healthier planet!
1. U. Varshney, M. Abbott, A. Ciesla, D. Chen, S. Liu, C. Sen, M. Kim, S. Wenham, B. Hoex, and C. Chan, "Evaluating the Impact of SiNx Thickness on Lifetime Degradation in Silicon," IEEE J. Photovoltaics, vol. 9, no. 3, pp. 601–607, 2019.
2. U. Varshney, C. Chan, B. Hoex, B. Hallam, P. Hamer, A. Ciesla, D. Chen, S. Liu, C. Sen, A. Samadi, and M. Abbott, "Controlling Light- And Elevated-Temperature-Induced Degradation with Thin Film Barrier Layers," IEEE J. Photovoltaics, vol. 10, no. 1, pp. 19–27, 2020.
3. U. Varshney, M. Kim, M. U. Khan, P. Hamer, C. Chan, M. Abbott, and B. Hoex, "Impact of Substrate Thickness on the Degradation in Multicrystalline Silicon," IEEE J. Photovoltaics, vol. 11, no. 1, pp. 65–72, 2020.
If I had to go back in time and give myself advice, I’d tell myself to be cautious of advice. Advice isn’t necessarily good or bad, but it’s often misguided or the wrong fit. In your scientific career, especially early on, it’s tempting to trust all the guidance tossed your way. More experienced scientists should know better than you, right? Not necessarily.
Here’s some advice I received and learned — through experience — to disregard.
An upside to academia is the freedom to make your own choices. But with freedom comes uncertainty. Grad school and science careers are challenging to navigate. Suitable, appropriate guidance will help you through, while erroneous, biased advice can hold you back. Practice healthy skepticism, and in the end, always choose what's best for you.
Have you been given extraordinarily ill-fitting advice as a scientist? If so, tell us in the comment section below! Or, tweet at us, @BoldedScience, #BadAdvice.
Entrepreneurship offers a unique opportunity to continue exploring and researching while learning new skills and tackling challenges that will dramatically enhance your career. However, it is paramount to recognize the stark differences between life in academia vs. life as an entrepreneur. Here are some examples of approaches that may need to change as you embark on your entrepreneurial journey.
As scientists, we like to gather as much data as possible before making a decision. Unfortunately, this isn’t possible in entrepreneurship. You will need to adapt and become comfortable making tough critical decisions with only 50% of the information.
Presentations & Discussions
Whenever I read a paper, attend a conference lecture, or give an academic presentation, the same setup is used: background, rationale, results, discussion, and finally, materials/methods. In the business world, the goal is to share the most important information in a short amount of time. You are expected to concisely explain the problem you are solving, your solution, and how you will achieve results.
Scope of Work
In research, we have our primary project, and we immerse ourselves in that topic, working days, weeks, and months with an intense singular focus. In contrast, entrepreneurs will need to maximize their limited time by conducting multiple initiatives concurrently. Learning how to effectively switch context between subjects to solve problems is a skill that will help you tremendously. One minute you may be engineering and the next you may have to close a sales deal.
While in academia, there are a few things you can do to prepare yourself for this transition.
1. Contribute to lab members’ projects or collaborate with other labs.
The reason I'd advocate for this is that it forces you to work with other people and groups. Academia can often mean working solo, which prevents you from learning the critical soft skills needed to succeed in a team-based environment where there are multiple chefs in the kitchen. In entrepreneurship, great teamwork and effective communication can make the difference between success and failure.
2. Leverage your network and ask questions
If you’re looking to build a business in the life-sciences sector, you are in the prime spot to do some target market research. Reach into your network, speak with your colleagues, and investigate your business problem. There is no better time to do this. Scientists are far more likely to answer your questions as a graduate student than when you cold-call them as an entrepreneur.
3. Reading literature or books about building start-ups is helpful, but the best learning comes from experience. Find a start-up in the industry where you want to build your business and work there part-time. Listen, learn, and observe. Pick up on how teams self-organize, how company culture develops, and keep an eye out for solid work practices. How are meetings structured to maximize efficiency? How are large teams coordinating their work? How are company leaders communicating with each other? When it's time to break out on your own, you'll have a point of reference for how processes work. Stitch together practices that worked well, and learn from the mistakes you observed during this experience.
For those in academia who have already begun their transition into entrepreneurship, there are two important considerations you should make before diving in.
Intellectual Property (IP)
The very first thing you need to do is read through your institution’s intellectual property (IP) agreement to find a clause that details who really owns the IP.
Most institutions stipulate that the IP generated within the scope of your employment belongs to the institution. Here are some things to consider:
Feasibility of Entrepreneurship
Secondly, you should look to sort out the logistics of the transition. Prior to your capital raise, you'll be responsible for the business's operating costs and your own living expenses. All businesses are different, but I'd recommend preparing for 8-12 months with no income.
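A minimal sketch of that runway math, with hypothetical numbers (your own burn rate and buffer will differ):

```python
def runway_needed(monthly_burn, months=12, buffer=0.1):
    """Hypothetical pre-funding runway: total burn plus a safety buffer.
    monthly_burn combines business operating costs and living expenses."""
    return round(monthly_burn * months * (1 + buffer), 2)

# e.g. $4,000/month combined burn for 12 months with a 10% buffer:
print(runway_needed(4000))  # 52800.0
```

Working backwards from a number like this makes the "8-12 months with no income" guideline concrete before you commit.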
Transferable skills and interests
In academia, we are passionate about our research and motivated to contribute to the scientific community. This is one of the greatest things about being a scientist. But there is more than one way to make an impact. Entrepreneurship in the life sciences provides a unique opportunity for us to solve problems that the community faces while staying close to our research roots. At BioBox, our team remains deeply connected to our academic origins and is committed to solving the challenges that we faced while in academia. The days are long and the pressure is high, but it is a feeling we grew used to during our time grinding in the lab, spending countless hours trying to get our experiments to work.
One of the best things you learn in academia is the importance of self-sufficiency. In your research project, you are most likely the single champion and key driver of that project. Your years of work in a self-directed and independent environment prepare you very well for the challenges you will face when building your own business.
In many ways, academia is an excellent training ground for entrepreneurship. For those who are considering breaking out and building their own business, I can assure you that it will be one of the most rewarding experiences you’ll have in your career.
A wave of controversy and outrage followed the recent publication of a study by AlShebli et al. from New York University Abu Dhabi on November 17th. In their Nature Communications paper, the authors analyzed 3 million mentor–protégé pairs to assess the impact of mentorship quality on the future scientific careers of protégés. In addition, the gender of both mentors and mentees was analyzed as a potential factor affecting the quality of such mentorships.
The study found that increasing the number of female mentors was associated with a reduction in post-mentorship impact (fewer articles published) by female mentees. Likewise, a decrease in citations of the papers published by female–female mentor pairs was reported during the mentorship period. The paper concluded that 'opposite-gender mentorship may actually increase the impact of women who pursue a scientific career and that current diversity policies should be revisited.'
Shortly after its release, a myriad of scientists from diverse backgrounds expressed their concerns and disagreements on social media about the interpretation of the data, claiming the paper amplified the biases that female academics already face. Unsurprisingly, two days later, the journal indicated the paper was under investigation. Finally, a bit over a month after its publication, the paper was retracted on December 21st. The authors attributed the retraction to issues in the validation of 'key measures' identified by the reviewers. The journal added that this experience reinforced its commitment to equity and inclusion in research and that, in support, it will be launching further initiatives to support female scientists.
Despite the study's relatively large size, most of the criticism was directed at its methodology and criteria, with emphasis on two aspects in particular: the use of co-authorship as a measure of informal mentorship and the use of scientific publications (or citations of those publications) as a metric for success.
The other side of the equation
Since its release, several articles have collected the opinions of experts and established scientists in the field. But what about the other side of the equation: the mentees and protégés? To gain insight into junior scientists' experiences and opinions on the impact of mentorship on their careers, I interviewed researchers and professionals in the early stages of their careers.
Pina Knauff, Dr. rer. nat. / Life Sciences
Pina is a postdoctoral researcher in the field of molecular neurogenetics at Charité-Universitätsmedizin Berlin in Germany. She has been active in the field for nine years, during which she has had five mentors, two of them women. When asked about the differences between having a male mentor and a female mentor, she explained that, in her experience, her female mentors were more empathetic and more invested in providing guidance. Dr. Knauff has published five articles throughout her career, two of which were published under female mentorships.
Edna Gómez Fernández, PhD / Economics
Edna is currently a freelance consultant. She completed her doctoral degree in social sciences at the University of Arizona, focusing on government, public policy, and economics. Before leaving academia, she was active for 12 years. Throughout this time, she had only one female mentor out of seven. When choosing her mentors, gender was not a determining factor for Edna. She believes it is more important to choose a mentor who is an expert in the field and who aims for efficient, professional communication. As she did not publish under the supervision of a female mentor, Edna believes that the publishing process can benefit from having a male mentor. However, she also said that her last mentorship experiences - all of them with male mentors - influenced her decision to leave academia.
Ethiraj Ravindran, PhD / Life Sciences
Ethiraj is a postdoctoral researcher also working at Charité-Universitätsmedizin Berlin. He has been active for 12 years in the field of clinical neuroscience. In his experience, female mentors take their roles more seriously and also consider their mentees' personal growth. He has felt more encouraged by his current female mentor and confirms that this experience played a significant role in his decision to continue in academia after his Ph.D.
Lucía Trías, MSc / Design
Lucía is an independent design researcher. She recently completed her Master's degree at Anhalt University in Dessau, Germany. During her 11-year career, her first mentor - a woman - played a major role in Lucía's decision to pursue a further degree in academia. She thinks that in her area of research, decolonial design, having a male mentor can be detrimental, as the female perspective mostly shapes the field. 'In my experience, my female mentors exhibited more openness and a broader capacity to listen to critical thoughts without taking things personal,' she added. 'Accepting mistakes was also something that my male mentors were not good at, and I felt this had something to do with arrogance and a feeling of superiority in their position.'
Success in academia: what matters the most?
Conversations about which factors define a successful academic path have long been on the table. Traditionally, academic success is measured in the form of research performance. Yet career success has two components: objective and subjective. Objective success is determined by the system; it is measurable and therefore more public. Subjective success is personal; it reflects one's own sense of one's career. A recent study by K. A. Sutherland found that the notion of personal success is changing among early-career academics, who report factors such as contributing to society or influencing students' lives as better metrics of success.
In this regard, the interviewees also agreed that the number of publications or citations should not be used to quantify success. Assessing the quality of mentorship based on the end product prioritizes the objective component of success. By doing this, AlShebli et al. neglected various other factors related to subjective success, including the current climate in academia, which strongly advocates for healthier working environments in which the 'publish or perish' vicious circle holds little appeal for the upcoming generation of researchers.
Furthermore, success is often defined differently by men compared to women. Evidence also indicates that male scientists cite themselves more often than women. These, coupled with the reality that women leave academia earlier than men, contribute to the discrepancies in the number of publications between genders. Finally, 'the publishing process can be sometimes shifted by the journals' interest, and this should not diminish the quality of the submitted work' says Dr. Ravindran.
If correctly interpreted, this study could have contributed to the ongoing call for career development strategies that acknowledge the predominant patriarchal environment. Instead, it crashed and burned, demonstrating both the importance of drawing proper conclusions and the community's readiness to immediately reject anything that perpetuates the systemic biases against women in STEM.
“What made you want to be a Scientist?”
This question always takes me aback. I have been in science so long it’s hard to pinpoint that exact moment where I went from not-a-scientist to the path I’m on now. I was always good at science… but that didn’t make me want to pursue a career in it. After sitting on the question for some time, I realise there isn’t a what that made me want to pursue science, but rather, a who.
Travel back to 2014. My hair is long, my experience isn't, I'm doing my undergraduate degree in biology, and I am lost. My studies had killed my love for the subject. I had such a miserable time that I didn't see myself staying in science at all. Monthly career fairs on the university lawn showcased non-science-specific careers: "train for another three years to become an accountant!" Was this all my hard work and studying could give me? I felt lost, unsure of where my degree could take me. In a bid to work out what I wanted to do, I secured a placement in a lab at University College London (UCL), researching the effects of diet on ageing in fruit flies. I (in my naive mind) thought that all I needed was a taste of lab work, and the answer to my uncertainties would magically reveal itself.
Without a shadow of a doubt, my placement at UCL changed the course of my career. However, this wasn't because of the technical skills I gained or how good it looked on my CV. Rather, it was the guidance and mentorship I didn't know I was missing. During my placement, I worked under a post-doc named Adam. He took the time to explain the area of science to me (on many occasions, more than once). In the lab, he taught me techniques, and before long, I was proficient enough to collect data for his project. We had frequent discussions about the results and what they meant for the area of research, but most importantly, he gave me the confidence to speak freely and ask questions.
If I had to identify when my path shifted to the science-trajectory, I could pinpoint the exact moment; a conversation I had with Adam one day during my placement. Curious about our methodology, I asked, “How come we only use female flies in these experiments?” “Because the field assumes that males don’t respond in the same way,” he replied. I pondered on that for a bit, “Is there any evidence of this? Has anyone shown it?” “No, but you can be the first!” And that was the birth of My First Project . Prior to that placement, I had hardly been entrusted with a pipette, let alone an entire project. The independence empowered me in an indescribable way.
I spent the whole summer working on my project – discerning the differences between males and females in response to changes in diet. There was no feeling greater than having a tray full of vials labelled with my name. All experimental work I had done before that had been pre-arranged university practicals. I had never had anything with my name on it! Here I was, being trusted with a full tray!
The following weeks flew by and were the most enjoyable working days I had experienced. I caught the train to London every day with energy and enthusiasm. I would get to work setting up my experimental flies, doing dissections, imaging slides on expensive microscopes, and analysing my data. Adam and I would meet frequently. He would help me with my analysis and truly catapulted my love of programming and data visualisation with R (something I now use every day in my PhD). He would ask questions to challenge me, and we’d have thought-experiments over the implications of my results. After a couple of months, it looked like I had meaningful results!
But most importantly, I was often praised and told I was doing a good job, not just by Adam but by many members of the lab. In the two years of my undergraduate degree up to that point, I had only been told I wasn't doing enough; this was a significantly positive experience for my confidence and mental health. Adam's and the lab's support also extended far beyond my project; we had many open conversations about my next steps and further education. It was Adam who not only suggested I pursue a Masters but also told me I'd be good at it.
Now, here I am, seven years later, in the final year of my PhD. I may be far away from flies, but I still experience the impact of Adam’s support. He enabled me to see my love and passion for science, to realise I actually have something to offer. At the time, I didn’t realise how much I needed a good mentor to provide inspiration, guidance, and support (both technical and mental).
As I reflect, I can identify the characteristics that I feel make a good mentor:
Patience: Those earlier in their career have less experience and need support.
Two-way Conversation: Exchanging what you both expect from the relationship and what you continually need.
Sharing Experience: Openness and honesty allow others to learn from your mistakes.
Giving Credit & Praise: We are all humans and thrive off encouragement.
Enthusiasm: It’s contagious!
Trust: A mentor is not a babysitter.
It is important to note that while some of these points are universal (I think every mentor should be a good communicator), some are specific to me. Others may require something else from mentorship; some trainees need motivation, reality checks, and so on. This is why honest and frequent conversation about your and your mentor's needs and expectations is vital for the relationship to flourish.
Later in my scientific career, I have been fortunate enough to mentor two lab-placement students, in addition to dozens of academic students through my job as a tutor. I have utilised everything I learnt from Adam, and other great mentors, to give the best support and guidance I can. At the time of writing, both lab-placement students have decided against pursuing lab-based careers, but this makes no difference to me. A successful mentorship doesn't result in training the next you; from a successful mentorship emerge two people who feel they have grown and developed from the experience.
Don’t get caught thinking that it’s just lab supervisors or training directors who can take on this role. We all have the opportunity to be mentors. You may not think it – but there are those out there for whom your experience is exactly what they are missing.
In less than one year, scientists created not one, not two, but three vaccines with over 90% efficacy against Covid-19. Which begs the question: what gives? Why don't we have cures for all our other ailments? After spending billions of dollars on cancer research for decades, where is that cure? Regrettably, there is none. And it's not in a vault protected by the government and big pharma, either. The reality is, cancer is extremely hard to treat, study, and understand.
Infectious disease vs. cancer
Infectious diseases are easy targets compared to cancer. Bacteria have a unique cell wall that animal cells lack. Some antibacterial medications attack their membranes and cell-wall synthesis, killing bacteria while doing minimal harm to our own cells. Bacterial enzymes differ from animal enzymes, too. Newer antibacterials inhibit bacterial proteins while leaving our enzymes unperturbed. Viruses are also easily distinguished from our cells. They produce their own distinct proteins and have unique genes. We use vaccines to train our immune system to be alerted to their "foreign" presence.
But cancer cells? They're a bit trickier than bacteria and viruses.
What is cancer?
Cancer is a mass of our own cells that grow uncontrollably, avoid cell death, and have the annoying habit of acquiring genetic mutations that enable unwanted behavioral adaptations. Ironically, killing a cancer cell is easy; we do it in test tubes all the time. The challenge is killing the cancer cells without killing our healthy cells. Unlike bacteria and viruses, cancer cells have our genes, our proteins, and our molecular machinery. So how do we kill cancer cells inside a human body?
The answer lies in the research. Through years and years of experimental research, we are learning what makes cancer cells tick. By figuring out how cancer cells behave differently, anti-cancer drugs can be designed that do not harm non-cancer cells.
What makes a cancer cell?
It all comes down to abnormalities in the DNA. Genetic mutations can be inherited from your parents, acquired over time through general wear and tear, or result from environmental stress (cigarette smoke, UV, etc.).
We have an increased susceptibility to cancer when an oncogene or tumor suppressor is mutated. Oncogenes are genes that contribute to cancer if they are over-expressed or hyper-activated, e.g., epidermal growth factor receptor (EGFR). Tumor suppressors are genes that, if turned off, contribute to tumorigenesis. The most famous tumor suppressor is p53, a protein that signals cells to die. When p53 is weakened or inactivated, cells don't die when they are injured or malfunctioning.
Having one mutation in an oncogene or a tumor suppressor may increase the probability of getting cancer, but it won't cause cancer on its own. An estimated 2-6 cancer-related genes must be mutated for tumorigenesis, and further mutations can contribute to disease severity.
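To get a feel for why requiring several mutations matters, here is a toy back-of-the-envelope sketch of the "multi-hit" idea. The per-gene mutation probability and the body's cell count below are illustrative assumptions, not measured values; the point is only how quickly the odds shrink as more hits are required in the same cell lineage.

```python
# Toy "multi-hit" sketch: the chance that one cell lineage accumulates
# several independent driver mutations shrinks geometrically with the
# number of required hits. Both numbers are illustrative assumptions.
p = 1e-7      # assumed lifetime chance a given driver gene mutates in a lineage
cells = 3e13  # rough order-of-magnitude count of cells in a human body

for hits in range(1, 7):
    per_lineage = p ** hits          # all `hits` drivers arise in one lineage
    expected = per_lineage * cells   # expected number of affected lineages
    print(f"{hits} required hit(s): expected affected lineages = {expected:.1e}")
```

With these toy numbers, a single hit would be expected in millions of lineages across the body, while two or more hits in the same lineage are already rare over a lifetime, which is consistent with the idea that a single mutation raises risk but several cancer-related genes must be mutated before a tumor forms.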
Barriers to treating cancer
Cancer is a heterogeneous disease. When we refer to cancer, we are not describing one disease but a family of hundreds of diseases. A treatment that works well for breast cancer will not necessarily work for pancreatic cancer. To complicate matters further, each tumor can be characterized by its unique genotype, meaning that not every patient will respond to treatment similarly.
Cancer cells mutate and adapt. After wrapping your head around the infinite permutations of cancer types and genotypes, also consider that an individual tumor may be heterogeneous as well. Cancer cells divide so quickly in stressful environments that they often incorporate mutations during DNA replication. Mutagenesis creates subpopulations within a tumor, causing issues such as metastasis and chemotherapy resistance.
The difference is subtle. As mentioned, a drug designed to kill cancer cells shouldn't poison the rest of our body. But if the difference between cancer cells and our healthy cells is only 2-6 genetic mutations, designing a specific drug is no trivial task. Currently, most chemotherapies are designed with one of the following anti-cancer strategies:
Modeling cancer. Early cancer research takes place in test tubes. Protein function and structure are investigated in solution, while cancer cells are manipulated in dishes. When in vitro studies seem promising, experimentation moves on to mouse models. Unfortunately, what works well in a test tube often does not work well in the body. And sometimes, a drug that works marvelously in a mouse is toxic to humans. Therefore, preclinical studies are tweaked and repeated extensively to ensure that clinical studies are as safe as possible. Computational models can help bridge the gap between animal testing and human testing, but with limitations. On average, it takes 30 years from drug design to clinical trial approval. And as careful as scientists are during those 30 years, cancer clinical trials have the highest failure rate. Most trials fail due to adverse side effects (i.e., the drug also attacks our healthy cells).
So, will there ever be a cure?
To answer that question, we'd have to define the word "cure." A cure is a gold-standard treatment that eradicates or completely alleviates a disease. To date, only two diseases have been successfully eradicated from the earth: smallpox and rinderpest (both viruses that were wiped out by vaccination — yay, vaccines!).
An end-all cure for cancer is likely not in our near future. But wait! Before you close out of this blog post, annoyed by my pessimism, I have a message of hope for you. Although "cures" are not on our immediate horizon, adequate treatment and prevention are already here, with more to come. Let's look at breast cancer as an example. In 1971, the 5-year survival rate across all breast cancers was 53%. Today, the 5-year survival rate is nearing 90%! Encouraging numbers, to say the least; these stats reflect the advancements made in cancer research.
Cancer researchers have chosen an arduous career, often riddled with disappointment and setbacks. Although earth-shattering, groundbreaking discoveries are preferred, in science it's often the small victories that culminate in clinically relevant treatments. Cancer research may seem slow, but it is undoubtedly progressing.