A post in collaboration with The Shared Microscope
Caution: Smoking tobacco is harmful to health and, as most of you know, is carcinogenic. This article in no way encourages the use of tobacco as protection against COVID-19.
In response to the COVID-19 pandemic, several drug companies, including AstraZeneca, Moderna, Pfizer, Johnson & Johnson, Sinovac, and Novavax, have successfully created vaccines that are currently being deployed to individuals worldwide. Medicago and GSK have also joined forces to develop a unique vaccine against COVID-19.
Medicago, a biopharmaceutical company headquartered in Canada, focuses on developing plant-based therapeutics in response to global health challenges, COVID-19 being no exception. In response to the pandemic, Medicago (alongside GSK) developed a COVID-19 vaccine unlike any of the others recently developed against the disease. So what's special about the vaccine? Interestingly, this is the first COVID-19 vaccine that is entirely plant-based. Yep, you heard that right: made in plants! Tobacco plants (ish)!
What plant is the Medicago COVID-19 vaccine made in?
Medicago's vaccine is made from a close relative of the tobacco plant, Nicotiana benthamiana. Ironically, tobacco has plagued humans with lung conditions ranging from COPD to lung cancer for decades. Surprisingly, this notorious plant has been critical in manufacturing vaccines against COVID-19-related pneumonia in the past year.
Plant-based vaccines are cheap to produce, safe, and allow rapid development of vaccine ingredients such as coronavirus-like particles. Plant-based vaccines are also highly scalable and give manufacturers increased flexibility. As highlighted by the peer-reviewed paper linked here, Nicotiana benthamiana is the core production host for various companies aiming to further human medicine, including Medicago, Icon Genetics, iBio, and Lead Expression Systems.
Nicotiana benthamiana is often the production host of choice because it has a defective plant immune system, making it easy to produce new vaccine ingredients. Research suggests that the plant sacrificed its defensive system in favor of a hastened reproduction cycle, a strategy that enabled it to cope with the severe droughts of central Australia. This susceptibility to infection allows the plant to quickly undergo genetic transformation and transient gene expression, making it an excellent mini-factory for protein production.
How are plants used in vaccine development?
For over a decade, Medicago has been developing a technology that harnesses the power of plants to develop vaccines. More specifically, using a bacterium, the plant is "fed" information about a virus to produce the main ingredient of a vaccine (also known as the active ingredient).
The plant is programmed to produce the vaccine. To do this, Medicago only needs the virus's genetic sequence rather than the virus itself. The genes are introduced to the plant via a bacterium, and the plant's normal machinery then produces the vaccine ingredient through a natural process within the tobacco plant. The plant can produce the vaccine's active ingredient in approximately one week from the initial introduction of the genetic material. The active ingredient is then harvested and purified for use in a vaccine candidate, which can be produced within 5 to 6 weeks.
For the coronavirus vaccine in development, Medicago focuses on producing Coronavirus-Like Particles that mimic the virus's structure. Please note: although the Coronavirus-Like Particles are structurally identical to the SARS-CoV-2 virus (which causes COVID-19), they do not contain any of the genetic material of the virus, and therefore, are unable to cause infection.
To learn more about the COVID-19 vaccine in development by Medicago, feel free to check out the following video:
However, the production of the vaccine candidate is only half the battle won: the vaccine candidate then has to pass various safety and efficacy tests before the vaccine can be commercialized for use in humans. This testing process is further explained in an article linked here.
Tell me more about the Coronavirus-Like Particle Technology
Virus-like particles, such as the coronavirus-like particles manufactured by Medicago, are structurally identical to wild-type viruses. However, they lack the genetic material inside the virus. Because of this, the virus-like particles are unable to replicate or cause infection in the vaccinated individual.
Below is an image of Medicago’s CoVLP compared to an image of a wild-type SARS-CoV-2 virus:
It can be argued that vaccines produced in plants are faster and more accurate because no manipulation of the virus is required. The vaccine development also does not require that viruses be handled in the laboratory. Research so far suggests that virus-like particles have had an equivalent or superior immune response in mice when compared with live viruses.
Unlike the mutations that can sometimes arise during traditional vaccine manufacturing, virus-like particles are structurally stable and cannot mutate. The virus-like particles can elicit an immune response via antigen-presenting cells (a type of white blood cell) found in the human body, leading to a robust immune response in the vaccinated individual.
The virus-like particles essentially fool the immune system: because they look like the disease-causing virus, they trick the immune system into making antibodies that will protect against the actual pathogen if the individual is ever naturally exposed to it.
Isn't Medicago working with GSK for this vaccine?
Yes, Medicago has joined hands with GSK for the development of their COVID-19 vaccine. As part of the partnership, Medicago manufactures the active ingredient used in the vaccine, while GSK provides its pandemic vaccine adjuvant system.
What is the importance of the adjuvant? The adjuvant plays a vital role in a pandemic, as it did during the 2009 flu pandemic. A pandemic adjuvant reduces the amount of vaccine protein required per dose, allowing more vaccine doses to be produced overall. In other words, the Medicago vaccine can be "diluted" using GSK's pandemic adjuvant, which can help to protect more people overall. Far from weakening the vaccine, this "dilution" with the adjuvant enhances the immune response and provides long-lasting immunity against infection.
Is the Medicago vaccine vegan?
This seems like an apt question here. Is the Medicago COVID-19 vaccine really vegan? Medicago has not directly responded to this question (yet). But one thing is clear: the active ingredient of the vaccine, the coronavirus-like particle, is not of animal origin. More information is required about all the other ingredients used in the vaccine before we can say with certainty whether or not the vaccine is vegan.
What's the current situation?
The Canadian company, alongside GSK and Philip Morris, recently reported promising results from their Phase I and II clinical trials. The vaccine is now in the final phase of human trials. To learn more about the vaccine development process, feel free to check out this article here.
The company has also reached an agreement with the Canadian government to accelerate its COVID-19 vaccine candidate efforts: pending the results of the Phase III trials, the government has agreed to purchase 76 million doses of the COVID-19 vaccine for use within Canada.
Carpe Fiscus! The time is ripe to stimulate the "pathway to independence" for early career researchers.
In August 2017, faced with the increasing probability of drastic budget cuts under the Trump administration, the NSF Directorate of Biological Sciences announced it would no longer be funding its long-praised Doctoral Dissertation Improvement Grants (DDIGs). This decision marked a turning point in funding opportunities for graduate students, who are particularly vulnerable to a lack of grant options. As a graduate student at the time, I wrote about the decision and its ramifications for myself and my peers, citing the NSF's choice to slash the program as a brand of "trickle-down" academics. I noted that the decision to cut DDIGs was worrisome for early-career researchers, heralding a further consolidation of academic power at the very top levels of the hierarchy and diminishing agency for already-vulnerable trainees.
Just under four years later, the world and I have both moved on – I passed my dissertation defense and began a new chapter in my academic training. The US handed the reins of power over to a markedly different administration, one constantly challenged by the lingering watermark of its inherently anti-science predecessor. With this country-wide transition and the eyes of the world increasingly on academic research during a global pandemic, we sit at the cusp of an incredible opportunity to push funding opportunities for early-career researchers further than ever before.
Funding for Early Career Researchers
Most graduate students in STEM are funded through a combination of research and teaching assistantships, the money for which comes from grant and institutional funding, respectively. Grant funds are often awarded directly to the student's principal investigator or PI — the person directly responsible for mentoring and supervising graduate students. Teaching assistantships are often seen as less desirable by the students and their PIs, as teaching takes time away from research. Therefore, research assistantships are prized but depend entirely on the PI for funding and tend to be fairly restricted in subject matter. The PI faces pressure to publish and present on the experiments laid out in the original proposal as this evidence of success is instrumental in future funding decisions; there is usually little room for creativity or independence from the grad student.
Funding opportunities for postdoctoral fellows are often similarly awarded to the PI rather than the fellows themselves, with fairly rare exceptions. The NSF has a single program aimed solely at biology postdoctoral fellows called the postdoctoral research fellowship in biology (or the PRFB), while the NIH, the other major funder of foundational research in the US, has several opportunities geared directly at postdocs, including their F32, K25, and K99/R00 awards. Competition for these awards is fierce. It is much more common for postdocs to be funded as a component of grants awarded to established PIs, leaving little opportunity for postdocs to control the direction of their research. Instead, they remain bound to research that can directly tie into the goals of the grant they are funded under – goals which they may have had no hand in setting.
Rightly or wrongly (and it would be the topic of a whole other post to unpack whether it's right or wrong), we predominantly train our graduate students and postdocs to be future PIs. PIs must be able to independently find funding opportunities, generate innovative research proposals, and follow through on those proposals' aims. These steps all involve thinking creatively, responding to unpredictable events, and adjusting accordingly. While funding decisions are based strongly on publication record, they also depend heavily on a proven track record of securing independent funding. Depriving our early-career trainees of opportunities to establish a funding track record makes their professional and academic journeys much harder. It takes away their agency, leaving them more reliant on their PI.
Hopefully, if you've read this far and are still invested in reading further, I've convinced you that funding opportunities for early-career researchers should be expanded. What can we all do to make sure that this expansion happens? Well, the answer will likely depend on who you are and your specific role within publicly-funded research.
For fellow academics, especially those further along in their careers:
While most people are aware of the threats posed by climate change, few know just how drastic those threats are to biodiversity. According to the World Wildlife Fund (WWF), the Earth loses roughly 10,000 species every year, a rate roughly 5,000 times higher than the natural extinction rate.
While zoos are an effective way to house endangered or threatened species, the reproductive biology of these animals is largely unexplored but is becoming increasingly important for species conservation. Two of the most pressing issues facing zoos today are space and lack of genetic diversity. Even when zoos are well-managed and internationally connected, zoo populations rarely contain large enough animal populations for long-term sustainability. Moreover, when new animals are brought in to revitalise captive population genetics, the logistics of moving animals between zoos can be extremely challenging (imagine the logistics and costs of moving an elephant or giraffe from New Zealand to New York). This is where assisted reproduction can play a significant role.
What is assisted reproduction?
Broadly speaking, assisted reproduction involves managing an animal's reproductive cycle or manipulating gametes to achieve fertilisation and a subsequent pregnancy or live birth. Some of the most common assisted reproductive techniques in our arsenal are gamete cryopreservation, artificial insemination (AI), and in-vitro fertilisation (IVF). Assisted reproductive techniques have become so well defined in humans that, since the birth of the world's first IVF baby in 1978, around 8 million children have been born from assisted reproductive techniques globally. They have also become so commonplace in laboratory rodents and farm species that we often forget the incredible difficulty of defining the fundamentals of a novel species' reproductive biology. Unfortunately, this is exactly the case with many endangered or threatened species. Even artificial insemination, one of the more basic assisted reproductive techniques, requires an in-depth understanding of male and female reproductive physiology before we can even think of making an attempt. Although daunting, once even simple techniques like AI or reproductive cycle management are defined, assisted reproductive techniques can be incredibly useful in supporting captive breeding efforts.
As I mentioned earlier, transporting some animals between zoos (let alone continents) is extremely challenging. Sperm cryopreservation is an effective alternative for many species: semen is collected, either voluntarily or through electroejaculation, and frozen without dramatically affecting sperm viability. Similarly, even cells from wild individuals can be collected, frozen, and used in captive breeding programs. Cells frozen correctly can (in theory) remain viable forever and be shipped around the world far more cheaply and simply than an entire animal. Several institutes around Australia, including the Taronga Conservation Society and Monash University, have adopted this idea and established the futuristic concept of a 'frozen zoo.' Frozen zoos store cells from endangered animals and plants in liquid nitrogen until they're needed for future genetic reintroduction programs in captive or wild populations, through techniques such as artificial insemination or IVF.
I think it needs to be clearly stated that assisted reproductive techniques never intend to (or I think ever will) replace captive breeding. Assisted reproductive techniques are tools that scientists, conservationists, and zoo staff can use to more effectively increase captive animal numbers without replacing traditional breeding methods.
Have frozen zoos and assisted reproductive techniques been useful before?
In practice, assisted reproductive techniques are rarely used in captive settings due to their technical complexity and perceived costs. However, assisted reproduction continues to make headlines in the scientific literature and the media, including artificial insemination in giant pandas and jaguars, cryopreservation in coral and fish species, and, most recently, the cloning of black-footed ferrets from cells frozen over 30 years ago. While it may seem drastic to start cloning rhinos or freezing sperm from lions, climate change poses incredible threats to biodiversity, which we are doing a terrible job of mitigating. The Earth is losing roughly 10 million hectares of forest every year, and, as a result, animal populations are becoming increasingly fragmented and isolated, limiting gene flow between populations. Without enough genetic diversity, a species can suffer from inbreeding depression: a reduction in its biological 'fitness', its ability to reproduce and survive in the wild. Reliable techniques for preserving and transporting genetic material between captive settings (or from the wild to captive settings) enable better management of genetic diversity while increasing the species' biological fitness.
So, what does the future of assisted reproduction look like?
While assisted reproductive techniques have clear immediate and future benefits for species conservation, their use is unfortunately not up to the conservationists and scientists but up to funding bodies and political bigwigs.
The importance of assisted reproductive techniques in the future of species conservation cannot be overstated, and researchers continue to build the case for them as reliable, effective tools for the protection of biodiversity. Conservationists and assisted reproductive biologists have chosen a difficult career, often restricted by funding issues and a pervasive misunderstanding among the general population of the importance of biodiversity. Although everybody loves the trailblazing, revolutionary discoveries and achievements in science, these discoveries are only possible after decades of fundamental research. Without proper funding or public interest in biodiversity, species conservation will remain an incredibly tough, arduous field. That being said, although progress may seem slow, if we continue to fight the uphill battle against climate change, we will be glad we invested in assisted reproduction science when we had the chance.
How long do you think is an appropriate time for students to commit to their PhD? If you ask around, the perceived range varies quite a bit: 3-4 years, 4-6 years, or even double-digit years. If we can't agree on the length of a doctoral degree (unlike, say, med, pharmacy, or law school), surely there must be other cemented parameters that guide students to graduation? Right?
How do you know when you are ready to graduate?
Most STEM doctoral students travel a similar path. They conduct research until their project is complete, then after writing a thesis and defending it, they are conferred with the title of "Doctor." The question is, when does one actually "complete" a research project? Research is never done. One question is answered, which leads to another question, which leads to another and to another.
So how does one judge if a graduate student's work is finished? I tweeted out this question some time ago and received various answers. The most common responses were the number of publications, completion of proposal aims, or the number of years in the program. A few other, less common answers were "vibe," "loss of funding," "up to the PI," and...
So let's break the most common answers down:
Number of publications: Many programs require 1-2 publications for a student to graduate. Often, this is considered fair. Other times, it puts the student at a disadvantage. This parameter neglects to account for the support systems within the lab. Some students have no lab techs, postdocs, or collaborators, meaning that publishing is a much more arduous task for them than for their counterparts. If a student is lucky, they receive a project that is low-risk or partially finished, while other students work on high-risk projects for years without a payout.
Completing proposal aims: On the surface, this seems to be an equitable stipulation for graduation. But projects frequently evolve and take the student in a direction different from their original proposal, rendering this parameter impossible. If students veer off path for the sake of scientific exploration, should they still be held to the same proposal they wrote years ago?
Years in program: Not all projects are created equal, and not all students put in the same effort over the same period of time. However, I argue that putting a time cap on the degree drives productivity, encourages streamlined research, and motivates PIs to support their students in finishing their projects.
Graduate student labor.
Graduate students are cheap labor, most making about $30,000 a year. PIs are reluctant to allow students to graduate, thereby forfeiting a valuable resource. Besides the project a student is working on, a grad student is also expected to train incoming lab members, maintain lab equipment, contribute to lab chores, and work on side projects other than their thesis.
We have to ask ourselves if more years in the same program with the same mentor is beneficial to a student's education and training. Is a seventh-year student still learning from their mentor, or are they underpaid employees receiving typical on-the-job training? Furthermore, extending a student's PhD training can certainly have setbacks.
Time at school should not be taken for granted.
A long PhD can have detrimental effects on a student's life and career:
From the many student-PI conflicts I've seen, it's naive to believe the current system of arbitrary graduation guidelines is working. To give more protection to graduate students, I propose two policies: (1) A 4-year time cap for students with a Master's degree or a 5-year time cap without a Master's degree, and (2) salary raises for students throughout the program.
Providing a time cap.
With no time cap, graduate students are encouraged to work on non-thesis research and to tackle high-risk projects that are unlikely to pan out. A time cap can motivate both the student and the mentor to come up with a practical project plan and remain focused on the proposed thesis work needed for graduation. Consider Parkinson's law, which states that work expands to fill the time allotted. Under this principle, a student will finish their dissertation research in four years if given a four-year time limit. Without the time limit, a student will linger in the program until an external factor (funding, unhappiness, or a job offer) pushes them to wrap up their project.
However, there are plenty of reasons to fight against a time cap: variability between disciplines, discrepancies in work ethic, and neurodiversity of students. This is why I recommend a 4-5 year time cap with an opportunity to extend. Extensions can be offered to students with disabilities, or if a student, PI, and thesis committee mutually agree staying in the program is beneficial.
Demanding raises for long-standing students
Paying senior students more money is another way to ensure equity for graduate students. Senior students take on more responsibility and are [usually] more skilled. Granting raises rewards students for more time in the program and also removes the cheap-labor bias.
It's not all the PI's fault...
It's easy to blame the thesis advisor. But if PIs aren't given the proper tools and support systems to keep a lab running smoothly, can we blame them for wanting to keep students on longer? Here are a few ways institutions can support PIs more:
I am aware that many European Universities have time limits. This post is written by an American graduate student whose program has a loose 7-year limit. I am a fourth-year student who intends to graduate during my fifth year. Even with three pubs (one published, one submitted, and one underway) and a Master's degree, I receive pushback that I should stay well into my 6th year. I'm convinced that more time in my PhD will not further my education or career prospects, but I am certain that it will affect my financial and mental health.
We’re a year into this pandemic, and although the numbers seem to be improving, video-chatting is here to stay. Even once we reach “normal,” the convenience and flexibility of virtual meetings likely means we have plenty of web-based interactions in our future.
So, when it’s time for you to plan your next zoom event, here are a few things to consider:
1. Schedule time for IT issues. Plan for connectivity issues and microphone checks for all speakers. And if using multiple speakers and breakout rooms, allow for some adjustment time. Don't expect all changes between speakers to happen smoothly and instantly.
2. Schedule breaks. Just because we are sitting at our laptops doesn't mean we don't need coffee, lunch, and bathroom breaks. During long meetings, your participants' brains likely need a break from dense content.
3. In-person format ≠ zoom format. I've participated in a few events where the organizer took the same schedule from previous in-person events and used it over webchat without modification. Without in-person socialization, audience members likely have shorter attention spans. So, for web-based events, less is more. Consider shortening your format. We don't want to stare at our computer screens for a full day!
4. Ask questions to improve engagement. You may have noticed there are fewer questions and less participation during webchat events. Utilize the poll functions and don't be afraid to have fun quiz questions with your audience.
5. Take advantage of breakout rooms (when necessary). Breakout rooms can be great for facilitating conversation, e.g., for panel discussions or for answering discussion questions. But gauge your audience: will breaking out into smaller rooms facilitate more conversation? Or will it dilute the pool of participants likely to actively participate?
6. Say good-bye to weekends. Conferences are often held on weekends due to the availability of parking, hotels, and conference rooms. But with the internet, availability is endless. Please, organizers, leave my weekends alone.
Although we complain about zoom-life, I love it! I can easily meet and talk with people around the world. Covid-19 brought about terrible atrocities, but at least it acclimated us all to the lovely world of video-chatting.
When we think of the hard-working scientist, we picture someone who enters the lab early in the morning, works through the day fueled by multiple cups of coffee, and, late at night, can be seen writing on a whiteboard, making the prime discoveries in their field.
Scientists have come to romanticize workaholism. We believe that the person who works the longest hours and sacrifices the most for their work will be the most successful. This idea comes from "grind culture" or "hustle culture."
If we believe in the "grind culture," we believe a lie. The person who works the longest hours is really the person losing out on the enjoyable things in life.
Think about the last time you put in a long day. At the end of that day, did you think, "Oh, I just keep wanting to do this forever?" Probably not. Instead, you probably thought about how much you want to go home and how you do not want to come in tomorrow.
As scientists, we can truly love and enjoy our work, but too much of anything can be a bad thing. The grind culture leads scientists to burnout, neglect self-care, and actually become less productive.
When I started graduate school, I believed this lie. I tried to devote nearly every waking hour to my work in order to be successful. At the end of every week, I wouldn't get out of bed on Saturdays until the afternoon. Even if I woke up around 10 a.m., I would just lie there questioning what was wrong with me. Once my partner finally convinced me to get up, I would eat and then start doing more work.
Not only did I waste a large amount of my time, but I also dealt with very high anxiety and depression through this time. Any moment that I wasn't working, I felt guilty. At my core, I believed that not working was an expression that I wasn't serious about my work.
In reality, this is a fairly disturbing notion.
After seeking therapy, I realized that my long hours and constant work are not what made me successful. What made me successful was my determination, problem-solving skills, and ability to develop ideas. Yet, burnout decreases all of these abilities that lead to success.
How to Really be Successful
Therefore, instead of working more hours, you should focus on becoming more efficient in your work. We are all inefficient. In fact, a recent study found that typical workers spend fewer than 3 hours actually working in an 8-hour workday.
Think about your regular day. How much of your time did you truly spend working on things that move your science forward?
On a typical workday, I spend time socializing with my colleagues, checking out my social media, watching shows or YouTube, and staring at my screen, not wanting to do work. Yet, I would be at work for over 10 hours, saying I worked 10 hours that day.
If you give in to grind culture and think you should work all the time, then you lose your motivation to do work. If completing work doesn't allow you to leave work sooner, what motivation do you have to complete work?
There are two principles that can help you become more productive by working less: Pareto's principle and Parkinson's law.
Pareto's principle states that 80% of your success comes from 20% of your effort. Therefore, if you think about your typical workday, only about 20% of your time is creating 80% of your success in science.
Parkinson's law states that work will expand to fill the time that it is allotted, meaning that if you give yourself 10 hours in a day to complete a task, it will likely take all 10 hours, even if it only really requires 2 hours of work.
If you apply both of these principles to your approach to work, you can work less and accomplish more by becoming a more efficient worker.
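As a toy illustration of Pareto's principle (the task names and "impact" scores below are entirely hypothetical, chosen so the numbers come out cleanly), a few lines of Python show how a small fraction of tasks can account for most of the impact:

```python
# Hypothetical tasks with rough "impact" scores (arbitrary units).
tasks = {
    "write results section": 40,
    "analyze new data": 40,
    "reorganize reference folder": 5,
    "reformat old slides": 5,
    "re-read papers already read": 3,
    "tweak figure colors": 3,
    "catch up on Twitter threads": 2,
    "check email repeatedly": 1,
    "tidy desk": 1,
    "sit in optional meeting": 0,
}

# Rank tasks by impact, highest first.
ranked = sorted(tasks.items(), key=lambda kv: kv[1], reverse=True)

# Take the top 20% of tasks (2 out of 10 here).
top = ranked[: len(ranked) // 5]
top_impact = sum(score for _, score in top)
total = sum(tasks.values())

print(f"Top 20% of tasks -> {100 * top_impact / total:.0f}% of total impact")
# -> Top 20% of tasks -> 80% of total impact
```

Real days won't split this neatly, of course; the point is simply that ranking your tasks by impact makes the high-leverage 20% visible.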
While you may be on board about becoming an efficient worker, you may still wonder how to become more efficient. So let's go step-by-step through a system that I created for myself, which has proven to make me more productive while decreasing burnout.
Set Work Hours
The very first step of becoming efficient is to set your work hours. You should set your work hours based on your lifestyle and work requirements.
Do you want to work 8 hours and be off in time to make an exercise class? Then set your work hours to complete your workday in time for your class.
However, you also need to take into account the needs of your work. What times do you have meetings? When does your boss or supervisor expect you to be around?
The benefit of setting your work hours is that you are already combating Parkinson's law. You now have fewer overall hours that work can expand to fill. Additionally, you can regain motivation because you know that you need to finish your task by a specific time so that you can leave work accomplished.
Make To-Do and Not-To-Do Lists
Once you have your work hours, you need to concentrate your efforts on the things that are bringing you success in your science. The best way to focus is to create to-do lists and not-to-do lists.
First, think about all of the things that you genuinely need to do to make progress. If you think you need to do everything, ask yourself, "If I could only work 2 hours a day, what would I do?" Suddenly, your brain will flood with the most important things that need to be done for you to be productive in science. Write these things down to make your to-do list.
Now, make a list of at least three things that you do that waste your time, such as tasks that make you feel productive but don't result in actual progress. For many graduate students, I believe that reading scientific papers for the sake of reading them should be on your list. Reading papers should be done for a specific reason, not simply so that you can feel productive or say you read so many papers that week.
Personally, my not-to-do list includes checking my social media, checking my email, and watching shows during my day. Place your to-do and not-to-do lists somewhere where you can see them regularly.
Block Out Your Time
The third part of becoming more efficient is to block out your time. The essence of this idea is to prevent you from task switching multiple times and wasting time as you move from one task to another.
There are two ways that I like to block out my time. The first is to theme my days, and the second is to create time blocks.
If you have specific themes to your work, then it is nice to theme your days. For example, if you are a graduate student, you may have coursework, research, and teaching. On a day that you teach, make it a teaching day. Take the time during the day to grade assignments and plan for the next week's lesson. On a day that you attend multiple classes, take the free time you have to study and do homework. On days that you have research meetings or primarily free days, focus that time on research-related activities.
Themed days help you plan your day, keeping you focused and allowing you to make progress on one task all day long.
Time blocks allow you to work on a single task for 45-90 minutes. Maybe this task is a meeting, class, or writing a paper, but after your set work time, you have the margin to move from one task to another. The way I prefer to do this is 90 minutes of work with a 30-minute margin. However, depending on your schedule, a 45-minute block with a 15-minute margin may work better.
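The 90-minute-work, 30-minute-margin scheme described above can be sketched in a few lines of code. This is just an illustrative sketch: the function name `make_time_blocks` and the 9-to-5 example day are my own, not anything from a real scheduling tool.

```python
from datetime import datetime, timedelta

def make_time_blocks(start, end, work_min=90, margin_min=30):
    """Split a workday into fixed work blocks, each followed by a margin."""
    blocks = []
    cursor = start
    while cursor + timedelta(minutes=work_min) <= end:
        block_end = cursor + timedelta(minutes=work_min)
        blocks.append((cursor.strftime("%H:%M"), block_end.strftime("%H:%M")))
        cursor = block_end + timedelta(minutes=margin_min)
    return blocks

# A 9:00-17:00 day yields four 90-minute work blocks with 30-minute margins:
for s, e in make_time_blocks(datetime(2021, 3, 1, 9, 0), datetime(2021, 3, 1, 17, 0)):
    print(f"work {s}-{e}")
```

Swapping in `work_min=45, margin_min=15` gives the shorter variant mentioned above; try both and keep whichever matches your attention span.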
Overall, the idea that you need to work longer hours to be successful is not only a lie but counterproductive. Instead, by increasing your motivation and the efficiency of your work, you can become more successful while maintaining your personal life. To become more efficient, I employ a three-step system: set your work hours, make to-do and not-to-do lists, and block out your time.
Operation Warp Speed, launched in early 2020, helped speed up the pace of vaccine innovation, turning the normally 10+ year clinical trials process into one that takes less than a year. To learn more about the vaccine development process, check out this post by Nidhi.
Although vaccine development has been significantly accelerated, it is essential to understand that vaccine development has not been rushed. In fact, despite operating in a public health emergency (the COVID-19 pandemic), vaccine research has been thriving. This is thanks to scientific collaboration, funding, and a quick and thorough review process, allowing scientists across the globe to develop the COVID-19 vaccines in under a year.
In this article, we will discuss Johnson & Johnson's COVID-19 vaccine. The Johnson & Johnson (J&J) vaccine will likely be approved by the US Food and Drug Administration (FDA) for use by late February or early March. The J&J vaccine is different from other COVID-19 vaccines in that it only requires one dose. As such, it may be the saving grace to the seemingly slow and clunky vaccination rollout in various countries, including the United States.
Why might the J&J vaccine be the pandemic saving grace?
The J&J vaccine may be the next one to receive authorization, after the Moderna and Pfizer vaccines. There are various advantages and disadvantages to using this vaccine.
The biggest drawback of the J&J vaccine is that it has lower efficacy than the Moderna and Pfizer vaccines. To understand the science of vaccine efficacy better, check out Sheeva's post. More specifically, the J&J vaccine has an efficacy of 72% in the United States, 66% in Latin America, and 57% in South Africa. By contrast, Moderna's COVID-19 vaccine has an efficacy of 94.5%, and the Pfizer COVID-19 vaccine has an efficacy of 95%.
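For context on what those percentages mean: trial efficacy is computed by comparing the attack rate (cases per participant) in the vaccinated arm against the placebo arm. Here is a minimal sketch of that calculation; the function name and the trial numbers are hypothetical, chosen only so the arithmetic lands on the 72% figure quoted above.

```python
def vaccine_efficacy(cases_vax, n_vax, cases_placebo, n_placebo):
    """Efficacy = 1 - (attack rate in vaccinated arm / attack rate in placebo arm)."""
    attack_vax = cases_vax / n_vax
    attack_placebo = cases_placebo / n_placebo
    return 1 - attack_vax / attack_placebo

# Hypothetical arms of 10,000 people each: 28 cases among the vaccinated
# versus 100 among the placebo group works out to 72% efficacy.
print(round(vaccine_efficacy(28, 10_000, 100, 10_000) * 100))  # 72
```

Note that efficacy is a relative risk reduction, which is one reason figures measured in different countries and time periods (as for the J&J trial arms above) are hard to compare head-to-head.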
Despite the lower efficacy rate, the J&J vaccine remains quite promising. It requires only a single dose, significantly simplifying the logistics for local health departments and clinics. Additionally, the vaccine is stable in a refrigerator (36°F to 46°F, or 2°C to 8°C) for several months. By contrast, other vaccines, such as the Moderna and Pfizer vaccines, require freezing at significantly lower temperatures of -4°F (-20°C) and -94°F (-70°C), respectively.
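The Fahrenheit and Celsius figures above are related by the standard conversion C = (F − 32) × 5/9. A quick sketch checking the quoted storage temperatures:

```python
def f_to_c(f):
    """Convert a temperature from Fahrenheit to Celsius."""
    return (f - 32) * 5 / 9

# Storage temperatures quoted above, checked in Celsius:
for f in (36, 46, -4, -94):
    print(f"{f}F = {f_to_c(f):.0f}C")  # 2C, 8C, -20C, -70C
```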
Johnson & Johnson's COVID-19 vaccine and the AdVac technology
The Johnson & Johnson vaccine in development (now seeking FDA authorization in the United States) goes by two names: JNJ-78436735 and Ad26.CoV2-S. The vaccine was developed by J&J's pharmaceutical arm, Janssen, using Johnson & Johnson's AdVac technology.
According to the Janssen website, AdVac technology is "based on development and production of adenovirus vectors (gene carriers)." The AdVac technology enables effective development of an adenovirus-based vaccine in response to emerging diseases, such as COVID-19, in a cost-effective and large-scale manner.
What is an adenovirus-based vaccine?
To explain what an adenovirus-based vaccine is, we first have to cover the basics of viral vector vaccines. The Oxford/AstraZeneca and J&J vaccines are both viral vector immunizations, meaning that a non-infectious virus is used as a shuttle to deliver genetic material from the target virus into our cells.
Think of a viral vector vaccine as a "cut-and-paste" vaccine: parts of one virus are cut and pasted into another. An adenovirus-based viral vaccine uses part of an adenovirus as a shell, and a gene encoding part of another virus (such as the novel coronavirus) is inserted into that shell.
In both the Oxford and J&J viral vector vaccines, the gene encoding the coronavirus spike protein is pasted into a "hollow" shell of an adenovirus. J&J specifically uses a strain named adenovirus 26 (Ad26). When the vaccine (i.e., the Ad26 shell carrying the spike protein gene) is administered, it invokes an immune response in the body. (Learn more about the spike protein here.)
After being vaccinated, our body will be able to respond to the virus more effectively, reducing the risk of infection. It does this through the quick and effective recruitment of immune cells and antibodies that prevent the virus from causing COVID-19 disease. To learn more about J&J's COVID-19 vaccine, check out Nidhi's post here. You can also learn more about the other top COVID-19 vaccines here.
Johnson & Johnson's FDA Emergency Use Authorization
On February 24, 2021, the FDA announced that J&J's single-shot COVID-19 vaccine met the criteria for Emergency Use Authorization (EUA). The company's EUA is based on the efficacy and safety data from its Phase 3 trials. The J&J vaccine will be a pivotal step towards ending the pandemic thanks to its single-dose regimen and the fact that it needs only normal refrigeration rather than super-cold storage.
Yes, there are more similarities between solar cells and pizza than you might think.
I'm a solar energy researcher working towards eliminating the defects in and improving the performance of industrial solar cells. A PhD is a long journey full of untimely experiments and countless sleepless nights, so I often find myself eating while working (definitely not in the labs!) and working while eating. One day, intrigued by how delicious a cheese pizza is, I realized how alike the pizza and the cell samples I work with are.
Build your own Solar Cell
A solar cell is a device that generates electricity when the sun shines on it. A group of these cells linked in sequence makes a solar panel that can generate significant power; that is what you see on people's rooftops, in solar-powered streetlights, and in calculators. Just as a good pizza starts with a perfect dough base, a solar cell begins with a very pure silicon wafer, scientifically called the 'base' (silicon is also the second most abundant element in the Earth's crust). Extra elements like boron or phosphorus are then added to this silicon base to make it more conductive.
Then come the toppings. Yes, both for the pizza and the cells! Pizzas are loaded with a bunch of toppings for various flavors; solar cells are likewise coated with some very thin layers, called 'dielectrics,' that enhance their performance. These layers reduce reflection off the surface to increase light absorption, passivate some surface defects, and offer some hidden benefits for the base. Silicon nitride, the most common dielectric used in the industry, is also responsible for the blue color you see on most solar panels (a silicon wafer is otherwise grey!).
We all know that pizza does not taste great after sitting in the fridge for a week. A similar degradation occurs in most solar cells. Once panels are installed and out in the sun, their performance degrades over the first few years (by anywhere between 2% and 10% in relative terms). Losing a slice or two of a pizza might not make a dent in your pocket, but this degradation is responsible for a loss of billions of dollars every year. We call it 'light-induced degradation' (or LID).
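To get a feel for what a few percent of relative loss means for even a single panel, here is a back-of-the-envelope sketch. All the numbers are hypothetical (a 400 W panel, a 20% capacity factor, a 5% relative loss); they are my own illustrative assumptions, not figures from the cited papers.

```python
def annual_loss_kwh(rated_kw, capacity_factor, rel_degradation):
    """Energy lost per year (kWh) from a relative drop in panel performance."""
    hours_per_year = 8760
    baseline_kwh = rated_kw * capacity_factor * hours_per_year
    return baseline_kwh * rel_degradation

# Hypothetical 400 W panel at a 20% capacity factor with a 5% relative loss:
loss = annual_loss_kwh(0.4, 0.20, 0.05)
print(f"{loss:.0f} kWh lost per year")  # 35 kWh lost per year
```

Multiply a loss on this order across the hundreds of millions of panels deployed worldwide and the billions-of-dollars scale of the problem becomes plausible.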
Light-Induced Degradation (Staleness)
LID is a family of defects that occur in the presence of light; technically speaking, though, it is the charge carriers generated by the light that are responsible, not the light itself. This degradation is not a new phenomenon, and researchers have been working on understanding and solving it for years. The good news is that one of the most common defects responsible for LID has now been nearly eliminated in most panels worldwide. Unfortunately, we now have a new variant of LID in all kinds of panels, one that only occurs at high temperatures under light: "light- and elevated temperature-induced degradation," or LeTID. More sunlight is essential for higher electricity generation from solar panels, but higher temperatures are detrimental (we only need the light, not the heat, for solar electricity generation).
This new kind of degradation is a focus of numerous researchers globally, including me. In my research, I work on mitigating this degradation by simply playing with the dielectrics (after all, it is all about the toppings, right?).
Firstly, we have found that reducing the thickness of the dielectrics can significantly mitigate this degradation (1). You can imagine how applying less tomato sauce can prevent the pizza from going soggy. Reducing the thickness means using less material and thus lower costs. However, there is a threshold beyond which reducing the thickness might lead to other kinds of losses.
Secondly, we found that the placement of the dielectrics plays a vital role in the extent of potential degradation (2). Studying multiple industrial cells, we observed that adding a very thin layer of a second dielectric can strongly modulate the degradation: it creates a barrier layer between the first dielectric (silicon nitride) and the silicon base. A third solution we found relates to the silicon wafer thickness (3): thinning the wafers resulted in far less degradation. Similar to how a thin-crust pizza can help prevent you from gaining extra calories if you are on a diet!
These three solutions effectively alleviate the degradation in current solar cells without increasing their manufacturing cost. With solar installations progressing at record levels each year, the mitigation of these defects will accelerate the transition to a cleaner world. So, we can leave the next generations with tastier pizzas and a healthier planet!
1. U. Varshney, M. Abbott, A. Ciesla, D. Chen, S. Liu, C. Sen, M. Kim, S. Wenham, B. Hoex, and C. Chan, "Evaluating the Impact of SiNx Thickness on Lifetime Degradation in Silicon," IEEE J. Photovoltaics, vol. 9, no. 3, pp. 601–607, 2019.
2. U. Varshney, C. Chan, B. Hoex, B. Hallam, P. Hamer, A. Ciesla, D. Chen, S. Liu, C. Sen, A. Samadi, and M. Abbott, "Controlling Light- And Elevated-Temperature-Induced Degradation with Thin Film Barrier Layers," IEEE J. Photovoltaics, vol. 10, no. 1, pp. 19–27, 2020.
3. U. Varshney, M. Kim, M. U. Khan, P. Hamer, C. Chan, M. Abbott, and B. Hoex, "Impact of Substrate Thickness on the Degradation in Multicrystalline Silicon," IEEE J. Photovoltaics, vol. 11, no. 1, pp. 65–72, 2020.
If I had to go back in time and give myself advice, I’d tell myself to be cautious of advice. Advice isn’t necessarily good or bad, but it’s often misguided or the wrong fit. In your scientific career, especially early on, it’s tempting to trust all the guidance tossed your way. More experienced scientists should know better than you, right? Not necessarily.
Here’s some advice I received and learned — through experience — to disregard.
An upside to academia is the freedom to make your own choices. But with freedom comes uncertainty. Grad school and science careers are challenging to navigate. Suitable, appropriate guidance will help you through, while erroneous, biased advice can hold you back. Practice healthy skepticism, and in the end, always choose what's best for you.
Have you been given extraordinarily ill-fitting advice as a scientist? If so, tell us in the comment section below! Or, tweet at us, @BoldedScience, #BadAdvice.
Entrepreneurship offers a unique opportunity to continue exploring and researching while learning new skills and tackling challenges that will dramatically enhance your career. However, it is paramount to recognize the stark differences between life in academia vs. life as an entrepreneur. Here are some examples of approaches that may need to change as you embark on your entrepreneurial journey.
As scientists, we like to gather as much data as possible before making a decision. Unfortunately, this isn't possible in entrepreneurship. You will need to adapt and become comfortable making tough, critical decisions with only 50% of the information.
Presentations & Discussions
Whenever I read a paper, attend a conference lecture, or give an academic presentation, the same setup is used: background, rationale, results, discussion, and finally, materials and methods. In the business world, the goal is to share the most important information in a short amount of time. You are expected to concisely explain the problem you are solving, your solution, and how you will achieve results.
Scope of Work
In research, we have our primary project, and we immerse ourselves in that topic, working days, weeks, and months with an intense singular focus. In contrast, entrepreneurs will need to maximize their limited time by conducting multiple initiatives concurrently. Learning how to effectively switch context between subjects to solve problems is a skill that will help you tremendously. One minute you may be engineering and the next you may have to close a sales deal.
While in academia, there are a few things you can do to prepare yourself for this transition.
1. Contribute to lab members’ projects or collaborate with other labs.
The reason I'd advocate for this is that it forces you to work with other people and groups. Academia can often mean working solo, which prevents you from learning the critical soft skills needed to succeed in a team-based environment where there are multiple chefs in the kitchen. In entrepreneurship, great teamwork and effective communication can make the difference between success and failure.
2. Leverage your network and ask questions
If you’re looking to build a business in the life-sciences sector, you are in the prime spot to do some target market research. Reach into your network, speak with your colleagues, and investigate your business problem. There is no better time to do this. Scientists are far more likely to answer your questions as a graduate student than when you cold-call them as an entrepreneur.
3. Reading literature or books about building start-ups is helpful, but the best learning comes from experience. Find a start-up in the industry where you want to build your business and work there part-time. Listen, learn, and observe. Pick up on how teams self-organize, how company culture develops, and keep an eye out for solid work practices. How are meetings structured to maximize efficiency? How are large teams coordinating their work? How are company leaders communicating with each other? When it's time to break out on your own, you'll have a point of reference for how processes work. Stitch together practices that worked well, and learn from the mistakes you've made during this experience.
For those in academia who have already begun their transition into entrepreneurship, there are two important considerations you should make before diving in.
Intellectual Property (IP)
The very first thing you need to do is read through your institution’s intellectual property (IP) agreement to find a clause that details who really owns the IP.
Most institutions stipulate that the IP generated within the scope of your employment belongs to the institution. Here are some things to consider:
Feasibility of Entrepreneurship
Secondly, you should look to sort out the logistics of the transition. Prior to your capital raise, you'll be responsible for the business's operating costs and your own living expenses. All businesses are different, but I'd recommend preparing for 8-12 months with no income.
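A simple runway calculation makes that 8-12 month recommendation concrete. The dollar figures below are purely hypothetical placeholders; plug in your own living and operating costs.

```python
def runway_needed(monthly_living, monthly_operating, months):
    """Savings needed to cover living and operating costs before any income."""
    return (monthly_living + monthly_operating) * months

# Hypothetical figures: $2,500/month living costs, $1,500/month operating
# costs, planning for the 12-month end of the 8-12 month range:
print(f"${runway_needed(2500, 1500, 12):,}")  # $48,000
```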
Transferable skills and interests
In academia, we are passionate about our research and motivated to contribute to the scientific community. This is one of the greatest things about being a scientist. But there is more than one way to make an impact. Entrepreneurship in life sciences provides a unique opportunity for us to solve problems that the community faces while staying close to our research roots. At BioBox, our team remains deeply connected to our academic origins and is committed to solving the challenges we faced while in academia. The days are long and the pressure is high, but it is a feeling we grew used to during our time grinding in the lab, spending countless hours trying to get our experiments to work.
One of the best things you learn in academia is the importance of self-sufficiency. In your research project, you are most likely the single champion and key driver of that project. Your years of work in a self-directed and independent environment prepare you very well for the challenges you will face when building your own business.
In many ways, academia is an excellent training ground for entrepreneurship. For those who are considering breaking out and building their own business, I can assure you that it will be one of the most rewarding experiences you’ll have in your career.