Bolded Science is on vacation for the month of May. In the meantime...check out some of our older posts.
Considering writing for Bolded Science?
Are you looking to add more writing samples to your sci-comm portfolio? Do you have a science-related cause you'd like to advocate for? Are you conducting super interesting research that you are just dying to tell everyone about? Pitch a blog post to Bolded Science: TheBoldedScientist@gmail.com. It's simple: send an email introducing yourself, along with 1-2 sentences explaining what you'd like to write about. Then, we will pick a publish date. Once you are approved, write your post and email it two weeks before your publish date.
Questions? Please see the FAQ page: https://www.boldedscience.com/faq.html. If your question isn't answered there, please send an email and I'd be happy to help.
Thank you to all readers and writers of Bolded Science. This collaborative blog is made possible by our curious, knowledgeable, and passionate community of scientists. In just over a year, Bolded Science has grown to more than 4,000 Twitter followers, thousands of blog views, and over 60 blog posts.
Accepting blog posts for the following publication dates:
(First draft is due two weeks before publish date)
All the best,
Kerry Silva McPherson
Creator and Editor of Bolded Science
PS: Please be patient with email replies; I'll only be checking my inbox a couple of days a week.
What do you do when you no longer need an item? After you unwrap a Mars bar, what happens to the packaging? How much thought do we put into what we throw away? How much thought do we put into waste overall?
Waste is a big problem, well, actually HUGE. Yes, huge is the right word. National Geographic reports that an estimated 8 million tons of plastic end up in the oceans annually. Furthermore, researchers project that by 2050 the volume of mismanaged plastic waste will climb to 155-256 million metric tons per year. But plastic is only part of what we're talking about today, so the overall numbers are even higher!
Where could we possibly hoard all this waste? How can we possibly make all this waste? And how can we get rid of it?
"And then there was waste" - the origins
Definitions for waste:
(a) the useless remains of human activity
(b) materials that are useless to the producer
(c) materials that we are willing to pay to dispose of
In no uncertain terms, waste is the stuff we no longer need and are happy to discard. So, how do we handle waste? Well, categories. Since waste is a megaton problem, knowing its sources and properties can help us understand and deal with it better.
Categories of waste depend on their sources; the two overarching categories are (a) municipal waste and (b) industrial waste. Municipal waste is produced in our private lives: our homes, areas of recreation, and everyday activities. On the other hand, industrial waste refers to discarded, emitted, and leftover materials from industrial activities. The division is not always clear. For example, hospitals and restaurants are workplaces for staff but places of health services and recreation for patients and consumers, respectively.
We, as consumers of goods, have control over the waste we produce. We, as members of industry, can control the waste that industry produces. We, as the human race, are responsible for 100% of the waste deposited on the planet. The difference between human waste and animal waste is that our waste cannot be absorbed by nature because of its volume and consistency.
Industrial waste vastly outweighs municipal waste. A 2011 waste report for the UK (population of around 65 million people) recorded 27,300 thousand tons (27.3 million tons) of municipal waste and 41.1 million tons of industrial waste.
The story these numbers tell is that for every 100 tons of waste the UK produced in 2011, roughly 40 tons came from our private lives and 60 tons came from our business lives.
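If you'd like to check that arithmetic yourself, here is a minimal sketch in Python (purely illustrative; the only inputs are the two 2011 UK figures quoted above):

```python
# Sanity-check the 40/60 split using the 2011 UK figures quoted above.
municipal_tons = 27.3e6   # municipal waste, tons
industrial_tons = 41.1e6  # industrial waste, tons

total = municipal_tons + industrial_tons
print(f"Municipal share:  {municipal_tons / total:.0%}")   # ~40%
print(f"Industrial share: {industrial_tons / total:.0%}")  # ~60%
```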
But recycling — you say?
I hear you. Let's take a quick look. Many countries have recycling schemes, some better than others. However, we do not recycle fast enough, we do not manage waste well enough, and recycling technologies are still in their infancy.
More effective recycling and management can bring the numbers down. Still, the question remains: where does the waste go? Where do we deposit it?
A game of hide and seek – depositing waste
Deposit sites differ according to geography, management policies, and laws. However, all countries have landfills to bury waste in the ground. In fact, landfilling is the most common way of putting waste out of sight.
Some countries even allow depositing waste directly into water bodies. This practice is not necessarily by design. Rather, the lack of legislation making dumping in water illegal, and the lack of enforcement of existing legislation, enable large-scale water pollution. Even when it is illegal, companies, industries, and businesses still do it. Citizens still do it. Remember when you visited the beach with bags full of snacks? Did you remember to pick up the waste before leaving?
Another "eco-friendly" way to manage waste is incineration – we burn it. Incineration is not a new practice and involves a lot of chemistry and social impact. Incineration is now considered eco-friendly because it is used to produce energy. So we no longer just burn it. We burn it with a purpose. On the flip side, the by-products of incineration may include toxic substances or heavy metals. Ashes and pulp are the end by-product of incineration. Pulp is likely to end up in a landfill.
Take-away message: the Earth has afforded us the grace of not thinking about our waste for a few hundred years now, but we can no longer be as thoughtless. The numbers are daunting. When waste goes out of sight, it doesn't go out of existence. Landfills leak into the ground and poison the soil. Ocean life is obviously in danger from pollution, and the air is fast becoming a concern – think of citizens in industrial cities who wear masks all year round due to poor quality air.
The lesson learned is that traditional ways of managing waste have gross negative effects. Despite the research, innovation, and promising new practices, the negatives still outweigh the positives. More thoughtfulness about what goes into our waste bins is necessary, because awareness helps initiate action.
Are scientists born or nurtured?
Considering that men predominantly hold STEM careers, arguments have been made that women aren’t inherently suited for science. Regardless of where you stand on this topic, this sex gap can partially be attributed to pervasive gender stereotypes in the sciences, specifically around what it means to be a “scientist.”
I attribute my early life relationship with science to gender bias. Growing up, I remember loving science programs and toys, but I never considered it for a career. In fact, I avoided participating in science activities throughout school for fear of failure. And the more aware I became of what it meant to be a “scientist,” the more alienated I felt.
Contrastingly, writing, a more decidedly “feminine” pursuit, was where I felt at ease. And consequently, I focused my attention on the humanities as a student. This led me to pursue an undergraduate degree in media, with a dream of doing music industry research in the UK. I still flirted with science, though, by adding a second major, psychology, to my workload. But psychology was my “fun” major. Namely, I was interested in learning about people, disorders, and the brain, but in the “safe” environment of having it be my “second major.”
Nonetheless, despite my “fear” of science, I graduated with a 4.0 GPA in both media and psychology at Penn State, which got me into a great communications graduate program abroad. I loved living abroad; I’ve considered doing it again. But despite it being a positive experience, I was unfulfilled with the research that I was doing. I longed to do research that positively impacted people, much like my dad, who had a PhD in pharmaceutical chemistry. My dad had worked as both a senior industry scientist developing pharmaceuticals and as a pharmacist. I wanted to benefit others the way he had.
At this crossroads, I changed my research direction by applying to PhD programs in health communications in the US. My plan was to relate my background in media and psychology to public health. It seemed like an obvious transition. But despite interviews with Cornell and Michigan State, I was rejected from every school where I applied. Additionally, on the day that I received my final rejection letter, I witnessed my dad dying of an unexpected heart attack while we were shoveling snow. To clarify, my dad wasn’t aware of my rejection letters when he passed and died believing that I was following in his footsteps. Looking back on this, I’m not sure how I could continue functioning as a productive adult. But somehow, the experience helped me thrive.
With nothing to lose, I decided to pursue a Master of Science in psychology. I had always been fascinated by the brain, despite shying away from science. Over the years, I tentatively developed the desire to study the neurobiology of eating disorders. My interest in eating disorders stemmed from being a former figure skater. I was interested in how some individuals developed eating problems while others didn’t. Additionally, during my undergraduate media studies, I became frustrated with the argument that eating disorders resulted from a culture of thinness. While this is partially true, I wanted to understand these disorders from a biological perspective and enlighten the public about their complexity. This career ambition wasn’t something that I readily shared, though. How could I study neurobiology without a science background?
But I was willing to try.
Taking neuroscience courses as a graduate student has been very challenging and, at times, embarrassing. For example, after getting a low C on my first neuroscience exam, I set up a meeting to talk with faculty about switching my degree to counseling. My grade seemed to confirm that I wasn’t a scientist. But I thought about my dad and how proud he would be of me, and I reasoned that all I could do was try. Plus, I had earned a tuition-free scholarship, and I couldn’t give that up.
So, that first semester, I attended office hours with my neuroscience professor every week, sometimes for two hours at a time, and I got an A in the course. In my second semester, I joined one of the few biology-based labs in the program, a pharmacology lab that used serotonin and dopamine deficient mice to study behavior. This experience was symbolic for me, as my dad had worked in pharmaceuticals. It was rough, though. I had zero lab experience, no chemistry knowledge, and had never handled rodents. I fumbled over measurement conversions (I still do) and other basics that the undergrads did easily. Nonetheless, with a lot of humility and perseverance, I completed my thesis on the brain and behavioral effects of fluoxetine on binge eating in serotonin deficient mice in two years.
After earning an MS in psychology, am I a scientist? I didn’t feel like one after going through another cycle of PhD rejections, this time for neuroscience. I was tired. I’d worked too hard to give up, though, and focused on bolstering my CV. I volunteered in an eating disorders lab at Temple and proactively contacted potential advisors. This third round of applications stuck, and I am currently pursuing a PhD in neuroscience at Purdue, focusing on the neurobiology of eating disorders.
As a PhD candidate in a STEM field, am I a scientist? Occasionally I still have significant moments of imposter syndrome. However, my perception of a scientist is changing.
I’ve realized that science is a messy, mistake-laden series of fields. The expectation is to follow perfectly laid-out steps and complicated calculations, but in reality we try, fumble, and try again. We improvise and problem-solve and often have no idea what we’re doing. Even more importantly, I’ve realized that the atmosphere of exclusivity and masculinity that enshrouds the scientific community is unnecessary. We need to correct these public misconceptions about what it means to be a scientist in order to increase accessibility and diversity in science. By shattering the stereotype of what it means to be a scientist, we have the potential to broaden contributions to the scientific enterprise while increasing the trust of science skeptics.
My advice for those who enjoy science but have avoided it because it “wasn’t their place”:
Looking towards the future, finding a career that highlights my diverse skillset is challenging. However, carving out new spaces for scientific enterprise is one way to update what it means to be a scientist. I hope that you, too, will be inspired to follow your curiosity down new paths, undeterred by failure but emboldened by discovery.
A post in collaboration with The Shared Microscope
Caution: Smoking tobacco is harmful to health and, as most of you know, is carcinogenic. This article in no way encourages the use of tobacco as protection against COVID-19.
In response to the COVID-19 pandemic, several drug companies, including AstraZeneca, Moderna, Pfizer, Johnson & Johnson, Sinovac, and Novavax, have successfully created vaccines that are currently being deployed to individuals worldwide. Medicago and GSK have also joined forces to develop a unique vaccine against COVID-19.
Medicago, a biopharmaceutical company headquartered in Canada, focuses on developing plant-based therapeutics in response to global health challenges, COVID-19 being no exception. In response to the pandemic, Medicago (alongside GSK) developed a COVID-19 vaccine unlike any of the other vaccines recently developed against COVID-19. So what's special about it? Interestingly, this is the first COVID-19 vaccine that is entirely plant-based. Yep, you heard that right — made in plants! Tobacco plants (ish)!
What plant is the Medicago COVID-19 vaccine made in?
Medicago's vaccine is made in a close relative of the tobacco plant, Nicotiana benthamiana. Ironically, tobacco has plagued humans with lung conditions ranging from COPD to lung cancer for decades. Surprisingly, this notorious plant has been critical in manufacturing vaccines against COVID-19-related pneumonia in the past year.
Plant-based vaccine production is cheap, safe, and fast for developing vaccine ingredients such as coronavirus-like particles. Plant-based vaccines are also highly scalable and provide manufacturers with increased flexibility. As highlighted by the peer-reviewed paper linked here, Nicotiana benthamiana is the core production host for various companies aiming to further human medicine, including Medicago, Icon Genetics, iBio, and Lead Expression Systems.
Nicotiana benthamiana is often the production host of choice because it has a defective plant immune system, making it easy to produce new vaccine ingredients. Research suggests that the plant sacrificed its defense system in favor of a hastened reproductive cycle, a strategy that enabled it to cope with severe drought in central Australia. This susceptibility to infection enables the plant to quickly undergo genetic transformation and transient gene expression, making it an excellent mini-factory for protein production.
How are plants used in vaccine development?
For over a decade, Medicago has been developing a technology that harnesses the power of plants to develop vaccines. More specifically, using a bacterium, the plant is "fed" information about a virus to produce the main ingredient of a vaccine (also known as the active ingredient).
The plant is programmed to produce the vaccine -- to do this, Medicago only needs the virus's genetic sequence rather than the virus itself. The relevant genes are then introduced into the plant via a bacterium, and the plant's normal machinery produces the vaccine ingredient through a natural process within the tobacco plant. The plant can produce the vaccine's active ingredient in approximately one week from the initial introduction of the genetic material. The active ingredient is then harvested and purified for use in a vaccine candidate. The vaccine candidate can then be produced within 5 to 6 weeks.
For the coronavirus vaccine in development, Medicago focuses on producing Coronavirus-Like Particles that mimic the virus's structure. Please note: although the Coronavirus-Like Particles are structurally identical to the SARS-CoV-2 virus (which causes COVID-19), they do not contain any of the genetic material of the virus, and therefore, are unable to cause infection.
To learn more about the COVID-19 vaccine in development by Medicago, feel free to check out the following video:
However, the production of the vaccine candidate is only half the battle won: the vaccine candidate then has to pass various safety and efficacy tests before the vaccine can be commercialized for use in humans. This testing process is further explained in an article linked here.
Tell me more about the Coronavirus-Like Particle Technology
Virus-like particles, such as the coronavirus-like particles manufactured by Medicago, are structurally identical to wild-type viruses. However, they lack the genetic material inside the virus. Because of this, the virus-like particles are unable to replicate or cause infection in the vaccinated individual.
Below is an image of Medicago’s CoVLP compared to an image of a wild-type SARS-CoV-2 virus:
It can be argued that vaccines produced in plants are faster and more accurate to make because no manipulation of the virus is required. Vaccine development also does not require the virus to be handled in the laboratory. Research so far suggests that virus-like particles have elicited an equivalent or superior immune response in mice compared with live viruses.
Virus-like particles cannot develop any mutations and are structurally stable, unlike the mutations that may sometimes arise during traditional vaccine manufacturing. The virus-like particles can elicit an immune response via antigen-presenting cells (a type of white blood cell) found in the human body, leading to a robust immune response in the vaccinated individual.
The virus-like particles fool the immune system in order to protect an individual from disease. Because they look like the disease-causing virus, they trick the immune system into making antibodies that will protect against the actual infectious pathogen if the person is naturally exposed to it.
Isn't Medicago working with GSK for this vaccine?
Yes, Medicago has joined hands with GSK for the development of their COVID-19 vaccine. As part of the partnership, Medicago manufactures the active ingredient to be used in the vaccine, while GSK provides its pandemic vaccine adjuvant system.
What is the importance of the adjuvant? The adjuvant plays a vital role in a pandemic, as it did during the last flu pandemic in 2009. The pandemic adjuvant reduces the amount of vaccine protein required per dose, allowing more vaccine doses to be produced overall. In other words, the Medicago vaccine can be "diluted" using GSK's pandemic adjuvant, which can help to protect more people overall. Far from weakening the vaccine, the adjuvant enhances the immune response and helps provide long-lasting immunity against infection.
Is the Medicago vaccine vegan?
This seems like an apt question here. Is the Medicago COVID-19 vaccine really vegan? Medicago has not directly responded to this question (yet). But one thing is clear: the active ingredient of the vaccine, the coronavirus-like particle, is not of animal origin. More information is needed about all the other ingredients used in the vaccine before we can say with certainty whether or not the vaccine is vegan.
What's the current situation?
The Canadian company, alongside GSK and Philip Morris, recently reported promising results from their Phase I and II clinical trials. The vaccine is now in the final phase of human trials. To learn more about the vaccine development process, feel free to check out this article here.
Pending the results of the Phase III trials, the company has reached an agreement with the Canadian government to accelerate its COVID-19 vaccine efforts: Medicago has agreed to supply 76 million doses of its COVID-19 vaccine within Canada.
Carpe Fiscus! The time is ripe to stimulate the "pathway to independence" for early career researchers.
In August 2017, faced with the increasing probability of drastic budget cuts under the Trump administration, the NSF Directorate of Biological Sciences announced it would no longer be funding its long-praised Doctoral Dissertation Improvement Grants (DDIGs). This decision marked a turning point in funding opportunities for graduate students, who are particularly vulnerable to a lack of grant options. As a graduate student at the time, I wrote about the decision and its ramifications for myself and my peers, describing the NSF's choice to slash the program as a brand of "trickle-down" academics. I noted that the decision to cut DDIGs was worrisome for early-career researchers, heralding a further consolidation of academic power at the very top levels of the hierarchy and diminishing agency for already-vulnerable trainees.
Just under four years later, the world and I have both moved on – I passed my dissertation defense and began a new chapter in my academic training. The US handed the reins of power over to a markedly different administration, one constantly challenged by the lingering watermarks of its inherently anti-science predecessors. With this country-wide transition and the eyes of the world increasingly on academic research during a global pandemic, we sit at the cusp of an incredible opportunity to push funding opportunities for early-career researchers further than ever before.
Funding for Early Career Researchers
Most graduate students in STEM are funded through a combination of research and teaching assistantships, the money for which comes from grant and institutional funding, respectively. Grant funds are often awarded directly to the student's principal investigator or PI — the person directly responsible for mentoring and supervising graduate students. Teaching assistantships are often seen as less desirable by the students and their PIs, as teaching takes time away from research. Therefore, research assistantships are prized but depend entirely on the PI for funding and tend to be fairly restricted in subject matter. The PI faces pressure to publish and present on the experiments laid out in the original proposal as this evidence of success is instrumental in future funding decisions; there is usually little room for creativity or independence from the grad student.
Funding opportunities for postdoctoral fellows are often similarly awarded to the PI rather than the fellows themselves, with fairly rare exceptions. The NSF has a single program aimed solely at biology postdoctoral fellows called the postdoctoral research fellowship in biology (or the PRFB), while the NIH, the other major funder of foundational research in the US, has several opportunities geared directly at postdocs, including their F32, K25, and K99/R00 awards. Competition for these awards is fierce. It is much more common for postdocs to be funded as a component of grants awarded to established PIs, leaving little opportunity for postdocs to control the direction of their research. Instead, they remain bound to research that can directly tie into the goals of the grant they are funded under – goals which they may have had no hand in setting.
Rightly or wrongly (and it would be the topic of a whole other post to unpack whether it's right or wrong), we predominantly train our graduate students and postdocs to be future PIs. PIs must be able to independently find funding opportunities, generate innovative research proposals, and follow through on those proposals' aims. These steps all involve thinking creatively, responding to unpredictable events, and adjusting accordingly. While funding decisions are based strongly on publication record, they also depend heavily on a proven track record of securing independent funding. Depriving our early-career trainees of opportunities to establish a funding track record makes their professional and academic journeys much harder. It takes away their agency, leaving them more reliant on their PI.
Hopefully, if you've read this far and are still invested in reading further, I've convinced you that funding opportunities for early-career researchers should be expanded. What can we all do to make sure that this expansion happens? Well, the answer will likely depend on who you are and your specific role within publicly-funded research.
For fellow academics, especially those further along in their careers:
While most people are aware of the threats posed by climate change, few know just how drastic those threats are to biodiversity. According to the World Wildlife Fund (WWF), the Earth loses roughly 10,000 species every year, a rate roughly 5,000 times higher than the natural extinction rate.
While zoos are an effective way to house endangered or threatened species, the reproductive biology of these animals is largely unexplored and is becoming increasingly important for species conservation. Two of the most pressing issues facing zoos today are space and a lack of genetic diversity. Even when zoos are well managed and internationally connected, they rarely contain large enough animal populations for long-term sustainability. Moreover, when new animals are brought in to revitalise captive population genetics, the logistics of moving animals between zoos can be extremely challenging (imagine the logistics and costs of moving an elephant or giraffe from New Zealand to New York). This is where assisted reproduction can play a significant role.
What is assisted reproduction?
Broadly speaking, assisted reproduction involves managing an animal's reproductive cycle or manipulating gametes to achieve fertilisation and a subsequent pregnancy/live birth. Some of the most common assisted reproductive techniques in our arsenal are gamete cryopreservation, artificial insemination, and in-vitro fertilisation (IVF). Assisted reproductive techniques have become so well defined in humans that, since the birth of the world's first IVF baby in 1978, around 8 million children have been born from assisted reproductive techniques globally. Assisted reproductive techniques have also become so commonplace in laboratory rodents and farm species that we often forget the incredible difficulty of defining the fundamentals of a novel species' reproductive biology. Unfortunately, this is exactly the case with many endangered or threatened species. Even artificial insemination, one of the more basic assisted reproductive techniques, requires an in-depth understanding of male and female reproductive physiology before we can even think of making an attempt. Although daunting, once even simple techniques like AI or reproductive cycle management are defined, assisted reproductive techniques can be incredibly useful in supporting captive breeding efforts.
As I mentioned earlier, transporting some animals between zoos (let alone continents) is extremely challenging. Sperm cryopreservation is an effective procedure for many species: semen is collected either voluntarily or through electroejaculation and frozen without dramatically affecting sperm viability. Similarly, even cells from wild individuals can be collected, frozen, and used in captive breeding programs. Cells frozen correctly can (in theory) remain viable forever and be shipped around the world far more cheaply and simply than an entire animal. Several institutes around Australia, including the Taronga Conservation Society and Monash University, have adopted this idea and established the futuristic concept of a 'Frozen zoo.' Frozen zoos store cells from endangered animals and plants in liquid nitrogen until they're needed for future genetic reintroduction programs into captive or wild populations through techniques such as artificial insemination or IVF.
I think it needs to be clearly stated that assisted reproductive techniques are not intended to (nor, I think, will they ever) replace captive breeding. Assisted reproductive techniques are tools that scientists, conservationists, and zoo staff can use to more effectively increase captive animal numbers without replacing traditional breeding methods.
Have frozen zoos and assisted reproductive techniques been useful before?
In practice, assisted reproductive techniques are rarely used in captive settings due to their technical complexity and perceived costs. However, assisted reproduction continues to make headlines in the scientific literature and the media, including artificial insemination in giant pandas and jaguars, cryopreservation in coral and fish species, and, most recently, the cloning of black-footed ferrets from cells frozen over 30 years ago. While it may seem drastic to start cloning rhinos or freezing sperm from lions, climate change poses incredible threats to biodiversity, which we are doing a terrible job of mitigating. The Earth is losing roughly 10 million hectares of forest every year, and, as a result, animal populations are becoming increasingly fragmented and isolated, limiting gene flow between populations. Without enough genetic diversity, a species can suffer from inbreeding depression: reduced biological 'fitness' and a diminished ability to reproduce and survive in the wild. Reliable techniques for preserving and transporting species' genetics between captive settings (or from the wild to captive settings) enable better management of genetic diversity while increasing that species' biological fitness.
So, what does the future of assisted reproduction look like?
While assisted reproductive techniques have clear immediate and future benefits for species conservation, their use is unfortunately not up to the conservationists and scientists but up to funding bodies and political bigwigs.
The importance of assisted reproductive techniques to the future of species conservation cannot be overstated, and researchers continue to build the case for assisted reproductive techniques as reliable, effective tools for the protection of biodiversity. Conservationists and assisted reproductive biologists have chosen a difficult career, often restricted by funding issues and a pervasive misunderstanding of the importance of biodiversity among the general population. Although everybody loves trailblazing, revolutionary discoveries and achievements in science, these are only possible after decades of fundamental research. Without proper funding or public interest in biodiversity, species conservation will remain an incredibly tough, arduous field. That being said, although progress may seem slow, if we continue to fight the uphill battle against climate change, we will be glad we invested in assisted reproduction science when we had the chance.
How long do you think is an appropriate time for students to commit to their PhD? If you ask around, the perceived range varies quite a bit: 3-4 years, 4-6 years, or even double-digit years. If we can't agree on the length of a doctoral degree (the way we can for med, pharmacy, and law school), surely there must be other cemented parameters that guide students to graduation? .... Right....?
How do you know when you are ready to graduate?
Most STEM doctoral students travel a similar path. They conduct research until their project is complete, then after writing a thesis and defending it, they are conferred with the title of "Doctor." The question is, when does one actually "complete" a research project? Research is never done. One question is answered, which leads to another question, which leads to another and to another.
So how does one judge whether a graduate student's work is finished? I tweeted out this question some time ago and received various answers. Most responses centered on the number of publications, completion of proposal aims, or the number of years in the program. A few other, less common answers were "vibe," "loss of funding," "up to the PI" and....
So let's break the most common answers down:
Number of publications: Many programs require 1-2 publications for a student to graduate. Often, this is considered fair. Other times, it puts the student at a disadvantage. This parameter neglects to account for the support systems within the lab. Some students have no lab techs, post-docs, or collaborators, meaning that publishing is a much more arduous task for them than for their counterparts. If a student is lucky, they receive a project that is low-risk or partially finished, while other students work on high-risk projects for years without a payout.
Completing proposal aims: On the surface, this seems to be an equitable stipulation for graduation. But often, a student's work drifts away from their original proposal, rendering this parameter unworkable. Projects frequently evolve and take the student in a direction different from their proposal. If they veer off path for the sake of scientific exploration, should they still be held to the same proposal they wrote years ago?
Years in program: Not all projects are created equal, and not all students put in the same effort over the same period of time. However, I argue that putting a time cap on the program drives productivity, encourages streamlined research, and motivates PIs to support their students in finishing their projects.
Graduate student labor.
Graduate students are cheap labor, most making about $30,000 a year. PIs are reluctant to allow students to graduate, thereby forfeiting a valuable resource. Besides the project a student is working on, a grad student is also expected to train incoming lab members, maintain lab equipment, contribute to lab chores, and work on side projects other than their thesis.
We have to ask ourselves if more years in the same program with the same mentor is beneficial to a student's education and training. Is a seventh-year student still learning from their mentor, or are they underpaid employees receiving typical on-the-job training? Furthermore, extending a student's PhD training can certainly have setbacks.
Time at school should not be taken for granted.
A long PhD can have detrimental effects on a student's life and career:
From the many student-PI conflicts I've seen, it's naive to believe the current system of arbitrary graduation guidelines is working. To give more protection to graduate students, I propose two policies: (1) A 4-year time cap for students with a Master's degree or a 5-year time cap without a Master's degree, and (2) salary raises for students throughout the program.
Providing a time cap.
With no time cap, graduate students are encouraged to work on non-thesis research and to tackle high-risk projects that are likely not to pan out. A time cap can motivate both the student and the mentor to come up with a practical project plan and remain focused on the thesis work needed for graduation. Consider Parkinson's Law, which states that work will expand to fill the time allotted. Under this principle, a student will finish their dissertation research in four years if given a four-year time limit. Without the time limit, a student will linger in the program until an external factor (funding, unhappiness, or a job offer) influences them to wrap up their project.
However, there are plenty of reasons to push back against a time cap: variability between disciplines, discrepancies in work ethic, and the neurodiversity of students. This is why I recommend a 4-5 year time cap with an opportunity to extend. Extensions can be offered to students with disabilities, or if the student, PI, and thesis committee mutually agree that staying in the program is beneficial.
Demanding raises for long-standing students
Paying senior students more money is another way to ensure equity for graduate students. Senior students take on more responsibility and are [usually] more skilled. Granting raises rewards students for more time in the program and also removes the incentive to keep them on as cheap labor.
It's not all the PI's fault...
It's easy to blame the thesis advisor. But if PIs aren't given the proper tools and support systems to keep a lab running smoothly, can we blame them for wanting to keep students on longer? Here are a few ways institutions can support PIs more:
I am aware that many European Universities have time limits. This post is written by an American graduate student whose program has a loose 7-year limit. I am a fourth-year student who intends to graduate during my fifth year. Even with three pubs (one published, one submitted, and one underway) and a Master's degree, I receive pushback that I should stay well into my 6th year. I'm convinced that more time in my PhD will not further my education or career prospects, but I am certain that it will affect my financial and mental health.
We’re a year into this pandemic, and although the numbers seem to be improving, video-chatting is here to stay. Even once we reach “normal,” the convenience and flexibility of virtual meetings likely means we have plenty of web-based interactions in our future.
So, when it’s time for you to plan your next zoom event, here are a few things to consider:
1. Schedule time for IT issues. Plan time for connectivity issues and microphone checks for all speakers. And if using multiple speakers and breakout rooms, plan for some adjustment time. Don't expect all changes between speakers to happen smoothly and instantly.
2. Schedule breaks. Just because we are sitting at our laptops doesn’t mean we don’t need coffee, lunch, and bathroom breaks. During long meetings, your participants' brains likely need a break from dense content.
3. In-person format ≠ Zoom format. I’ve participated in a few events where the organizer took the same schedule from previous in-person events and used it over webchat without modification. Without in-person socialization, audience members likely have shorter attention spans. So, for web-based events, less is more. Consider shortening your format. We don’t want to stare at our computer screens for a full day!
4. Ask questions to improve engagement. You may have noticed there are fewer questions and less participation during webchat events. Utilize the poll functions, and don’t be afraid to run fun quiz questions with your audience.
5. Take advantage of breakout rooms (when necessary). Breakout rooms can be great for facilitating conversation, e.g., for panel discussions or to answer discussion questions. But gauge your audience: will breaking out into smaller rooms facilitate more conversation, or will it dilute the pool of participants likely to actively participate?
6. Say good-bye to weekends. Often, conferences may be held on a weekend due to availability of parking, hotels, and conference rooms. But with the internet, availability is endless. Please organizers, leave my weekends alone.
Although we complain about Zoom life, I love it! I can easily meet and talk with people around the world. COVID-19 brought about terrible hardship, but at least it acclimated us all to the lovely world of video-chatting.
When we think of the hard-working scientist, we think of someone who enters the lab early in the morning. They work through the day, using multiple cups of coffee to keep their energy up. Late at night, they can be seen writing on a whiteboard, making the prime discoveries in their field.
Scientists have come to romanticize workaholism. We believe that the person who works the longest hours and sacrifices the most for their work will be the most successful. This idea comes from "grind culture" or "hustle culture."
If we believe in the "grind culture," we believe a lie. The person who works the longest hours is really the person losing out on the enjoyable things in life.
Think about the last time you put in a long day. At the end of that day, did you think, "Oh, I want to keep doing this forever"? Probably not. Instead, you probably thought about how much you want to go home and how you do not want to come in tomorrow.
As scientists, we can truly love and enjoy our work, but too much of anything can be a bad thing. The grind culture leads scientists to burnout, neglect self-care, and actually become less productive.
When I started graduate school, I believed this lie. I tried to devote nearly every waking hour to my work in order to be successful. At the end of every week, I wouldn't get out of bed on Saturday until the afternoon. Even if I woke up around 10 a.m., I would just lie there questioning what was wrong with me. Once my partner finally convinced me to get up, I would eat and then start doing more work.
Not only did I waste a large amount of my time, but I also dealt with very high anxiety and depression through this time. Any moment that I wasn't working, I felt guilty. At my core, I believed that not working was an expression that I wasn't serious about my work.
In reality, that is a fairly disturbing notion.
After seeking therapy, I realized that my long hours and constant work are not what made me successful. What made me successful was my determination, problem-solving skills, and ability to develop ideas. Yet, burnout decreases all of these abilities that lead to success.
How to Really be Successful
Therefore, instead of working more hours, you should focus on becoming more efficient in your work. We are all inefficient in our work. In fact, a recent study found that typical workers put in less than 3 hours of actual work during an 8-hour workday.
Think about your regular day. How much of your time did you truly spend working on things that move your science forward?
On a typical workday, I spend time socializing with my colleagues, checking out my social media, watching shows or YouTube, and staring at my screen, not wanting to do work. Yet, I would be at work for over 10 hours, saying I worked 10 hours that day.
If you give in to grind culture and think you should work all the time, then you lose your motivation to do work. If completing work doesn't allow you to leave work sooner, what motivation do you have to complete work?
There are two principles that can help you become more productive by working less: Pareto's principle and Parkinson's law.
Pareto's principle states that 80% of your success comes from 20% of your effort. Therefore, if you think about your typical workday, only about 20% of your time is creating 80% of your success in science.
Parkinson's law states that work will expand to fill the time that it is allotted, meaning that if you give yourself 10 hours in a day to complete a task, it will likely take all 10 hours, even if it only really requires 2 hours of work.
If you apply both of these principles to your approach to work, you can work less and accomplish more by becoming a more efficient worker.
While you may be on board with becoming an efficient worker, you may still wonder how to become more efficient. So let's go step-by-step through a system that I created for myself, which has proven to make me more productive while decreasing burnout.
Set Work Hours
The very first step of becoming efficient is to set your work hours. You should set your work hours based on your lifestyle and work requirements.
Do you want to work 8 hours and be off in time to make an exercise class? Then set your work hours to complete your workday in time for your class.
However, you also need to take into account the needs of your work. What times do you have meetings? When does your boss or supervisor expect you to be around?
The benefit of setting your work hours is that you are already combating Parkinson's law. You now have fewer overall hours that work can expand to fill. Additionally, you can regain motivation because you know that you need to finish your task by a specific time so that you can leave work accomplished.
Make To-Do and Not-To-Do Lists
Once you have your work hours, you need to concentrate your efforts on the things that are bringing you success in your science. The best way to focus is to create to-do lists and not-to-do lists.
First, think about all of the things that you genuinely need to do to make progress. If you think you need to do everything, ask yourself, "If I could only work 2 hours a day, what would I do?" Suddenly, your brain will flood with the most important things that need to be done for you to be productive in science. Write these things down to make your to-do list.
Now, make a list of at least three things that you do that waste your time, such as tasks that make you feel productive but don't result in actual progress. For many graduate students, I believe that reading scientific papers for the sake of reading them should be on your list. Reading papers should be done for a specific reason, not simply so that you can feel productive or say you read so many papers that week.
Personally, my not-to-do list includes checking my social media, checking my email, and watching shows during my day. Place your to-do and not-to-do lists somewhere where you can see them regularly.
Block Out Your Time
The third part of becoming more efficient is to block out your time. The essence of this idea is to prevent you from task switching multiple times and wasting time as you move from one task to another.
There are two ways that I like to block out my time. The first is to theme my days, and the second is to create time blocks.
If you have specific themes to your work, then it is nice to theme your days. For example, if you are a graduate student, you may have coursework, research, and teaching. On a day that you teach, make it a teaching day. Take the time during the day to grade assignments and plan for the next week's lesson. On a day that you attend multiple classes, take the free time you have to study and do homework. On days that you have research meetings or primarily free days, focus that time on research-related activities.
Themed days help you plan your day, keeping you focused and allowing you to make progress on one task all day long.
Time blocks allow you to work on a single task for 45-90 minutes. Maybe this task is a meeting, class, or writing a paper, but after your set work time, you have the margin to move from one task to another. The way I prefer to do this is 90 minutes of work with a 30-minute margin. However, depending on your schedule, a 45-minute block with a 15-minute margin may work better.
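To make this concrete, here is a minimal sketch in Python of how a themed day might be laid out in 90-minute blocks with 30-minute margins (the task names are hypothetical, and you could swap in 45/15-minute blocks just as easily):

```python
from datetime import datetime, timedelta

def build_time_blocks(start, tasks, work_minutes=90, margin_minutes=30):
    """Lay out one themed workday as work blocks separated by margins."""
    schedule = []
    current = datetime.strptime(start, "%H:%M")
    for task in tasks:
        end = current + timedelta(minutes=work_minutes)
        schedule.append(f"{current:%H:%M}-{end:%H:%M}  {task}")
        current = end + timedelta(minutes=margin_minutes)  # margin before the next block
    return schedule

# Example: a research-themed day (task names are made up for illustration)
for block in build_time_blocks("09:00", ["Write methods section",
                                         "Run and log experiments",
                                         "Analyze yesterday's data"]):
    print(block)
```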
Overall, the idea that you need to work longer hours to be successful is not only a lie but counterproductive. Instead, by increasing your motivation and the efficiency of your work, you can become more successful while maintaining your personal life. To become more efficient, I employ a three-step system: set your work hours, make to-do and not-to-do lists, and block out your time.
Operation Warp Speed, launched in early 2020, helped speed up the pace of vaccine innovation, turning the normally 10+ year clinical trials process into one that takes less than a year. To learn more about the vaccine development process, check out this post by Nidhi.
Although vaccine development has been significantly accelerated, it is essential to understand that vaccine development has not been rushed. In fact, despite operating in a public health emergency (the COVID-19 pandemic), vaccine research has been thriving. This is thanks to scientific collaboration, funding, and a quick and thorough review process, allowing scientists across the globe to develop the COVID-19 vaccines in under a year.
In this article, we will discuss Johnson & Johnson's COVID-19 vaccine. The Johnson & Johnson (J&J) vaccine will likely be approved by the US Food and Drug Administration (FDA) for use by late February or early March. The J&J vaccine is different from other COVID-19 vaccines in that it only requires one dose. As such, it may be the saving grace for the seemingly slow and clunky vaccination rollout in various countries, including the United States.
Why might the J&J vaccine be the pandemic's saving grace?
The J&J vaccine may be the next one to receive approval -- i.e., after the Moderna vaccine and the Pfizer vaccine. There are various advantages and disadvantages to using this vaccine.
The biggest drawback of the J&J vaccine is that it has lower efficacy than the Moderna and Pfizer vaccines. To understand the science of vaccine efficacy better, check out Sheeva's post. More specifically, the J&J vaccine has an efficacy of 72% in the United States, 66% in Latin America, and 57% in South Africa. By contrast, Moderna's COVID-19 vaccine has an efficacy of 94.5%, and the Pfizer COVID-19 vaccine has an efficacy of 95%.
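As a side note on where percentages like these come from: vaccine efficacy in a trial is generally the relative reduction in disease risk between the vaccinated and placebo groups. Here is a minimal sketch of that general formula in Python; the case counts are invented for illustration and are not the actual J&J trial data:

```python
def vaccine_efficacy(cases_vaccinated, n_vaccinated, cases_placebo, n_placebo):
    """Efficacy = 1 - (attack rate in vaccinated group / attack rate in placebo group)."""
    attack_rate_vaccinated = cases_vaccinated / n_vaccinated
    attack_rate_placebo = cases_placebo / n_placebo
    return 1 - attack_rate_vaccinated / attack_rate_placebo

# Invented numbers purely to illustrate the formula (not real trial results):
# 30 cases among 10,000 vaccinated vs. 100 cases among 10,000 placebo recipients
print(f"{vaccine_efficacy(30, 10_000, 100, 10_000):.0%}")  # 70%
```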
Despite the lower efficacy rate, the J&J vaccine remains quite promising. The vaccine only requires a single dose, significantly simplifying the logistics for local health departments and clinics. Additionally, the vaccine is stable in a refrigerator (36°F to 46°F, or 2°C to 8°C) for several months. By contrast, the Moderna and Pfizer vaccines require freezing at significantly lower temperatures of -4°F (-20°C) and -94°F (-70°C), respectively.
Johnson & Johnson's COVID-19 vaccine and the AdVac technology
The Johnson & Johnson vaccine in development (which is now seeking FDA approval in the United States) goes by two names -- JNJ-78436735 or Ad26.CoV2-S. The vaccine is developed by J&J's pharmaceutical arm, Janssen, using Johnson & Johnson's AdVac technology.
According to the Janssen website, AdVac technology is "based on development and production of adenovirus vectors (gene carriers)." The AdVac technology enables effective development of an adenovirus-based vaccine in response to emerging diseases, such as COVID-19, in a cost-effective and large-scale manner.
What is an adenovirus-based vaccine?
To explain what an adenovirus-based vaccine is, we first have to talk about the basics of viral vector vaccines. The Oxford/AstraZeneca and J&J vaccines are both viral vector immunizations, meaning that a harmless, non-infectious virus is used as a shuttle to deliver a piece of the target virus's genetic material into our bodies.
Think of a viral vector vaccine as a "cut-and-paste" vaccine. Parts of one virus are cut and pasted into another to create a viral vector vaccine. An adenovirus-based viral vaccine uses part of an adenovirus as a shell, and a gene encoding part of another virus (such as the novel coronavirus) is shoved into that shell.
In both the Oxford and J&J viral vector vaccines, the gene encoding the coronavirus spike protein is pasted into a "hollow" shell of an adenovirus. J&J specifically uses an adenovirus strain named adenovirus 26 (Ad26). When the vaccine (i.e., the Ad26 shell carrying the spike protein gene) is administered, it invokes an immune response in the body. (Learn more about the spike protein here.)
After vaccination, our bodies can respond to the virus more effectively, reducing the risk of disease. This happens through the quick and effective recruitment of immune cells and antibodies that prevent the virus from causing COVID-19. To learn more about J&J's COVID-19 vaccine, check out Nidhi's post here. You can also learn more about the other top COVID-19 vaccines here.
Johnson & Johnson's FDA Emergency Use Authorization
On February 24, 2021, the FDA stated that J&J's single-shot COVID-19 vaccine would receive formal Emergency Use Authorization (EUA). The EUA is based on the efficacy and safety data from the Phase 3 trials. The J&J vaccine will be a pivotal step toward ending the pandemic, thanks to its single-dose regimen and its requirement for normal refrigeration rather than super-cold storage.