Now that the academic semester has ended and the gift-giving holiday (Christmas Day) has passed, the New Year is upon us. Times like these call for a bit of reflection on the following questions:
Where am I in my life? Where am I headed?
If you are a student, the answers most likely follow from your current classes, which in turn build on the successes of the classes you have already completed. This is a normal process for each student during their academic journey. Unfortunately, there exist "outliers" who never learn to perform this introspection at this time of year. That last statement raises the following questions:
Who are these people? Where do they end up in life?
I do not pretend to have all of the answers. I can, however, supply an example from the recent news that sheds light on the last question -- where these people end up. Below is the example.
Own Your Failure
One of the critical lessons for anyone to learn in life is to 'own your own failure.' That is, instead of blaming others for your failure, take ownership of it and move on toward success. I imagine a few readers might be thinking: easier said than done! Yes, in some cases that is true. Still, the daily practice of ownership is important and could serve each of us quite well. The example in question is a lawsuit brought by a graduate, Mr. Siddiqui, against his former university over the degree grade he received. Here is an excerpt from a news article covering the case:
Siddiqui, 38, who trained as a solicitor after university, says his life and career have been blighted by his failure to obtain a first when he graduated in June 2000. He said he underachieved in a course on Indian imperial history during his degree because of “negligent” teaching which pulled down his overall grade.
More specifically, he claims that the poor result occurred because of the university's failure to properly staff the course. The specifics are fuzzy, but here is another excerpt from the article:
Siddiqui has said the standard of tuition he received from Dr David Washbrook declined as a result of the “intolerable” pressure the historian was placed under. In the academic year 1999-2000, four of the seven faculty staff were on sabbaticals and the court heard from Siddiqui’s barrister that it was a “clear and undisputed fact” that the university knew of the situation in advance. He told the judge that of the 15 students who received the same teaching and sat the same exam as Siddiqui, 13 received their “lowest or joint lowest mark” in the subject.
Mallalieu told the court: “This is a large percentage who got their lowest mark in the specialist subject papers. There is a statistical anomaly that matches our case that there was a specific problem with the teaching in this year having a knock-on effect on the performance of students.” He added: “The standard of teaching was objectively unacceptable.”
Mr. Siddiqui is only one of several students who received a low mark on that exam and in that course. Where are the other disgruntled students? Further, the problem centers on Mr. Siddiqui's unwillingness to accept the 'contract' that each student agrees to when enrolling in a course at a university.
In a blog post that I wrote for another site (a professional one -- LinkedIn), I highlighted at the outset that each student enters into an informal contract with the university when enrolling in a course. The contract centers on the following two principles:
1) Students agree to follow the university rules (attendance, assignments, etc.).
2) Faculty and Staff agree to uphold their part and provide a quality education to each student.
As I stated in that blog: Do we live in a perfect world? No! But each of us needs to do our agreed-upon part in the educational process. Students tend to forget that they hired the university to teach them a certain skill set. That is an agreement -- not a 'pay and take all' process (meaning I pay and receive the degree with no work). School is not easy.
The largest problem with blaming teachers in the educational process is that the data speaks volumes in the teachers' favor. What do I mean by this? Not every teacher is wonderful. But most teachers have a track record with a large number of students over the course of many years. How often do students return decades later and thank their instructor for teaching them properly? No more needs to be said about placing the blame on the teacher. Let's focus on the student in this case.
The obvious fallout from such a lawsuit is the 'flood gates' it could open for future lawsuits that are similarly flawed and based on a failure (on the student's part) to take ownership. Additionally, given that low marks were also received by 13 other students, a ruling in his favor would require answers to the following questions:
1) How was he professionally impacted by the low grade?
2) Why did he wait such a long time to bring the lawsuit against the university?
3) Why have other students not stepped up and joined the lawsuit?
4) Why have other students not spoken out about the low grades?
5) How can a court prove that there is a link between a grade and professional failure?
6) How does the court rule out psychological problems at play in the lawsuit?
I was fascinated to read that no other students have spoken out about the incident. I imagine each of them has moved on and chalked the low grade up to a 'bump in the road.' Regardless, the length of time between the course and the lawsuit is extremely suspicious, among other factors in the case. In time, more might be revealed about Mr. Siddiqui.
Each of us needs to take ownership of both our progress and our failures.
Conclusion...
What is disappointing about the article and the lawsuit is that the student has grown up into a professional who has not learned to take ownership of his failures. That is extremely sad. Imagine what his life is like: living day to day with a 'bad grade' hanging over his head and affecting his current progress.
There will be many failures along our professional development. The time is now to accept them and move on. As I constantly tell people who dwell on the past to such a large degree -- think of the process as driving a car. Ask yourself the following question:
Would I drive forward while looking in the 'rear-view' mirror?
Of course not! You would hit some object (a car, a person, etc.) by focusing on the past instead of looking forward. Each of us should do the same with our successes and failures. Move forward, accepting the past.
How do chemists discover new drugs? Obviously, in the laboratory! Is that all one can say about the process? Certainly not. There is a process by which discovery happens, and it may vary depending on the laboratory a chemist works in. It does not vary so greatly, however, as to eliminate a general route that a drug takes from the laboratory to the marketplace. In the blog below, I introduce the general process by which drug discovery proceeds. I want to highlight the word "introduce," since the process can be described in different ways depending on your level of understanding.
Drug Discovery - General Route
I recently stumbled upon a video made by the National Institute of Allergy and Infectious Diseases (NIAID) titled "How A Drug Becomes A Drug," which I will show below in a moment. Before I emphasize the importance of viewing the short video (less than 4 minutes), I want to introduce NIAID -- a sub-agency of the National Institutes of Health. Here is an excerpt describing the organization, taken from the Wikipedia page for NIAID:
The National Institute of Allergy and Infectious Diseases (NIAID) is one of the 27 institutes and centers that make up the National Institutes of Health (NIH), an agency of the United States Department of Health and Human Services (HHS). NIAID's mission is to conduct basic and applied research to better understand, treat, and prevent infectious, immunologic, and allergic diseases.[1]
NIAID has "intramural" (in-house) laboratories in Maryland and Montana, and funds research conducted by scientists at institutions in the United States and throughout the world. NIAID also works closely with partners in academia, industry, government, and non-governmental organizations in multifaceted and multidisciplinary efforts to address emerging health challenges such as the pandemic H1N1/09 virus.
The three main mission areas can be summarized from the "Wikipedia" page as follows:
Human Immunodeficiency Virus/Acquired Immunodeficiency Syndrome (HIV/AIDS)
The goals in this area are finding a cure for HIV-infected individuals; developing preventive strategies, including vaccines and treatment as prevention; developing therapeutic strategies for preventing and treating co-infections such as TB and hepatitis C in HIV-infected individuals; and addressing the long-term consequences of HIV treatment.
Biodefense and Emerging Infectious Diseases (BioD)
The goals of this mission area are to better understand how these deliberately emerging (i.e., intentionally caused) and naturally emerging infectious agents cause disease and how the immune system responds to them.
Infectious and Immunologic Diseases (IID)
The goal of this mission area is to understand how aberrant responses of the immune system play a critical role in the development of immune-related disorders such as asthma, allergies, autoimmune diseases, and transplant rejection. This research helps improve the understanding of how the immune system functions when it is healthy or unhealthy and provides the basis for development of new diagnostic tools and interventions for immune-related diseases.
The above mission covers a vast swath of diseases, known and unknown. The National Institutes of Health is a huge organization made up of sub-agencies like NIAID that divide up the mission. As such, NIAID oversees the funding of drug research to a large extent in order to understand how infectious diseases compromise the immune system -- and the body at large. Additionally, NIAID is interested in how drug discovery overcomes infectious diseases that have invaded our bodies. This includes the research behind the disease at the academic level.
I mentioned above a short video to highlight the general process of drug discovery from the academic level up all the way through to the consumer level -- i.e. the pharmacy. Here is the video below -- which is worth watching:
In the video above, the research is said to start at the basic science level at the university. This is true to an extent: basic research into disease function and origin typically starts at the university level. I would add, though, that a fair amount of research is also done in industry by large drug companies. That research is typically targeted at a specific disease whose pathway of progression or origin is known. I will explain more about that last sentence shortly.
The drug companies take the research done at the academic (university) level and carry the "small molecule" or "drug target" forward into an actual therapeutic sold on the pharmacy shelf.
Why is this important to know?
Periodically, stories emerge in the popular news about the overpricing of medication by companies like Turing Pharmaceuticals (outrageous pricing), which makes people wonder why such high prices exist for a given medication. These instances of overpricing are the exception compared to the price point genuinely needed to turn a profit and move on to researching more effective drugs. The point is that research at these companies takes time and money, along with infrastructure.
The overall benefit of such research could be realized through an "open-access" network of drug targets and therapeutics (proprietary information at the moment) that other researchers could access at their leisure. The argument for such a system is that the funding has been provided by a government agency, whereas the argument against it is the loss of proprietary information. Tough call. Sorry for the digression.
The goal of research is to find effective therapeutics (drugs) that treat a large part of the population. Side effects come about as a result of off-target delivery: the drug misses the intended target, or hits additional targets and causes extra problems. This is where the concept of "personalized medicine" comes in, which will be discussed in future blog posts. For now, let's focus on designing drugs for a certain disease.
Drug Design 101
In order to design a drug to treat a certain disease or ailment, the pathology and origin of the disease need to be known. How did the disease originate in the body?
Is the disease the result of a mutation in the genetic make-up of the person?
Is there a mutation in the DNA of the patient which causes a downstream mutation in the production of proteins?
Is the protein distorted in shape or contour in a way that affects its function?
Is the disease caused by an external agent (i.e. virus or bacteria)?
These problems can plague researchers greatly for years. Luckily, over time, drug companies have built up libraries of "molecules" that serve as "messengers" or "therapeutics" able to hit a specific target involved in the disease process. Here is an excerpt from the Wikipedia page for "drug design" which I think will help you understand the process at the research level in either the university or industry setting:
Drug design, often referred to as rational drug design or simply rational design, is the inventive process of finding new medications based on the knowledge of a biological target.[1] The drug is most commonly an organic small molecule that activates or inhibits the function of a biomolecule such as a protein, which in turn results in a therapeutic benefit to the patient. In the most basic sense, drug design involves the design of molecules that are complementary in shape and charge to the biomolecular target with which they interact and therefore will bind to it. Drug design frequently but not necessarily relies on computer modeling techniques.[2] This type of modeling is sometimes referred to as computer-aided drug design. Finally, drug design that relies on the knowledge of the three-dimensional structure of the biomolecular target is known as structure-based drug design.[2] In addition to small molecules, biopharmaceuticals and especially therapeutic antibodies are an increasingly important class of drugs and computational methods for improving the affinity, selectivity, and stability of these protein-based therapeutics have also been developed.[3]
The phrase "drug design" is to some extent a misnomer. A more accurate term is ligand design (i.e., design of a molecule that will bind tightly to its target).[4] Although design techniques for prediction of binding affinity are reasonably successful, there are many other properties, such as bioavailability, metabolic half-life, side effects, etc., that first must be optimized before a ligand can become a safe and efficacious drug. These other characteristics are often difficult to predict with rational design techniques. Nevertheless, due to high attrition rates, especially during clinical phases of drug development, more attention is being focused early in the drug design process on selecting candidate drugs whose physicochemical properties are predicted to result in fewer complications during development and hence more likely to lead to an approved, marketed drug.[5] Furthermore, in vitro experiments complemented with computation methods are increasingly used in early drug discovery to select compounds with more favorable ADME (absorption, distribution, metabolism, and excretion) and toxicological profiles.[6]
The drug designer is looking for a "biological target" that is involved in either the origin or the progression of the disease. As mentioned above, there are two popular approaches: computer-aided drug design and structure-based drug design. Both rely on information about the biological target of interest.
What might a biological target look like?
A biological target can vary in definition depending on the nature of the disease. For instance, if the disease involves the distortion or mutation of a protein, then the surface of that protein would be considered the biological target. Specifically, the site where a drug is intended to interact is referred to as the "active site." Here is an image taken from the Wikipedia page for "Active Site" to help the reader picture what that might look like:
Source: Thomas Shafee - Own work, CC
As you can see, the protein appears as a "blob" in the image above. The reason is to emphasize the "binding site" or "catalytic site" rather than the overall structure, which is of little concern to the drug designer. Remember that proteins are large "macromolecules" built up from amino acids, much as the oligosaccharides I wrote about in an earlier blog are built up from simple sugars.
Starting from the picture above, the video below might now make sense to watch before we proceed with our discussion of drug design. The video is titled "A Basic Introduction to Drugs, Drug Targets, and Molecular Interactions," runs just over 4 minutes, and is definitely worth watching.
The video above is more technical than some readers might want in order to understand the process. Therefore, we should back off the "technical side" a little and focus on the "development" side of drug development -- from a simplistic standpoint.
Are you ready to understand drug design from a simple standpoint?
Alright, here we go!
In order to do so, I decided to borrow a few slides from a recent webinar offered online by the American Chemical Society. The webinar was titled "Crystallography As A Drug Design And Delivery Tool" and was given by Dr. Vincent Stoll of AbbVie, where he serves as Director of Structural Biology.
One of the examples Dr. Stoll used to discuss drug design was binding to the transmembrane protein B-cell lymphoma-extra large, or Bcl-xL, in the mitochondria. In his talk, he focused on a few binding sites, shown in the slide below:
Specifically, in this case the company wanted to design a drug candidate that would "mimic" the binding of the peptide Bak. Shown on the right of the slide are the sites, or "active sites," where the peptide Bak binds to the transmembrane protein Bcl-xL. In order to mimic the binding of the peptide, a drug would have to be able to bind to multiple sites on the transmembrane protein.
Fortunately, over time, large drug companies have built up a database, or library, of 'molecules' that bind to similar or identical sites. In the slide below, taken from Dr. Stoll's talk, I show a yellow surface with two molecules hovering above it -- slightly bound:
There is a lot of information on the slide shown above. Let me walk you through what is relevant for drug design 101. First, I mentioned that each drug company keeps a library or database of 'fragments' intended to hit specific targets on biological surfaces. These biological surfaces can be pictured like the surface shown on the right of the slide. They might be a protein surface, or another biological surface of interest to drug manufacturers.
Of the two molecules shown on the yellow surface above, one is brown while the other is green. The different colors illustrate that the molecules are fragments designed to hit different targets, or active sites, on a biological surface. In this case, the biological surface is the transmembrane protein Bcl-xL.
Once fragments have been identified that will occupy the desired targets or active sites, the challenge is to link the fragments together through chemistry. This step, in and of itself, is often challenging and does not guarantee that the newly formed molecule (two fragments joined by a linker) will work. Fortunately, as the picture above shows, there are linker molecules within the pharmaceutical database that have been shown to work in other cases.
After linking the two fragments together, the next step is to verify by spectroscopy that the linked molecule actually works.
How is this accomplished?
In the lab, a drop of the linked molecule is placed onto the substrate, or biological surface. The surface, which should now have the drug bound to its active sites, is then investigated using a spectroscopic technique such as nuclear magnetic resonance (NMR) spectroscopy. Upon confirmation, a number is reported, as shown on the slide, indicating the binding affinity of the molecule for the surface:
In the slide above, a couple of numbers are reported that make sense to drug designers but probably not to you, the reader. Do not worry: over time (through other blog posts) you will come to understand their meaning. What is important to understand at this point is that after linking molecular fragments together, an experiment is run to determine whether the linked molecule is at least as effective as the fragments are alone.
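To give a rough sense of what such a binding-affinity number means, here is a minimal sketch in Python (with made-up, illustrative values -- not the numbers from Dr. Stoll's slide) of how a dissociation constant, Kd, translates into occupancy of the target:

```python
# A minimal sketch: the simple 1:1 binding model, f = [L] / (Kd + [L]),
# where f is the fraction of target sites occupied at free ligand
# concentration [L]. A smaller Kd means a tighter binder.
def fraction_bound(ligand_nM: float, kd_nM: float) -> float:
    return ligand_nM / (kd_nM + ligand_nM)

# Illustrative values only: tight, moderate, and weak binders at 100 nM drug.
for kd in (1.0, 100.0, 10_000.0):
    print(f"Kd = {kd:>8.1f} nM -> occupancy: {fraction_bound(100.0, kd):.0%}")
```

Roughly speaking, the fragment-linking exercise is a success when the linked molecule binds far more tightly (a far smaller Kd) than either fragment alone.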
Furthermore, the pharmaceutical company might understand the chemistry of the active site well enough to modify the linked molecule further and make a more "potent" drug. On the slide below, taken from Dr. Stoll's talk, I show such a modification:
Again, the overall take-home message is that the molecular modification made to the linked molecule has some effect.
Is that effect better or worse?
Can there be a further modification to the linked molecule -- now a drug -- to enhance its ability to mimic the peptide binding?
Who knows? That is why research is continuously pushed forward -- and why it costs money to find out.
Conclusion...
In the paragraphs above, my intention was to briefly introduce the process of drug design. Even as we speak, though, parts of the process are changing. Outsourcing of linker molecules is occurring, as are mergers and acquisitions of large companies by even larger pharmaceutical companies -- which potentially means that the shared databases, or libraries, of available drug targets are growing. The process is dynamic yet slow at the same time.
Discovering the mechanisms of disease -- and, as a result, cures -- is the dream of every drug designer. Progress is unfortunately slowed by the trial-and-error nature of the process. Research takes time and money to complete, and improvements to existing drugs take time as well. I will leave you with another short video about the progression of the medical research field:
There are many reasons why science outreach is critical in our world today. Outreach has always been important, but with the explosion of the internet and connected devices, along with the climate changes we are seeing, the need for it might be at an all-time high. Action needs to be taken with the help of an educated STEM (Science, Technology, Engineering, and Math) population rising up through the educational ranks as we speak. Even so, we still have a long way to go. Below are two examples of why we need to communicate science more effectively -- examples relevant to stories such as the spread of the 'Zika' virus in the United States today.
The adult form of attention-deficit/hyperactivity disorder has a prevalence of up to 5% and is the most severe long-term outcome of this common disorder. Family studies in clinical samples as well as twin studies suggest a familial liability and consequently different genes were investigated in association studies. Pharmacotherapy with methylphenidate (MPH) seems to be the first-line treatment of choice in adults with attention-deficit hyperactive disorder (ADHD) and some studies were conducted on the genes influencing the response to this drug. Finally some peripheral biomarkers were identified in ADHD adult patients. We believe this work is the first systematic review and meta-analysis of candidate gene association studies, pharmacogenetic and biochemical (metabolomics) studies performed in adults with ADHD to identify potential genetic, predictive and peripheral markers linked specifically to ADHD in adults. After screening 5129 records, we selected 87 studies of which 61 were available for candidate gene association studies, 5 for pharmacogenetics and 21 for biochemical studies. Of these, 15 genetic, 2 pharmacogenetic and 6 biochemical studies were included in the meta-analyses. We obtained an association between adult ADHD and the gene BAIAP2 (brain-specific angiogenesis inhibitor 1-associated protein 2), even after Bonferroni correction, with any heterogeneity in effect size and no publication bias. If we did not apply the Bonferroni correction, a trend was found for the carriers allele 9R of dopamine transporter SLC6A3 40 bp variable tandem repeat polymorphism (VNTR) and for 6/6 homozygotes of SLC6A3 30 bp VNTR. Negative results were obtained for the 9-6 haplotype, the dopamine receptor DRD4 48 bp VNTR, and the enzyme COMT SNP rs4680. Concerning pharmacogenetic studies, no association was found for the SLC6A3 40 bp and response to MPH with only two studies selected. For the metabolomics studies, no differences between ADHD adults and controls were found for salivary cortisol, whereas lower serum docosahexaenoic acid (DHA) levels were found in ADHD adults. This last association was significant even after Bonferroni correction and in absence of heterogeneity. Other polyunsaturated fatty acids (PUFAs) such as AA (arachidonic acid), EPA (eicosapentaenoic acid) and DyLA (dihomogammalinolenic acid) levels were not different between patients and controls. No publication biases were observed for these markers. Genes linked to dopaminergic, serotoninergic and noradrenergic signaling, metabolism (DBH, TPH1, TPH2, DDC, MAOA, MAOB, BCHE and TH), neurodevelopment (BDNF and others), the SNARE system and other forty genes/proteins related to different pathways were not meta-analyzed due to insufficient data. In conclusion, we found that there were not enough genetic, pharmacogenetic and biochemical studies of ADHD in adults and that more investigations are needed. Moreover we confirmed a significant role of BAIAP2 and DHA in the etiology of ADHD exclusively in adults. Future research should be focused on the replication of these findings and to assess their specificity for ADHD.
Clozapine is a unique compound that is particularly effective for treatment-resistant schizophrenia (TRS). The use of clozapine is limited, however, due to the 0.8% risk of agranulocytosis,1 which necessitates a strict monitoring of neutrophil counts to detect early neutropenia and prevent progression to agranulocytosis.
First and foremost, I must admit that one of these is an article -- specifically a 'review' -- while the other is a "news and commentary" piece, which means the formats are naturally quite different. Still, the two abstracts read extremely differently. Why?
What if I show you an abstract from a different journal?
Fluorescence microscopy is an essential tool for the exploration of cell growth, division, transcription and translation in eukaryotes and prokaryotes alike. Despite the rapid development of techniques to study bacteria, the size of these organisms (1–10 μm) and their robust and largely impenetrable cell envelope present major challenges in imaging experiments. Fusion-based strategies, such as attachment of the protein of interest to a fluorescent protein or epitope tag, are by far the most common means for examining protein localization and expression in prokaryotes. While valuable, the use of genetically encoded tags can result in mislocalization or altered activity of the desired protein, does not provide a readout of the catalytic state of enzymes and cannot enable visualization of many other important cellular components, such as peptidoglycan, lipids, nucleic acids or glycans. Here, we highlight the use of biomolecule-specific small-molecule probes for imaging in bacteria.
I think these abstracts illustrate the point. Right about now, you the reader might be thinking the following about them:
What do those abstracts mean?
What science is being done?
Why are the words and sentences so complicated?
Am I right? Were you thinking any of the three questions above? I know I would be -- especially if I had very little science background to serve as a starting point when reading them.
Science Communication Should Be Simple
In a recent TED talk by Tyler DeWitt titled "Hey Science Teachers -- Make It Fun," the problem with communicating science is discussed in a simple and elegant manner. Tyler is a graduate student at MIT.
Below are two avenues by which a virus can infect a cell. Given that the 'Zika' virus is spreading among the United States population, the stories below are completely relevant to current items in the popular news. I have paraphrased Tyler DeWitt's talk and used still images from it below.
Story #1 goes as follows:
The story starts off with a happy little bacterium occupying a medium -- say, your stomach. Over time the bacterium starts to feel unwell, as depicted in the slide below:
While pondering the many possible causes of the feeling, he looks down and notices the culprit -- a virus emerging from his body, as shown below:
With time, the situation gets much worse as viruses keep pouring out of his body, as pictured below:
Now there are two different viewpoints from which to describe what is occurring. From the standpoint of the bacterium, the situation is worsening exponentially with time as it hosts an army of viruses. Meanwhile, each little virus is thinking the following:
"We rock!" From the viruses viewpoint, the first virus managed to get into the host and successfully propagate -- evolve. In order to complete the mission of evolving a number of complex steps had to occur for survival to happen. Lets review them from the standpoint of the virus.
First, the virus had to slip a copy of its DNA into the bacterium as shown below:
In order for the virus to proceed to copy its DNA, the virus had to destroy the DNA of the bacterium as shown below:
After the virus gains control by destroying the bacterial DNA, the bacterium will now propagate (copy) only the DNA of the virus. The bacterium serves as a host factory, producing multiple copies of the virus as shown below:
The bacterium will continue to make copies of the virus, since the 'blueprint' has been swapped from the bacterium's DNA to the virus's DNA. Manufacturing will not stop until the bacterium bursts from holding 'too many copies' of the virus, as shown below:
The above steps illustrate one avenue by which viruses "infect" bacteria and take them over as hosts.
Is there an alternate way for the virus to invade a bacterium and take it over as a host?
Yes! There is -- which is outlined below:
The virus starts out as a "secret agent" as shown below:
The virus has the ability to secretly insert its DNA into the DNA of the bacterium, as shown. The insertion process causes no damage the way the first avenue of replication did in the example above; the DNA appears to be normal inside the bacterium, as shown below:
As mentioned, the secret agent is able to insert his DNA into the bacterium, which is unaware of the insertion and lives life normally. Over time, the bacterium reproduces/replicates itself and makes many copies of the "inserted DNA," which has remained silent, as shown below:
The silent/inactive inserted DNA goes unrecognized by the bacterium until a "signal" is sent among the bacteria, at which point the virus DNA pieces pop out and take control of the bacterium -- also shown above. After the virus DNA has taken over, the replication process of the virus occurs as shown below:
The bacteria have been turned into virus-making factories. Copies of the virus are produced in each bacterium until the bacterium bursts and releases all of them, as shown below:
And with that, the viruses have won by dominating and replicating through the bacteria. Shown in the slides (still pictures taken from Tyler DeWitt's TED talk) are two different stories.
These two stories represent the two pathways by which a virus can attack cells!
On the left-hand side of the picture above is the first pathway (the lytic pathway), where the virus inserts its DNA and takes control of the cell (bacterium) immediately. In the second pathway (the lysogenic pathway), the virus inserts its DNA and that DNA stays dormant until a signal is sent.
Was that hard to explain and comprehend or what? All science should be that simple - right?
Virtually everyone who has graduated high school has been exposed to these two pathways in biology class. The difference is in the presentation of the material. The presentation that each of us experienced was most likely far more serious than the cartoon story above, and it certainly did not use cartoon characters like those presented here.
Why not?
The field of science suffers from a "seriousness" problem. Which is to say, scientists -- and the way science is portrayed -- are too serious. Science is meant to be fun too. You can have fun doing science. I do it every day!
In the next section, before concluding, I will tie the first two sections together: namely, the seriousness of science -- which is a downfall -- and the language that is used. Language seems to be the number one 'turn off' for students entering the various fields of science.
Science Should Be Simplified!
Many of the scientists I know believe that making science simple is simply impossible. That belief centers on the idea that "dumbing down" science devalues the field. It could not be further from the truth. Let me explain why with more slides from Tyler DeWitt's TED talk. There is no need to reinvent the wheel.
In his TED talk, Tyler told the two stories about the two possible avenues by which a virus can infect a cell. He used cartoons and very creative imagery along with simple words, right? Everything he said was easily digestible -- at least for me.
Textbooks often complicate explanations of science, as do professional publications (i.e., journal articles, as shown above). Why do these publications use such complicated language to illustrate a point? Because that is the way the system is designed -- and that needs to change.
In the example given in the TED talk, the simple explanation might be something like: viruses make copies of themselves by slipping their DNA into a bacterium. How would this look in the formal language of a textbook? Here is an example -- a slide from the TED talk with the informal explanation on top and the formal explanation below:
Wow. The two descriptions look completely different. The first (top) is one that I can relate to and would love to read, whereas the second (bottom) is a complete 'turn off' and might very well put me to sleep. Here is where the majority of people's attention diverges: when the practitioners of science transition from the top description to the bottom one, a large percentage of the audience drops off too.
Why does this transition occur in descriptions?
Because, in the simple description, not every word is accurate. After going through and editing the statement for accuracy -- 100% accuracy, the statement would look like the one shown below with corrections:
And this raises a question for science: can we describe science with slightly inaccurate descriptions? I would argue that the answer is yes. Why? Because a majority of undergraduate education uses "toy examples" to illustrate concepts and theories. For example, in the undergraduate curriculum students learn about "ideal gases" and the ideal gas law. The assumption is that gas molecules are "point particles" that do not "interact." What does this mean?
Throughout the undergraduate degree process -- at least in chemistry -- students run calculations using the "ideal gas equation" to arrive at relations between chemical compounds. Real gases do not behave ideally, and correction coefficients are added to the equation to account for the non-ideal behavior, which arises when gas molecules interact with one another or temporarily "stick together" during a collision. The full behavior is extremely complex and cannot be simulated exactly even with the great (yet still limited) computational power that society possesses today. See? This is why simplification can work.
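As a concrete illustration, here is the "toy" equation next to one standard correction. The ideal gas law treats gas molecules as non-interacting point particles, while the van der Waals equation adds two empirical coefficients, $a$ (intermolecular attraction) and $b$ (finite molecular volume):

$$PV = nRT \qquad\qquad \left(P + \frac{an^2}{V^2}\right)\left(V - nb\right) = nRT$$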
Conclusion...Science Should Be Made Simple!
In order to capture the interest of the widest audience for science, the work has to be made simple. A few professors worry about whom the simplification process might attract to science. Who cares if that happens? Would we not want the best minds tackling the problems of society? Yes, we would. Science is meant to be fun -- not just serious.
With captivating and creative descriptions by enthusiastic scientists like Tyler DeWitt, we have a great opportunity to engage a wider audience in science. But if other scientists don't sign on to this line of thinking, the new avenue will run dry and we will be stuck with the same old 'broken' method of communicating science.
The world has changed over the decades. Why shouldn't communication evolve too, to attract the widest array of audience members? Technology is providing a whole new range of possibilities for the classroom to teach the message of science. Why would we want to stick with the same broken method -- one that has accomplished enough over the past few decades but is currently in need of an overhaul?
One of the overarching goals of this blog is to simplify science for everyone. If you have an idea that you would like explained on the blog site, please leave a comment. As you can see from the abstracts in section one, we have a long way to go. Let's all band together and demand a change to our system, toward a more creative and captivating educational system for all disciplines. Until next time, have a great day.
When your friends and family members realize that you're majoring in chemistry in college, you instantly become the "ambassador of chemistry." Maybe the motivation behind that is to help the person become the best chemist possible. The realization that I had a mind tuned to chemistry and physics came to me in high school -- at lunch time.
In the following paragraphs, I will explain how chemistry followed me into the military. Specifically, I will highlight two separate environments -- high school and the military -- to illustrate my point: your passions and interests constantly intersect your life. Do you believe me? If not, read more below. If so, read more below anyway.
When Did Chemistry Appeal To Me?
Growing up, my father would always talk to me about chemistry. Part of that is because he loved chemistry. He is a true academic in the sense that he could get lost in studying science. If he were taken hostage and locked up in a library, given the proper amount of food and clothing, he would live the remainder of his life as happy as ever. I remember when I was in junior high, he put a bumper sticker on his car that read "Honk If You Got an A In P-Chem." Who would have thought that two decades later I would become a "physical chemist"?
My first academic exposure to chemistry was somewhat "off the beaten path." I used to "ditch" classes quite a bit, and I missed a lot of high school one particular semester. As a result, I was given a punishment. First, I had to attend Saturday detention from 8 am to 12 pm. I remember my father proudly dropping me off; he was happy that I received a proper punishment for missing school. Additionally, I had to skip lunch and report to the classroom of the chemistry/physics teacher -- Mr. Barth, now Dr. Barth.
What seemed like a punishment then turned into a major part of my doctoral work a decade later. I was given the task of building (with a friend) a track of alternating bar magnets. The track was to be two magnets wide (around 4 inches) and around 6 feet long. In total, there were around 250 magnets that we had to glue down with alternating polarity (north to south). At this point, you might ask the following question:
What was the purpose of the experiment?
In short, the objective was to build a "magnetic levitation train" to measure the coefficient of friction. Before I answer the question in detail, a visual of the experimental setup will be useful. The completed setup looked like the following photograph of the "kit" that sells online today:
Source: www.rainbowresource.com
In the photograph above, there appears to be a block of wood that is floating. On either side of the track are plastic rails that hold the block of wood, or magnetic car, on the track. Back in the late 80s, our car was simply made out of cardboard with magnets glued onto the bottom. There is a fair (read: huge) amount of tedious work involved in building the track. That process, too, prepared me for research in the physical sciences.
The track was then used by elevating one side to form a "triangle" -- essentially the classic setup of a block sliding down a slanted surface. If the relevant forces are drawn in, the following diagram, taken from the Wikipedia page, emerges:
With a magnetically levitating car, where is the friction? The only source of friction (neglecting wind resistance) is the car (cardboard) rubbing up against the plastic rails of the track. By changing the angle of the track relative to the ground and measuring the time of travel, the coefficient of friction is easily determined, as sketched below. That was our challenge.
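For the curious, here is a minimal sketch in Python of the analysis (with assumed, illustrative numbers -- not our original 1980s measurements): release the car from rest, time it down the tilted track, extract the acceleration, and solve Newton's second law on an incline for the coefficient of friction:

```python
import math

# A minimal sketch, with assumed values: a car released from rest slides
# a distance L down an incline in time t, so its acceleration is a = 2L/t^2.
# On the incline, m*a = m*g*sin(theta) - mu*m*g*cos(theta), which gives
# mu = (g*sin(theta) - a) / (g*cos(theta)).
g = 9.81             # acceleration due to gravity, m/s^2
track_length = 1.8   # track length, m (about 6 feet) -- assumed
angle_deg = 10.0     # tilt of the track, degrees -- assumed
travel_time = 1.5    # timed run down the track, s -- assumed

theta = math.radians(angle_deg)
a = 2 * track_length / travel_time**2                   # from L = (1/2)*a*t^2
mu = (g * math.sin(theta) - a) / (g * math.cos(theta))
print(f"Coefficient of friction: {mu:.3f}")
```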
I say "our" because there was another gentlemen in the room assigned to the project. He did not miss school like me. In fact, he was a straight "A" student. He had a name -- Gil Vitug. We became and remain very good friends. At the time, he was more attracted to the physics side of life. Years later, we both graduated with our doctorate degrees (Ph.D.) from University of California at Riverside. He was working in Astrophysics (working at the Stanford Linear Accelerator) while I was working on developing instrumentation for Nuclear Magnetic Resonance experiments.
From that experience, both of us learned how to extract a large amount of information from a low-cost setup. Finding a way to measure a quantity with limited funding is extremely useful, especially as science funding becomes more difficult to receive. That was a valuable experience and served as a springboard from which we became "science ambassadors." Out of our school class, we were the two who went on to work in academia.
After high school, I entered college and majored in chemistry with the intention of becoming a surgeon. I wanted to end up in experimental medicine -- I even defined my own field by that name. Today, that desire would have translated into obtaining an M.D./Ph.D. degree and working in a government laboratory. I had no clue at the time. In fact, my father sat me down during my junior year of college and suggested that I look into graduate school in chemistry rather than medicine, based on my responses to his questions regarding experimental medicine. I was at the time, and remain, extremely grateful for that discussion.
Why did I diverge onto that tangent?
Out of those experiences came a love for chemistry. The experiences were not traditional for me. Late-night discussions with my father over topics such as dropping a penny into a bottle of beer spurred my interest in thinking about chemistry. I was not a good student in school, and I did not show up to class every day. Still, I was able to entertain concepts in science reasonably well. The concepts would be in my head.
What remained delinquent was the patience to sit down and study, along with the ability to explain the concepts contained in my head. Tackling that delinquency took up the better part of the next decade. With the help of certain individuals (like my father and Dr. Barth, along with Gil -- now Dr. Vitug) and a military sergeant, though, the path was easier. Each person challenged me to become a better person. Working on the shortcomings in my life has been a continuous challenge -- still to this day. Let me explain briefly how.
Chemistry In The Military?
How can a soldier study chemistry in the military? As I mentioned in a previous blog post, chemistry is all around us -- everything involves chemistry! What determines whether a soldier studies or utilizes chemistry is their job classification or rank. If an enlisted soldier decides to become an officer, he or she can return to college and major in a science, which could lead back to a military job that directly involves performing research.
The more probable situation, though, is to be assigned a job whose requirements have no direct connection to chemistry. As an enlisted soldier, the job will most likely entail no direct connection to research in the sciences; that is reserved for positions like an officer or a civilian employee.
I was assigned to work as an electrician on the F-16 fighter aircraft. That entailed working on the jet on the "flight line" along with working on parts in a "back shop" setting. What is the difference between the two? Working on the "flight line" involves removing electrical components (generators, rheostats, controllers, batteries, chargers, etc.) and environmental components (bleed air valves, air conditioning controllers, water separation units, etc.), along with repairing the associated wiring and ducting for those components.
This is different from working in the "back shop," or component repair shop, where the removed components themselves are repaired at the bench. The two types of work are very different but share the same overall mission: to keep aircraft in the air. With that being said, work that arrives in the "back shop" can come from any aircraft -- not just the F-16. Since our base (Shaw AFB, South Carolina) was a predominantly F-16 air base, most of the components that we repaired were from F-16 aircraft.
What does all this have to do with chemistry and being a chemistry ambassador?
When I first arrived at the base, my supervisor -- Master Sergeant Daniel Jonas -- asked me a series of questions, including whether I had any college or university experience. I answered yes: I had completed 4 years in chemistry before dropping out. He scolded me for dropping out and encouraged me to finish my degree in the military (and become an officer). He also sent me to the Middle East for 18 of the next 24 months -- due to my popularity (read: hard work ethic). Even though I did not get to go back to school while serving my country, I did get to demonstrate my knowledge of chemistry through an assignment -- an interesting and unusual occurrence in the military, especially for an enlisted soldier on a first tour of duty.
Master Sergeant Daniel Jonas was a curious man. In fact, he had an unquenchable thirst for information spanning all disciplines, from economics through the physical sciences. He was a very interesting person, to say the least. I have often wondered how I happen to run across people like him in my life -- I am extremely fortunate. My wife says I attract these people -- people who see my potential. Maybe she is correct.
Anyway, Msgt. Jonas recognized an issue with a battery and called on my chemistry skills to fix the problem. Specifically, he was concerned about two aspects of recharging (or reconditioning) the F-16 battery: first, the unusually large amount of waste generated in the process of charging the battery; second, the charging methodology, which degraded the lifetime of the battery -- nominally around 3 to 5 years. Let me explain the situation in the language of science.
Hazardous Waste Generation
The F-16 battery is a single unit (one case) that houses 24 cells that are linked together in "series." A picture of the battery is shown below:
Source: Public Domain
With the diagram of each "cell" shown below:
Source: By Ransu, Public Domain
In order to understand the problems that Msgt. Jonas recognized, the chemical reactions of the discharging and charging cycles of the battery need to be known. Shown below are the chemical reactions for the two cycles of the nickel-cadmium battery, taken from the patent webpage for the "battery charger":
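In standard textbook form, the half-reactions read left to right on discharge and right to left on charge -- note the hydroxide ions shuttling at both electrodes:

$$\text{Cd} + 2\,\text{OH}^- \rightarrow \text{Cd(OH)}_2 + 2e^- \qquad \text{(cadmium electrode)}$$

$$2\,\text{NiO(OH)} + 2\,\text{H}_2\text{O} + 2e^- \rightarrow 2\,\text{Ni(OH)}_2 + 2\,\text{OH}^- \qquad \text{(nickel electrode)}$$

$$\text{Cd} + 2\,\text{NiO(OH)} + 2\,\text{H}_2\text{O} \rightleftharpoons \text{Cd(OH)}_2 + 2\,\text{Ni(OH)}_2 \qquad \text{(overall)}$$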
Upon inspection of the chemical reactions, the hydroxide ions clearly play a critical role in the discharge/charge cycle over the life of the battery. The electrolyte solution must therefore contain a chemical that produces hydroxide ions upon dissociation; for the battery above, that chemical is potassium hydroxide dissolved in water. This is important for recognizing the problem that needed to be fixed in order to extend the life of the battery.
I was tasked with understanding the charging/discharging cycle of the battery, and with explaining the problem to the other members of the back shop working on the batteries. Before I go into that, the charging cycle needs to be understood. According to the Wikipedia page for the "Nickel-Cadmium Battery," the process proceeds in the following manner:
Vented cell (wet cell, flooded cell) NiCd batteries are used when large capacities and high discharge rates are required. Traditional NiCd batteries are of the sealed type, which means that charge gas is normally recombined and they release no gas unless severely overcharged or a fault develops. Unlike typical NiCd cells, which are sealed, vented cells have a vent or low pressure release valve that releases any generated oxygen and hydrogen gases when overcharged or discharged rapidly. Since the battery is not a pressure vessel, it is safer, weighs less, and has a simpler and more economical structure. This also means the battery is not normally damaged by excessive rates of overcharge, discharge or even negative charge.
They are used in aviation, rail and mass transit, backup power for telecoms, engine starting for backup turbines etc. Using vented cell NiCd batteries results in reduction in size, weight and maintenance requirements over other types of batteries. Vented cell NiCd batteries have long lives (up to 20 years or more, depending on type) and operate at extreme temperatures (from −40 to 70 °C).
A steel battery box contains the cells connected in series to gain the desired voltage (1.2 V per cell nominal). Cells are usually made of a light and durable polyamide (nylon), with multiple nickel-cadmium plates welded together for each electrode inside. A separator or liner made of silicone rubber acts as an insulator and a gas barrier between the electrodes. Cells are flooded with an electrolyte of 30% aqueous solution of potassium hydroxide (KOH). The specific gravity of the electrolyte does not indicate if the battery is discharged or fully charged but changes mainly with evaporation of water. The top of the cell contains a space for excess electrolyte and a pressure release vent. Large nickel plated copper studs and thick interconnecting links assure minimum effective series resistance for the battery.
The venting of gases means that the battery is either being discharged at a high rate or recharged at a higher than nominal rate. This also means the electrolyte lost during venting must be periodically replaced through routine maintenance. Depending on the charge–discharge cycles and type of battery this can mean a maintenance period of anything from a few months to a year.
Vented cell voltage rises rapidly at the end of charge allowing for very simple charger circuitry to be used. Typically a battery is constant current charged at 1 CA rate until all the cells have reached at least 1.55 V. Another charge cycle follows at 0.1 CA rate, again until all cells have reached 1.55 V. The charge is finished with an equalizing or top-up charge, typically for not less than 4 hours at 0.1 CA rate. The purpose of the over-charge is to expel as much (if not all) of the gases collected on the electrodes, hydrogen on the negative and oxygen on the positive, and some of these gases recombine to form water which in turn will raise the electrolyte level to its highest level after which it is safe to adjust the electrolyte levels. During the over-charge or top-up charge, the cell voltages will go beyond 1.6 V and then slowly start to drop. No cell should rise above 1.71 V (dry cell) or drop below 1.55 V (gas barrier broken).
The take-home point is that maintenance is involved in the discharging/charging process over the life of the battery. My supervisor wondered why the life of the battery was nowhere near the length quoted by the factory. This is where my job started, since I had a chemistry background and an interest in science.
To accommodate the expansion of the liquid volume during the charging cycle, each instrument had a "turkey baster" sitting next to it for easy removal of excess water. During the dynamic charging cycle, the cells would expand due to the hydrogen gas being liberated. The caps would be loosened and set beside the battery. Essentially, the battery sat on the table top, hooked up to the charger and "open" (vent caps removed) to the environment. Unknown to us at the time, that is where the problem lay the entire time -- the cells being open to the atmosphere. Why?
Source: www.rd.com
There were a couple of issues with the charging/discharging cycles that I started to mention above and which may be confusing. After the charging cycle, the "electrolyte" level might need to be adjusted (meaning removal or addition of water with the "turkey baster" device shown above), as discussed in the excerpt above.
The problem with this is the removal of the following: 1) the electrolyte mixture -- KOH and H2O (potassium hydroxide and water), and 2) electrode material (which had decomposed). Collecting these two chemicals and disposing of them safely (not down the drain) is required, which means the waste solution has to be kept in a "hazardous waste" container that is picked up each week by a disposal company. Each week, the shop would generate on the order of 55 gallons of "hazardous waste" -- mostly water, but with a little potassium hydroxide and electrode material (cadmium, nickel, etc.). As you might imagine, this was a huge motivation to determine how to extend the life of the battery.
The deeper problem with adding water or extracting electrolyte after charging was that the internal concentrations of all the components had changed. If the "turkey baster" was used to pull out water/KOH and electrode material, then over the course of the battery's life cycle -- each time the battery was sent to be conditioned in the "back shop" -- the battery would be degraded ever so slightly. Adding this up over time renders the battery unusable.
Couple this with the competing chemical reaction occurring with the air, shown below:
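With the vent caps removed, the potassium hydroxide electrolyte slowly reacts with carbon dioxide in the air, converting the electrolyte into potassium carbonate:

$$2\,\text{KOH} + \text{CO}_2 \rightarrow \text{K}_2\text{CO}_3 + \text{H}_2\text{O}$$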
This reaction was not known to us at the time of our investigation. If Msgt. Jonas had not been so persistent in understanding all of the chemical reactions within the F-16 battery, the situation (the short lifetime of the battery) would have continued for decades. What did I learn from this? Does any of this make sense to you, the reader? I know that I have been rambling on for a while.
Conclusion....
The point I would like to make with this post is that a person's true passion eventually becomes apparent in their life -- whether they pursue work within that passion or not. For Master Sergeant Jonas, that passion is an unquenchable thirst for knowledge. He is a powerhouse of knowledge who indirectly commands those around him to be thirsty as well. Amazing. I have always loved chemistry in one form or another. Dr. Dan Barth has taught chemistry and physics for decades. My father shares a passion for the physical sciences (among other fields). Put all of us in a room together, or have us interact with one another, and these shared interests quickly become apparent. Additionally, each of us will show our specific talents and interests over time.
Regardless of whether a person pursues their interests or not, those interests will become apparent over time. For me, hanging out in the chemistry and physics classroom benefited me greatly -- since this experience was aligned with my interests. I imagine that the school counselor who assigned me to the room instead of detention saw my interests shine through at some point in our interactions.
Similarly, when I arrived in the US Air Force at Shaw AFB, I must have exuded an interest in the sciences. This later led to my being chosen to interpret and explain the work of Master Sergeant Jonas on extending the life of the F-16 battery. What does this have to do with you?
If you are at a point in your life where you have no idea of where to go moving forward, just keep moving forward. Eventually, your interests will come to the surface. But you must be willing to listen to yourself and observe your interests. I wish you luck in your adventure pursuing your interests. Have a great day.
I remember being thoroughly confused the first time that I saw the image on the back of a truck's window. Of course, I was equally confused when I saw the word "YOLO" in print the first time too. "YOLO" means "You Only Live Once." "NOTW" means "Not Of This World." There are many of these little shortened statements floating around the internet. Why is "NOTW" important and used in the same title as the chemical elements Hydrogen and Helium? Great question.
Short answer: Read the paragraphs below to find out!
Long answer: The other day I was thinking about the concept of "escape velocity" and these two elements came to mind. If set free, will each of these elements in gaseous form leave "our world" -- the atmosphere around planet Earth? If the answer is yes, then these two elements are "Not Of This World." First, let's focus on the crucial questions: Why does the escape occur? What properties allow it to happen? The answers are contained in the paragraphs below.
Escape Velocity?
If you were to go outside onto your lawn and jump up into the air, what would happen? You would briefly rise up into the air and then begin to descend back onto the lawn. Why? The reason is the Earth's gravitational field. As I wrote in an earlier post on force, the gravitational field exerts a force that accelerates your body toward the surface of the Earth. This is a consequence of Newton's Law of Universal Gravitation and, near the Earth's surface, can be represented by the equation below:

F = m·g
where 'm' is the mass and 'g' is the vector representing the acceleration of gravity, with a constant magnitude of 9.81 m/s^2 (meters per second squared) directed toward the Earth. Why is this important? Well, you would have to understand the effects of gravity if you were going to launch a spacecraft into space, right? You would have to plan to overcome the gravitational field in a safe manner without destroying your spaceship in the process. The general equation for the force on mass-1 due to the gravitational pull of mass-2 can be represented by the following equation:

F = G·m1·m2 / r^2
where G is the gravitational constant and the two masses experiencing this pull between one another are represented by m1 (mass-1) and m2 (mass-2). Furthermore, the strength of the gravitational force varies as the inverse of the square of the distance between the two masses. Simply stated, right? Therefore, to escape this force, energy is needed.
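As a side note, the two equations are easy to play with in a few lines of Python. Here is a minimal sketch, assuming standard textbook constants and an illustrative 70 kg person (my value, not one from the post), showing that the general law recovers the familiar 9.8 m/s^2 at the Earth's surface:

```python
# A minimal sketch of Newton's Law of Universal Gravitation, assuming
# standard textbook constants; the 70 kg person is an illustrative value.

G = 6.674e-11        # gravitational constant, N*m^2/kg^2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

def gravitational_force(m1, m2, r):
    """Force in newtons between masses m1 and m2 (kg) separated by r (m)."""
    return G * m1 * m2 / r**2

person = 70.0  # kg
force = gravitational_force(M_EARTH, person, R_EARTH)
print(force)           # ~686 N -- the person's weight
print(force / person)  # ~9.8 m/s^2 -- recovering 'g' at the surface
```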
How does one calculate the escape velocity for an object to leave the atmosphere?
In order to break the gravitational barrier, the proper amount of energy must be obtained. Two questions need to be answered in order to arrive at an escape velocity:
1) How much energy is required to break the gravitational barrier?
2) How fast must an object travel for its kinetic energy to equal that amount?
At this point you might be slightly confused. I just showed you an equation for the force between two masses with a gravitational pull. Now I am asking about kinetic energy? Where is the connection between the two? Fair enough.
To start with, the force is holding us on the planet. As a thought experiment, we can think of a rock on top of a mountain. That rock has a large amount of potential energy. If that rock were to roll down the mountain, the potential energy would be converted into kinetic energy. To drive the point home, here is an excerpt from the "Wikipedia" page for "potential energy" that might help the reader understand the work (energy) required to break the gravitational field:
There are various types of potential energy, each associated with a particular type of force. For example, the work of an elastic force is called elastic potential energy; work of the gravitational force is called gravitational potential energy; work of the Coulomb force is called electric potential energy; work of the strong nuclear force or weak nuclear force acting on the baryon charge is called nuclear potential energy; work of intermolecular forces is called intermolecular potential energy. Chemical potential energy, such as the energy stored in fossil fuels, is the work of the Coulomb force during rearrangement of mutual positions of electrons and nuclei in atoms and molecules. Thermal energy usually has two components: the kinetic energy of random motions of particles and the potential energy of their mutual positions.
In equation form, the gravitational potential energy (the work needed to pull a mass m completely away from a mass M, starting at a distance r) is:

U = G·M·m / r
Again, to launch into space, the potential energy (stored energy) needs to be converted 100% into kinetic energy (the energy of motion). Following this line of reasoning leads us to equate the two energies as shown below:

(1/2)·m·v^2 = G·M·m / R_Earth

to determine the escape velocity needed to break the Earth's gravitational pull. Before the above equation is rearranged to solve for "v" -- velocity -- one more substitution needs to be made. The substitution is an expression for the gravitational acceleration at the surface of the Earth:

g = G·M / R_Earth^2
If the above expression is substituted into the equation for gravitational potential energy, the expression below relates the energy needed to escape the surface of the Earth to the velocity:

(1/2)·v^2 = g·R_Earth, which rearranges to v_esc = sqrt(2·g·R_Earth)
Now, the above expression gives the escape velocity required to leave the Earth's gravitational pull. The remaining task is to plug numbers into the equation and calculate the velocity:

v_esc = sqrt(2 × 9.81 m/s^2 × 6.37×10^6 m) ≈ 11,200 m/s ≈ 7 miles per second
There you have the answer. In order to break Earth's gravitational pull, an object (i.e., a spaceship, molecule, atom, etc.) needs to travel at a minimum escape velocity of about 7 miles per second (11.2 km/s). Take a look at a map. Look for a landmark or geographical point that is 7 miles away from your house. Imagine traveling that distance in one second. Wow!
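For readers who prefer code to algebra, here is a quick sanity check of that number -- a sketch assuming standard textbook constants:

```python
import math

# A quick check of the escape-velocity arithmetic above, using
# v_esc = sqrt(2 * g * R_Earth) with standard textbook constants.

g = 9.81           # acceleration of gravity at the surface, m/s^2
R_EARTH = 6.371e6  # mean radius of the Earth, m
MILE = 1609.344    # meters per mile

v_esc = math.sqrt(2 * g * R_EARTH)
print(v_esc)         # ~11,181 m/s, i.e., about 11.2 km/s
print(v_esc / MILE)  # ~6.9 -- the "7 miles per second" quoted above
```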
That sets the discussion in motion with a definite answer. The space shuttle carries fuel which helps propel it into orbit. Are there any natural objects that might possess enough energy to escape the Earth's atmosphere without fuel? I cannot think of anything off the top of my head that normally travels at 7 miles/sec -- and that is what I would expect to hear from most people. Sub-atomic particles travel quickly. Entertaining this question, I recalled hearing years ago that two chemical elements -- helium and hydrogen -- possess enough energy to escape the atmosphere.
A couple of weeks ago, I wrote a blog post about cooking pasta like a chemist. The point of that post was to inspire people to imagine the dynamic environment that exists in the boiling water and the headspace just above it. While writing that post, I could not help but return to the statement that I had heard several years earlier regarding both elements -- helium and hydrogen -- possessing enough energy to escape Earth's gravitational field. I narrowed my curiosity down to the following question:
What properties enable the elements hydrogen and helium to escape the Earth's atmosphere?
Are these two chemicals special? Do other chemicals possess enough energy to escape Earth's gravitational field?
The answer is interesting but somewhat complex and still being researched. Below, I discuss the parameters which might give both of these elements the ability to act special (in the sense of escaping into space). Read on to find out the answer.
Hydrogen & Helium Are Special!
As I found out, the process is simple yet complicated. How does that figure -- simple yet complicated? In order to understand that statement about these elements, we must take a brief detour for some backstory in chemistry. These two elements are gases at room temperature. In order to describe the behavior of gases at a particular temperature, the "probability distribution" created by James Clerk Maxwell must be shown to illustrate our point. First, let's read the description of the "probability distribution" of molecular speeds devised by him, taken from "Wikipedia":
In statistics the Maxwell–Boltzmann distribution is a particular probability distribution named after James Clerk Maxwell and Ludwig Boltzmann. It was first defined and used in physics (in particular in statistical mechanics) for describing particle speeds in idealized gases where the particles move freely inside a stationary container without interacting with one another, except for very brief collisions in which they exchange energy and momentum with each other or with their thermal environment. Particle in this context refers to gaseous particles (atoms or molecules), and the system of particles is assumed to have reached thermodynamic equilibrium.[1] While the distribution was first derived by Maxwell in 1860 on heuristic grounds,[2] Boltzmann later carried out significant investigations into the physical origins of this distribution.
A particle speed probability distribution indicates which speeds are more likely: a particle will have a speed selected randomly from the distribution, and is more likely to be within one range of speeds than another. The distribution depends on the temperature of the system and the mass of the particle.[3] The Maxwell–Boltzmann distribution applies to the classical ideal gas, which is an idealization of real gases. In real gases, there are various effects (e.g., van der Waals interactions, vortical flow, relativistic speed limits, and quantum exchange interactions) that can make their speed distribution different from the Maxwell–Boltzmann form. However, rarefied gases at ordinary temperatures behave very nearly like an ideal gas and the Maxwell speed distribution is an excellent approximation for such gases. Thus, it forms the basis of the Kinetic theory of gases, which provides a simplified explanation of many fundamental gaseous properties, including pressure and diffusion.[4]
The distribution is very useful in describing the behavior of "ideal gases". In this context, helium is considered an "ideal gas" -- why, you might ask? Because one of the properties of helium is "inertness". What does this mean? Typically, that helium does not react with other gases. On a side note, helium is very useful in carrying out chemical reactions that are "air sensitive." Helium gas is "inert" and serves the purpose of providing an environment free of reactive gases, into which desired chemicals can be introduced to carry out a chemical reaction. What do I mean by this? The photograph below shows a graduate student carrying out an "air sensitive" chemical reaction in a "glove box" -- the atmosphere in this case is argon, another "inert gas":

[Photograph: a graduate student working in an argon-filled glove box.]
Using an environment of helium or nitrogen or argon is common in any chemistry department in the world.
What does this "probability distribution" look like?
Shown below is the general representation of Maxwell's distribution of molecular/atomic speeds:
[Figure: Maxwell speed distributions for several noble gases. Source: Pdbailey at English Wikipedia]
As you can see, the distribution depends greatly on molar mass. A heavier element like xenon, with a molar mass of 131.293 grams/mole, has a narrow range of speeds (0-500 m/s), whereas argon, with a molar mass of 40 grams/mole, has a broader distribution (0-900 m/s). The lightest of the "noble gases" is helium, with a molar mass of 4 grams/mole and the broadest distribution (0-2500 m/s).
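Those ranges can be checked by evaluating the most-probable speed at the peak of the Maxwell distribution. Here is a short sketch, assuming room temperature (298 K is my choice):

```python
import math

# Most-probable speed v_p = sqrt(2*R*T/M) at the peak of the Maxwell
# distribution, for the three noble gases in the figure. Molar masses
# are standard values; 298 K is an assumed room temperature.

R = 8.314  # gas constant, J/(mol*K)
T = 298.0  # temperature, K

for gas, molar_mass in [("He", 0.004), ("Ar", 0.040), ("Xe", 0.131293)]:
    v_p = math.sqrt(2 * R * T / molar_mass)
    print(gas, round(v_p), "m/s")
# He ~1113 m/s, Ar ~352 m/s, Xe ~194 m/s: the lighter the gas,
# the faster the peak and the broader the distribution.
```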
From this information, you should be able to compare the highest speeds with the escape velocity calculated above for a spaceship. Additionally, the other variable that determines the shape and location (i.e., the speeds) of the distribution is the temperature. After a brief search online, I was able to find a good representation of the "probability distribution" dependency on temperature. For a given gas at two different temperatures, "OpenStax" has a great diagram, shown below:

[Diagram: Maxwell speed distribution of a gas at two temperatures. Source: OpenStax]
Notice how the average speed of the molecules changes, along with the top speed (indicated by the length of the distribution's tail), shown in red and green. At higher temperatures, the distribution gets broader and the top speed is much greater. This is important in understanding how gases act in the upper atmosphere. Naturally, at this point, you are probably asking yourself: how high would the temperature have to be for thermal motion alone to eject a molecule at escape velocity?
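Before answering, it helps to see the temperature dependence numerically. Here is a small sketch for helium at two temperatures (both chosen purely for illustration):

```python
import math

# Root-mean-square speed v_rms = sqrt(3*R*T/M) for helium at two
# temperatures (both chosen for illustration): v_rms scales as sqrt(T),
# so the distribution shifts and broadens as the gas is heated.

R = 8.314     # gas constant, J/(mol*K)
M_HE = 0.004  # molar mass of helium, kg/mol

for T in (300.0, 1200.0):
    v_rms = math.sqrt(3 * R * T / M_HE)
    print(T, "K ->", round(v_rms), "m/s")
# 300 K -> ~1368 m/s; 1200 K -> ~2735 m/s (doubles when T quadruples)
```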
What about temperature?
In order to calculate the temperature needed to provide enough thermal energy to eject a molecule of helium or hydrogen, an expression is needed for the speed of molecules at a given temperature. For this, analysis of the "probability distribution" (breaking down the nature of the distribution curves) yields a "root-mean-square" speed of the following form:

v_rms = sqrt(3·R·T / M)
In order to calculate the temperature, the above expression needs to be rearranged to solve for the temperature T as follows:

T = M·v_rms^2 / (3·R)
Plugging in the molar mass of helium, M, the escape velocity for the "root-mean-square" speed, and the gas constant, R, yields the following:

T = (0.004 kg/mol × (11,200 m/s)^2) / (3 × 8.314 J/(mol·K)) ≈ 20,000 K
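The same calculation in code, for both hydrogen and helium -- a sketch using standard molar masses and the escape velocity from the earlier section:

```python
# The rearranged expression T = M * v^2 / (3*R), with v set to the escape
# velocity: the temperature at which the *average* molecule would escape.
# Molar masses are standard values; v_esc comes from the earlier section.

R = 8.314        # gas constant, J/(mol*K)
V_ESC = 11180.0  # escape velocity, m/s

for gas, molar_mass in [("H2", 0.002), ("He", 0.004)]:
    T = molar_mass * V_ESC**2 / (3 * R)
    print(gas, round(T), "K")
# H2 ~10,000 K and He ~20,000 K -- far hotter than any layer
# of the atmosphere.
```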
That is hot! Does the atmospheric temperature ever reach that value? Hopefully not -- at least in the lower atmosphere. In fact, the temperature does not reach this value anywhere along the atmosphere's temperature profile. Therefore, the only way to obtain enough energy to escape is through interactions in the complex upper atmosphere.
There are a number of factors, in addition to collisional energy, that allow both gases (hydrogen and helium) to escape. For the purposes of this post, we will focus on the dominant factor -- collisional energy.
How does a person visualize this complexity within the atmosphere above them?
Look up into the sky. If the weather calls for a storm, then there will be clouds, and just by inspection, the situation does not look good. Clouds help us visualize the complexity going on in the sky at any given moment. Their shapes give us insight into the various patterns of wind moving around at various heights, although we are not able to perceive the depth of those patterns from the ground. Can we do better?
Sure, watch the weather channel with the satellite images. Shown below is a short video of a satellite image of a storm moving through the Southern California region. Watch how the storm moves across the region.
On the screen, the movement appears to be slow. But if you had a sensor up in the sky, the situation might appear to be much more chaotic. Why is this realization important? Because, according to the explanation above based on the distribution of speeds of gases at a given temperature, even the lightest gases (hydrogen and helium) lack sufficient thermal energy to overcome the barrier and escape the atmosphere. Naturally, this leads up to the following question:
Where does the remainder of the kinetic energy come from?
I was thinking about this while walking through campus over the last few days. Suddenly, I realized that the complexity in the atmosphere might easily be understood (visually) by looking at the state lottery. Yes, the lottery. If you take a look at the short video (less than 30 seconds) below of the lottery drawing, you will see a container with balls that are being mixed quite rapidly.
As you can see, there is a large amount of kinetic energy in the system to begin with, supplied by the air used to mix the balls in the container. When the time comes to draw a ball, one ball is "ejected" up the center column and held by the air to be read by the lottery announcer. The balls in the container can be compared to the atoms and molecules that are being mixed by the wind currents (in addition to the contribution of the Earth's rotational energy). The Earth rotates at a speed of around 1,000 miles per hour at the equator.
The process of "ejecting" the ball is analogous to a "chaotic current" in the upper atmosphere which would give a helium atom enough energy to make up the remainder of the barrier to the appropriate escape velocity of 7 miles/sec. Here is an excerpt from the "Wikipedia" page on "atmospheric escape" describing this thermal mechanism:
One classical thermal escape mechanism is Jeans escape.[1] In a quantity of gas, the average velocity of a molecule is determined by temperature, but the velocity of individual molecules change as they collide with one another, gaining and losing kinetic energy. The variation in kinetic energy among the molecules is described by the Maxwell distribution.
The kinetic energy and mass of a molecule determine its velocity by E_kin = (1/2)·m·v^2.
Individual molecules in the high tail of the distribution may reach escape velocity, at a level in the atmosphere where the mean free path is comparable to the scale height, and leave the atmosphere.
The more massive the molecule of a gas is, the lower the average velocity of molecules of that gas at a given temperature, and the less likely it is that any of them reach escape velocity.
This is why hydrogen escapes from an atmosphere more easily than carbon dioxide. Also, if the planet has a higher mass, the escape velocity is greater, and fewer particles will escape. This is why the gas giant planets still retain significant amounts of hydrogen and helium, which have largely escaped from Earth's atmosphere. The distance a planet orbits from a star also plays a part; a close planet has a hotter atmosphere, with a range of velocities shifted into the higher end of the distribution, hence, a greater likelihood of escape. A distant body has a cooler atmosphere, with a range of lower velocities, and less chance of escape. This helps Titan, which is small compared to Earth but further from the Sun, retain its atmosphere.
An atmosphere with a high enough pressure and temperature can undergo a different escape mechanism - "hydrodynamic escape". In this situation the atmosphere simply flows off like a wind into space, due to pressure gradients initiated by thermal energy deposition. Here it is possible to lose heavier molecules that would not normally be lost. Hydrodynamic escape has been observed for exoplanets close-to their host star, including several hot Jupiters (HD 209458b, HD 189733b) and a hot Neptune (GJ 436b).
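To make the excerpt's point concrete -- that heavier molecules are far less likely to reach escape velocity -- here is a sketch computing the fraction of molecules above escape velocity from the tail of the Maxwell distribution; the 1000 K upper-atmosphere temperature is my assumption for illustration:

```python
import math

# Jeans escape in miniature: the fraction of molecules whose speed exceeds
# escape velocity, from the tail of the Maxwell distribution:
#   P(v > v_esc) = erfc(x) + (2*x/sqrt(pi)) * exp(-x^2),  x = v_esc / v_p,
# with v_p = sqrt(2*R*T/M). The 1000 K exobase temperature is an assumption.

R = 8.314        # gas constant, J/(mol*K)
T = 1000.0       # assumed upper-atmosphere temperature, K
V_ESC = 11180.0  # escape velocity, m/s

def tail_fraction(molar_mass):
    """Fraction of molecules of the given molar mass (kg/mol) above V_ESC."""
    v_p = math.sqrt(2 * R * T / molar_mass)  # most-probable speed, m/s
    x = V_ESC / v_p
    return math.erfc(x) + (2 * x / math.sqrt(math.pi)) * math.exp(-x * x)

print("H2 :", tail_fraction(0.002))  # ~1e-6 -- rare, but it happens
print("CO2:", tail_fraction(0.044))  # effectively zero
```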
Interestingly enough, the variation of the speeds in the Maxwell distribution is tied to the deficit in Professor Jeans' idea regarding the loss of gases to space. According to measurements made after he passed away, the thermal escape mechanism alone cannot account for all of the gas that has escaped the atmosphere. Therefore, we are left with other mechanisms at play that contribute energy -- some known, and others still unknown (i.e., still being researched).
As I mentioned at the beginning of the section regarding the elements hydrogen and helium, the dynamics are complex. Amazingly enough, the contributions of insightful physicists such as James Clerk Maxwell and James Jeans have withstood the test of time and remain significant tools for evaluating molecular speeds based on temperature, molecular mass, and gravitational pull. How the rest of the system contributes to the escape of the part of the distribution without sufficient thermal energy to reach escape velocity remains to be discovered.
Conclusion...
The dynamics of the atmosphere above us are complex. I say that not as an excuse, but as a challenge to conquer them in the future. Find out what types of collisional energy contribute to the escape velocity of a hydrogen atom. Why do other, "heavier" molecules sometimes escape? How do other energy exchanges contribute -- rotational energy, translational energy, etc.? How does the Earth's rotation contribute to the escape velocity of these small molecular systems?
Among many uncertainties, one take-away message is concrete: our ability to send a crewed space shuttle into space, breaking the gravitational pull without problems, is absolutely amazing. Our technological development has led us to understand the atmosphere to a large extent. As you can see, there is still a lot of room to grow intellectually. This is where each of us comes in. We need to continue to support funding for space programs. As I will discuss in future posts, many technological developments are created as a result of such research. Until then, keep on learning as much as you possibly can about the world. Have a great weekend.