Finding Drugs with our eyes
I worked in drug discovery for a few years. From testing out my own personalized caspase products and engineering my own biological and small molecules, to working at a biotech company that used machine learning to find non-coding RNA drug candidates, there are a few common patterns I found in the industry. Drug discovery is notoriously slow, expensive, and extremely risky. Out of thousands and thousands of potential candidate molecules, only a handful ever make it to market. It’s like finding a needle in a haystack.
By increasing the hit rate (the rate of successfully identifying a promising candidate), we remove hay from the haystack and can transform the economics of pharmaceutical R&D. Scientists created high-throughput screening (HTS) around 1980, just under 50 years ago. We are now using machine learning to complement HTS, significantly changing the game in finding the needle.
Using ML in HTS, we can rapidly test millions of compounds and variants in parallel. It’s like having a massive “searchlight” sweeping over chemical space. ML-guided screening allows models to learn from previous experiments to prioritize candidates, and to take in real-world data and human genetics to account for differences across chemical and biological landscapes, as well as translational concerns, increasing the likelihood of success.
Google DeepMind has taken advantage of this using their TeleProt framework, which currently outperforms standard directed evolution approaches in finding high-activity enzyme variants. DeepMind’s TeleProt discovered a nuclease variant with 19× higher activity than the original at physiological pH, using fewer rounds and fewer wasted experiments than traditional methods.
A nuclease is an enzyme that cleaves, or “breaks,” DNA and RNA. Nucleases are used in CRISPR and other genetic engineering platforms that allow us to treat very rare and life-threatening diseases.
How does this work?
Think of it like human vision. Vision is one of the most incredible senses we have, allowing us to perceive and interpret the world around us through light. What seems to be an effortless process involves a complex journey that begins with light (input) and ends in the brain (output).
When light enters the eye through the cornea, the clear, dome-shaped surface that covers the front of the eye, the cornea bends (refracts) the light to help focus it. The light passes through the pupil, the black circular opening in the center of the eye, which adjusts automatically to control light intensity. Behind the pupil lies the lens, a flexible, transparent structure that further focuses the light. Similarly, in HTS, the instruments capture the “raw sensory input” of large datasets and have instructions to personalize and focus the data coming in.
After passing through the lens, light is focused onto the retina, a thin layer of specialized cells at the back of the eye, which acts like the film in a camera, capturing the image through photoreceptors called rods and cones. These convert light into electrical signals that carry detailed information about the image, such as brightness, color, and movement. The signals travel to the brain, which processes and interprets them, combining input from both eyes to create a coherent, three-dimensional image of the world. Similarly, in machine learning, the data is transferred to a computational domain, and, like the brain, ML models process the raw data, breaking it down into meaningful “images,” just like how your brain detects edges, shapes, and colors. You build up pattern recognition and improve decision-making to predict successful objects, or in DeepMind’s case, nuclease candidates.
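To make the analogy concrete, here is a minimal sketch of model-guided screening. This is my own toy construction, not DeepMind’s actual TeleProt method: a made-up activity landscape stands in for the wet-lab assay, and a nearest-neighbor surrogate stands in for the ML model that decides which candidates to “look at” next.

```python
# Toy model-guided screening loop (illustrative assumptions throughout).
import random

random.seed(0)

# Hypothetical chemical space: each candidate is a point in 2-D feature space.
candidates = [(random.random(), random.random()) for _ in range(500)]

def assay(x):
    """Hidden ground-truth activity, peaked near (0.7, 0.3); stands in for the wet lab."""
    return 1.0 - ((x[0] - 0.7) ** 2 + (x[1] - 0.3) ** 2)

def predict(x, measured):
    """Surrogate model: predicted activity is that of the nearest measured candidate."""
    nearest = min(measured, key=lambda m: (m[0][0] - x[0]) ** 2 + (m[0][1] - x[1]) ** 2)
    return nearest[1]

# Seed round: assay a small random batch (the "raw sensory input").
measured = [(c, assay(c)) for c in random.sample(candidates, 10)]

# Guided rounds: the model prioritizes which candidates get assayed next.
for _ in range(5):
    tested = {m[0] for m in measured}
    untested = [c for c in candidates if c not in tested]
    ranked = sorted(untested, key=lambda c: predict(c, measured), reverse=True)
    measured += [(c, assay(c)) for c in ranked[:10]]  # assay only the top 10 picks

best = max(measured, key=lambda m: m[1])
print(f"best activity found after {len(measured)} assays: {best[1]:.3f}")
```

Each round the “brain” (the surrogate model) focuses attention on the most promising region of the search space, so good candidates are found with far fewer assays than blind screening of all 500 points.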
This approach shifts the bottleneck in drug discovery from blindly searching chemical space to intelligently navigating it, the same way your brain focuses attention on the most relevant features in a complex visual scene.
If you are interested, here is an article on how Google DeepMind is using ML to find highly active nucleases: https://www.sciencedirect.com/science/article/pii/S2405471225000699
Can we grow a fetus in a lab?
Embryogenesis, the complex process of embryo development, spans the first eight weeks of gestation after fertilization. First, the egg is fertilized by a sperm to create a zygote. The zygote starts to cleave, or rapidly divide, into 2, 4, 8, 16, 32 cells, and so on. At 32 cells, the clump is called a morula. The cells of the morula then migrate outward to create a hollow ball called a blastula. At this stage, all the cells are pluripotent, meaning they have not yet received the signals to become specialized, mature cells that form specific organs. After blastulation, the formation of the blastula, gastrulation begins. In gastrulation, the cells start to form cavities, implant onto the uterine wall, and differentiate into the three germ layers: the ectoderm, the mesoderm, and the endoderm. These three layers then continue to “mold” into the organs needed for human life over nine months. The ectoderm, the outer layer, forms the neural crest, central nervous system, and skin. The mesoderm, the middle layer, forms the notochord, skeleton, kidneys, heart, and muscles. The endoderm, the inner layer, forms the gastrointestinal system, liver, and lungs.
We have developed artificial ways to become pregnant. In vitro fertilization (IVF), a state-of-the-art technology offering hope to families struggling with fertility, is commonly used to increase the chance of human pregnancy. Unfortunately, IVF still has high failure rates that are not clearly understood, and the first 14 days of embryogenesis are still considered a “black box”.
To better understand embryogenesis and its early developmental stages, scientists have pursued artificial embryos. Researchers have figured out a way to turn pluripotent stem cells, also called human naive stem cells, into embryo models without the need for an egg, sperm, or womb. Artificial embryo models can be made in unlimited numbers, compared to the limited number of natural embryos that can be obtained, and they carry fewer ethical restrictions. Having made induced pluripotent stem cells myself and differentiated them into specialized cells, I can tell you that the process is extremely sensitive and that both the de-differentiation and differentiation steps are extremely difficult. As difficult as it may be, however, these embryo models can be used to conduct genetic and gestational research that continues to improve our understanding of early human development.
On the latter end of fetal development, scientists have created artificial wombs to grow a fetus outside of the human body, with enough scientific success to show that we can mimic the womb environment and provide the oxygen and nutrients a fetus needs to grow. Children’s Hospital of Philadelphia published a landmark paper on gestating a premature lamb in an artificial womb. Since then, after extensive pre-clinical animal studies with good endpoints, showing that a pre-28-week fetus can be carried to term in an artificial womb, the first human clinical trial of the EXtra-uterine Environment for Newborn Development, or EXTEND, is under FDA discussion and may begin soon.
While we can artificially inseminate, artificially grow embryos, and build an artificial womb, we are still not at the point where we can artificially create a fetus. Artificial embryos cannot yet be implanted into a human to form a healthy, living fetus. However, through extensive research on the use and longevity of synthetic embryos, and continuous innovation in artificial womb technology, I believe that one day we might just bridge the gap and grow real humans in a lab.
Machines are becoming Human
The gap between “human” and “machine” is closing. Thanks to advancements in computing power, machines are quickly coming physically closer to, and merging with, the human body. Phones used to be attached to the wall but are now “attached” to our hands. The first clocks were pedestals telling time from the position of the sun, but now we wear watches on our wrists; headphones used to be big and bulky but can now be discreetly inserted into our ears; and we used to track medication times ourselves but now have sensors inserted under our skin that auto-pump doses for us. The list is ever growing.
Back in 1965, Gordon Moore, a businessman and engineer who co-founded Intel, estimated that the number of transistors in microchips doubles roughly every two years. This means circuits can keep shrinking, and technology can be developed smaller and smaller with more power and efficiency to drive data. It’s 2023, so there have been about 29 doubling cycles in microchip development since Moore’s observation.
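Under the common two-year-doubling formulation of Moore’s law, the 29-cycle figure checks out as simple arithmetic:

```python
# Back-of-the-envelope check of the "29 cycles" figure, assuming the
# two-year doubling period commonly attributed to Moore's law.
years = 2023 - 1965          # years since Moore's 1965 observation
doubling_period = 2          # doubling roughly every two years
cycles = years // doubling_period
growth = 2 ** cycles         # cumulative transistor-count growth factor
print(cycles, growth)        # 29 doublings, roughly half a billion times denser
```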
What does this mean for how close humans and machines can come together?
Well, have you ever watched the 2001 movie “A.I. Artificial Intelligence” directed by Steven Spielberg? While this movie was a spin on Pinocchio, it holds some realism about the potential of computers. Scientists are striving to create an artificial brain, which uses machine learning for artificial decision-making, and maybe one day we will have a humanized machine, an artificial being.
Without getting too far into science fiction, this means that computers will continue to jump from big data to deep understanding. Computers may become small enough to be implanted into your brain; nanobots within the body may transport medication and reconstruct organs; computerized contact lenses may display subtitles over words; virtual arms may transmit sensation through a screen; and the list goes on.
Death of My Valentine
“Happy Valentine’s day. Thank you for the lovely flowers” said Victor on February 14th, 2023.
That was the last text I received from him before he passed. Victor, a 64-year-old male, was on hospice after failed chemotherapy and failed immunotherapy for metastatic gastric cancer. Gastric cancer is the fifth most common cancer, and an estimated 11,130 gastric cancer deaths will occur in the US in 2023. Unfortunately, Victor is part of that statistic this year.
With so many technological advances and scientific breakthroughs, how do therapies still fail?
No chemo drug can guarantee a 100% success rate. The response rate for advanced gastric cancers is up to 60% for polychemotherapy (the use of multiple chemotherapy agents) and even less for single-agent therapies. And while that is just the rate of getting some sort of efficacious response from chemotherapy, most patients still develop drug resistance within a few months. Once drug resistance develops, the cancer comes back and chemotherapy no longer works.
Chemotherapeutic resistance, whether intrinsic or acquired, is a multifactorial phenomenon associated with the tumor cells and the tumor microenvironment. Resistance can be acquired through drug efflux: the ATP-binding cassette (ABC) transporters are a family of molecular transporter proteins that reduce the intracellular concentration of drugs. An efflux increases the chemotherapy agent’s concentration outside the cell and redistributes the drug away from the tumor site. Drugs can also be inactivated, causing resistance to chemotherapy. In the pre-clinical stage of drug development, scientists test pharmacokinetics, the concentration of a drug over time, through ADME: absorption, distribution, metabolism, and excretion. When studying drug metabolism, there are phase 1 and phase 2 reactions; in phase 2 metabolic reactions, drugs are conjugated. Specifically, the body produces glutathione S-transferases (GSTs), enzymes that conjugate drugs for detoxification and can thereby inactivate chemotherapy agents. Lastly, chemotherapeutic drugs themselves work by causing DNA damage to induce cell death, but cells can mutate to escape that death, increasing cellular resistance and allowing cancer cells to survive. Drug efflux, drug inactivation, and dysregulation of cell survival are three examples of cancer resistance, but there are more causes as well.
No immunotherapy can guarantee a 100% success rate either. Immunotherapy does not target the cancer itself; it targets the patient’s own immune system to teach it to attack the disease. In most cases, immune-based treatments stimulate a T-cell response; T cells are special immune cells in our body that fight pathogens and cancers. The relationship between a tumor and the immune system is extremely dynamic, and there are a multitude of reasons immunotherapy can fail. Tumors can develop mutations that prevent T cells from penetrating the tumor. Other mutations can cause T cells to no longer recognize the cancer cells as the immunotherapy agent instructed, because the cancer cells have changed their membrane protein epitopes; this is the usual case for advanced cancers. Tumors can also change how they act to influence T-cell efficiency: they can turn down signaling pathways that normally stimulate T cells and ultimately dampen the anti-tumor response, for example by recruiting myeloid cells into the tumor to counter T-cell activity.
Each patient is unique in their own way. If every human looks physically different, imagine the diversity at the cellular and molecular level. High-throughput sequencing technology has increased the number of predictive markers and therapeutic targets, but there are still no solid biomarkers or predictive markers for treating resistance. Sometimes, even with all the scientific and technological advances, personalized treatments fail.
I gave Victor his flowers during his last consultation for a “second opinion”, even though he had already had multiple consultations, clinging to hope for an option to survive. “Happy Valentine’s Day! I’m glad you liked them,” I said.
References:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5027022/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4066028/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8396490/
AI and Biotechnology
Biotech is applying AI and machine learning to drug development, and here’s why.
The human body is complex, and when it comes to making drugs that end up in pharmacies or are given at hospitals, it takes a lot of effort to make sure that a treatment is safe and effective.
Drug discovery is also a complex and expensive process that involves identifying compounds that can treat or prevent diseases. Scientists manually create hundreds of potential drug leads, and there are always a few different projects running in the background of academic, biotech, or pharmaceutical labs. It takes years to find a drug that works well enough on the lab bench and in animals. It then takes time to get an Investigational New Drug (IND) approval, which allows scientists to start clinical trials. But oftentimes, throughout the 6-10 years of clinical trials, the drug ends up failing. If at any point the drug does not work the way it should in humans, or it’s not as effective as expected, or there just isn’t enough data, the research stops and all the effort spent on this drug gets re-allocated to a new one. Only about 5% of drugs make it through all the hurdles of pre-clinical and clinical testing.
The process of getting new drugs to market takes a whole lot of time, money, and effort. A new wave of AI-based drug development platforms is helping many biotech companies gather data faster. The integration of AI into biotechnology has had significant effects on various aspects of the field, including drug discovery, gene editing, and personalized medicine. By using AI as a predictive tool, you can save some of the effort of finding good drug leads: AI algorithms can analyze large datasets and predict the efficacy of potential drug candidates with greater accuracy than traditional methods. Instead of finding a needle in a haystack, AI makes a bunch of needles for you. It can better identify leads, saving time, money, and effort on drugs that might not work, and it can help with difficult-to-treat diseases we may be stuck on.
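As a rough, made-up illustration of the economics (the specific numbers here are my own assumptions, not figures from the cited reports): if only 5% of candidates survive testing, doubling the hit rate halves the expected cost per approved drug.

```python
# Toy cost model: screening N candidates at a fixed cost each, with only a
# fraction surviving to approval. Numbers are illustrative assumptions.
def cost_per_approval(candidates_screened, cost_per_candidate, success_rate):
    approvals = candidates_screened * success_rate
    return candidates_screened * cost_per_candidate / approvals

baseline = cost_per_approval(1000, 1.0, 0.05)     # ~5% survive, as in the text
ai_assisted = cost_per_approval(1000, 1.0, 0.10)  # hypothetical 2x hit rate
print(baseline, ai_assisted)  # cost per approval drops from 20.0 to 10.0 units
```

The formula simplifies to cost-per-candidate divided by success rate, which is why even modest improvements in lead identification compound into large savings.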
AI has already been used for drug discovery. For the first time ever, a COVID-19 drug was designed by AI, and what usually takes 2-3 years took about 8 months before the company received approval for clinical testing! There was a point when the Human Genome Project took 10 years; now sequencing a genome is just a task on a scientist’s daily to-do list. Machine learning and AI have the power to one day make drug development remarkably fast and easy.
Devereson, A., Sandler, C., & The, L. (2022, November 16). How AI could revolutionize drug discovery. McKinsey & Company. Retrieved March 4, 2023, from https://www.mckinsey.com/industries/life-sciences/our-insights/how-ai-could-revolutionize-drug-discovery
How AI could speed drug discovery. (n.d.). Morgan Stanley. Retrieved March 4, 2023, from https://www.morganstanley.com/ideas/ai-drug-discovery
We are in the Bio Revolution
Over the last three decades, the life science industry has grown dramatically and has had an enormous impact on improving our lives, from healthcare to the environment to the economy. Up to 60% of the physical inputs in this world could be produced biologically, from what we eat and wear, to medicine, fuel, and energy, and even how we construct the physical world.
Ernst Ulrich von Weizsäcker, a pioneer of environmental, climate, and energy policy, drafted a timeline that shows waves of innovation rising and falling like negative parabolas, or upside-down Us. Focusing on life sciences: the fifth wave, of digital networks, biotechnology, and software, started in 1990, and since the early 2000s we have been accelerating toward the peak of the sixth wave: sustainability, radical resource productivity, biomimicry, green chemistry, and renewable energy.
Artificial intelligence and digitization have facilitated life science trends and the innovation boom. Artificial intelligence uses machine learning and deep learning to make predictions and execute actionable methods. It has made its way into the biotechnology, pharmaceutical, and healthcare industries, where it can streamline the drug discovery process, allow for more efficient and effective research and development, lower the cost of drugs and therapies, and significantly improve image classification algorithms and patient screening and diagnosis. Pharmaceutical companies such as AstraZeneca have announced three-year expansion plans using artificial intelligence for lupus and heart failure research.
Digitization has also drastically advanced the life science industry: everything from wearable technology to patient-monitoring applications has allowed for decentralized and precision medicine. Patients can monitor their glucose levels with a sensor at home or anywhere in the world, use biomedical feedback devices, and even take part in clinical trials from the comfort of their own homes, reducing labor and hospitalization costs significantly. Going further, digitizing the life science industry has brought cloud-based data storage, improved security for confidential patient information, and integration of different research fields, like genetics with biomimicry, allowing researchers to explore new out-of-the-box ideas.
From the first industrial revolution’s rapid expansion of textiles and machinery, to the second industrial revolution, where the study of electricity brought the light bulb and telephone lines, each wave of innovation has brought ideas and inventions that permanently improved society, and the bio revolution will do the same.
Why is a fetus almost always upside down?
As a kid, I would try to hang upside down on the monkey bars, but within minutes my face would become warm and red, and I would feel lightheaded from the pressure-like feeling of blood rushing to my head. I can’t hang upside down for more than a few minutes, so how does a fetus remain in an upside-down position for months?!
A fetus does not actually take the upside-down position until 20 weeks at the earliest, and it is protected by amniotic fluid. Amniotic fluid is a liquid made up of water, hormones, and antibodies that acts as a cushion (shock absorber), protecting the fetus from bumps, injuries, and even gravity. It also prevents compression of the umbilical cord, a crucial factor in the fetus’s positioning, which supplies blood, oxygen, and nutrients and acts as a waste removal system.
Additionally, over 95% of all fetuses turn upside down before delivery. But how does a fetus “know” to turn upside down for a normal birth? Fetal programming and genomic imprinting may be linked to fetal positioning through resource provision and restriction in early development. Fetal programming is the theory that environmental cues play an important role in development, and genomic imprinting is an epigenetic phenomenon in which certain genes are expressed depending on which parent they were inherited from.
While studying gene regulation in human pregnancy is difficult, research on mouse gestation has identified a set of genes essential for placentation, the formation of the placenta. As the placenta provides the intrauterine nutrients needed for fetal growth, there are genetic cues with a common locus for placental development. Epigenetic regulation of the placenta evolves over pre-implantation development and the pregnancy period. Epigenetic marks like DNA methylation and histone modification affect gene expression and result in the genomic imprinting needed for proper fetal and placental development.
Fetal programming and genetic factors cause the placenta to attach to the wall of the uterus; the fetal umbilical cord arises from the center of the placenta and latches onto the interior side of the mother’s uterus. A well-developed placenta and an umbilical cord that can effectively provide nutrients and blood to the fetus lead to a larger baby with optimal birth weight, body length, and body composition. This is because the placenta sends signals that modulate fetal development through selective environmental pressures, which is where genomic imprinting and the epigenetic landscape, for example the GAB1 gene for growth factor and cytokine signaling, come into effect.
This is fairly complicated, but in summary, a combination of environmental and genetic factors during gestation allows for fetal safety and development up until a natural birth. The amniotic fluid protects the fetus against adverse effects from external forces such as gravity, provides a nourishing environment despite the rise in fetal urine toward the late stages of pregnancy, and reduces pressure on the umbilical cord. It simultaneously helps support the fetus in the upside-down position, the ideal position for a natural birth, which develops naturally thanks to imprinted genes and fetal programming.
https://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1008709
https://pubmed.ncbi.nlm.nih.gov/20959349/
https://www.frontiersin.org/articles/10.3389/fendo.2019.00940/full
https://my.clevelandclinic.org/health/articles/9677-fetal-positions-for-birth
What came first? The chicken or the egg?
There has been a long-standing debate on which came first, the chicken or the egg? You need a chicken to lay an egg, but you need an egg to make a chicken. Similarly, for decades, scientists have wondered what came first regarding genetic materials needed to make a living organism.
The Central Dogma explains the flow of genetic information in a biological system, which controls all functions needed for life to be viable and allows for diversity and evolution. The Central Dogma states that DNA (the carrier of genetic information) is transcribed into RNA (the material essential for coding and expressing genes), and RNA is then translated into protein. You would assume that DNA is the origin of life since it comes first in this sequence, but it is not. In cell division, whether mitosis or meiosis, DNA is replicated. The replication process, however, requires proteins like polymerases and helicases; DNA cannot be replicated, and a cell cannot divide, without proteins, proving that DNA did not come first.
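The DNA → RNA → protein flow can be sketched in a few lines of code, using a deliberately tiny codon table (the real genetic code has 64 codons) and the common coding-strand shorthand for transcription:

```python
# The Central Dogma in miniature: toy transcription and translation.
# Only four codons are included here; the real table has 64.
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "AAA": "Lys", "UAA": "Stop"}

def transcribe(dna):
    """DNA -> mRNA: thymine (T) is replaced by uracil (U)."""
    return dna.replace("T", "U")

def translate(rna):
    """mRNA -> protein: read codons (triplets) until a stop codon."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        amino_acid = CODON_TABLE[rna[i:i + 3]]
        if amino_acid == "Stop":
            break
        protein.append(amino_acid)
    return protein

mrna = transcribe("ATGTTTAAATAA")   # a toy gene
print(mrna, translate(mrna))        # AUGUUUAAAUAA ['Met', 'Phe', 'Lys']
```

Note the asymmetry the paragraph describes: the code can go from DNA to protein, but nothing here runs the other way, which is exactly why “protein first” and “DNA first” are both problematic.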
To figure out how life formed, scientists mixed methane, water, ammonia, and hydrogen to mimic the atmosphere of the early Earth. After adding electric sparks to mimic lightning, they were able to produce amino acids, the building blocks of proteins. Thus, many assumed that protein came first. However, in 2009, British scientist John Sutherland found that nucleotides, the building blocks of DNA and RNA, could have been present on early Earth, even before the formation of proteins, made through chemical reactions on the surface of minerals. This metabolism-first scenario suggests that the high pressures and high temperatures of early Earth provided optimal environmental conditions for minerals to drive catalytic reactions and give rise to organic compounds.
The RNA World hypothesis may have the answer: RNA came before protein or DNA. Why? Like protein, RNA can catalyze chemical reactions, and like DNA, RNA can store genetic information. Its ability to do both is why it is thought to have played a crucial role in creating life. So a suggested series of events, based on the RNA World theory, is that RNA was first made auto-catalytically by chemicals and minerals interacting in the environment. The RNA then developed proteins for additional functions. Later, since DNA needs proteins to replicate, DNA was the last piece of the central dogma to form, storing information in a stable manner.
So what came first, the chicken or the egg? Sometimes the answer is something in between that has the qualities of both, just like RNA.
Choosing our Significant Other: Our Body Pheromones
Ever wonder what makes you attracted to a person? Where the saying “love is in the air” may have sprouted from, or why there’s a “feeling” on Valentine’s Day? Scientists have long debated whether “love” is literally in the air. Well, there is something in the air: chemicals called pheromones.
An evolutionary perspective shows that living organisms have methods of attracting a mate. While some animals, like peacocks dancing with their beautiful feathers, use physical appearance, most other species use chemical signals. Animals, plants, and even some bacteria produce chemicals called pheromones. The term was coined around 1960 with the identification of a powerful aphrodisiac called bombykol, a pheromone secreted by female silk moths that can be detected over a kilometer away. Just a few molecules of bombykol are enough for a female silk moth to attract a male. Human pheromones are still in the early stages of research, and the first step focuses on body odor. A gradual physiological change, such as the female moth releasing bombykol, is called a primer, and the behavioral response, such as a smitten male moth, is called a releaser. An analogous example in humans is the release of specific pheromones, the primer, that can guide a newborn infant to its mother’s breast by scent.
It has been concluded that each individual organism produces a mixture of chemicals unique to them, and here is why: an important immune system protein complex called the major histocompatibility complex (MHC) is found in each individual human being. You can think of it as a “fingerprint” of the biological and environmental differences in each person. It is said that sexual attraction is strongest between those with the most unlike MHC. The complex cloud of smells and aromas we emit provides olfactory cues that shape which individuals we find most pleasant. The MHC theory was tested in 1995, when a group of women smelled worn t-shirts from men. Most women chose as most attractive the shirts worn by the men with the most dissimilar MHC. The test was repeated in 2005 and produced similar results.
Compatibility is still a debatable subject: some work suggests different immune profiles provide the most attraction, while other work suggests that “like attracts like”. Research has also shown that endogenous androstane steroids, particularly androstenedione, act as a primer pheromone secreted by men that attracts women. It can be found in axillary (underarm) sweat and other bodily secretions. Male pheromones are also said to play a role in mate selection through their attributions to facial and bodily features, evident from a female’s primer response.
Histocompatibility can already be analyzed, and as our understanding of pheromones and attraction matures, more rigorous studies will provide more concrete evidence. Understanding the chemicals our bodies release, and what makes two people attracted to one another, could lead to more effective matchmaking platforms than our current dating applications.
Is Coffee Good for You?
As one of the world’s most consumed non-alcoholic beverages, coffee is still a product whose health benefits are not widely known. For many of its ardent followers, a day can never start without a mug full of this almost magical liquid. While you may have heard of its negative effects, like staining of teeth and increased dependency, coffee, like many other drinks, is good in moderation. While we think of caffeine as the substance that symbolizes coffee, coffee is in fact made up of many different active substances and antioxidants, and they all contribute to the beneficial effects of coffee drinking.
Some of the commonly perceived benefits are wakefulness and improved energy levels. The benefits to organs such as the heart, liver, and brain have been theorized and studied in scientific research. One positive effect of coffee is that it can help reduce liver damage caused by alcohol and unhealthy food consumption. Across nine different studies totaling over 430,000 participants, drinking two cups of coffee a day was linked to a 44% lower chance of developing liver cirrhosis, chronic liver damage due to liver fibrosis and failure. Compared to no coffee at all, the studies showed a dose-dependent relationship between coffee consumption and cirrhosis risk: one cup was associated with a 22% lower risk, two cups with 44%, three cups with 57%, and four cups with 65%.
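The dose-dependent figures above can be restated as a small lookup table (the percentages are the ones quoted in this post; the code itself is just an illustration):

```python
# Cups of coffee per day -> % lower risk of cirrhosis, per the figures above.
risk_reduction_pct = {1: 22, 2: 44, 3: 57, 4: 65}

def remaining_risk_pct(cups):
    """Remaining cirrhosis risk, as a percent of the no-coffee baseline."""
    return 100 - risk_reduction_pct.get(cups, 0)

print(remaining_risk_pct(2))  # 2 cups/day -> 56% of the baseline risk
```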
Coffee grounds are also a beneficial multipurpose product and can be used in home remedies. Coffee may help reduce the appearance of cellulite by dilating blood vessels under the skin to improve blood flow. It has anti-aging benefits, decreasing the appearance of spots, redness, and fine lines. It is also a source of vitamin B-3, which may aid in reducing nonmelanoma skin cancers.
Coffee also contains compounds with antioxidant and anti-inflammatory properties, which aid in disease reduction. However, that does not mean that drinking coffee will fix issues resulting from excessive weight gain, alcoholism, an unhealthy diet, or other health problems. Think of coffee as having effects similar to tea or vitamins: they can help your health but do not solve problems on their own, so it is crucial to understand that even while drinking coffee, health maintenance is important. The positive results are strongest with plain coffee; additions like the sugar, or all the ingredients in your caramel macchiato, may not provide optimal effects in lowering the chance of disease.
Stem Cell Therapy for Knees
Stem cell therapy is a current focus of regenerative medicine, which aims to create, repair, and restore living tissue and organs. Its goal is to restore damaged tissue and organs in patients with serious injuries or chronic diseases where the body’s own renewal processes are insufficient.
Stem cells are special human cells that can develop into various cell types. For this therapy, stem cells are typically collected from bone marrow at the back of the patient’s pelvis. Bone marrow is a blood-rich substance extracted with a small needle; it contains mesenchymal stem cells, which are used for tissue repair, along with platelets and other growth factors. Once the sample is reintroduced into the desired area of the body, the stem cells and growth factors work together to repair damaged cells and regenerate healthy new ones. The most effective method, with the lowest chance of an adverse reaction, is for patients to use their own adult stem cells: extract the sample and relocate it into the damaged tissue.
Recent studies have shown that stem cells can be used to regenerate tissue and relieve knee pain. Impairment of the knee can significantly impact your quality of life, causing difficulties in everyday tasks. Until recently, the options for knee pain were steroid injections, physical therapy, or knee replacement surgery. Now, stem cell therapy offers a potentially less drastic way to improve your knee. Knee pain can be caused by many factors, ranging from fractures, ligament injuries, and tendon tears to chronic conditions such as arthritis and iliotibial band syndrome, all causing pain through inflammation, tenderness, or tears in the surrounding tissue.
Stem cell therapy for knees is still very new, but studies seem promising, with results showing patients who avoid surgery and experience minimal side effects. Take arthritis, for example: it is one of the major causes of disability in the United States. It develops when the cartilage lining the joints starts to deteriorate. Over time, with age, a person can develop arthritis in the knee or any other joint, causing pain and limited function. Every year, approximately 600,000 patients in America receive knee replacement surgery. That number will keep growing, but thanks to stem cell therapy, knee pain relief and cartilage repair may reduce the chronicity of arthritis. Cartilage promotes smooth movement of the joint surface, protecting bones from up to 20 times the shock of the body’s weight. Researchers believe injected stem cells develop into essential cartilage cells, suppress inflammation to reduce pain, and release cytokine proteins that slow the degeneration of cartilage.
The cost of treatment ranges from approximately $3,000 to $5,000 per knee depending on the location. Unfortunately, most insurance plans do not cover stem cell injections, though that may change as more research demonstrates the procedure’s effectiveness. The procedure itself is completed in a single visit of about two to three hours, but patients also need an initial consultation beforehand and a follow-up appointment afterward.
Stem cell therapy is in its infancy, and it has the potential to deliver groundbreaking results through more effective, noninvasive procedures with fewer side effects. Ongoing research on stem cells will hopefully continue to yield new ways of restoring damaged or deteriorating tissue throughout the body.
AI in Precision Medicine
Since the development of the first artificial intelligence (AI) program, the field has advanced exponentially, and technology has become ever more integrated into our everyday lives. AI is an umbrella term for “intelligent” tasks performed by machines. It spans machine learning, deep learning, and artificial neural networks, which stack layers of computation in a hierarchy loosely inspired by the human brain, allowing machines to learn from data and predict outcomes (Patel et al., 2019). Whether we know it or not, we use AI-based systems in things we may overlook: predictive text on our phones and emails, facial recognition in social media tagging, voice assistants like Siri, and much more.
No drug affects two people in exactly the same way. When it comes to prescribing medicine, an ideal situation would allow a physician to calculate an individual dosage for each patient based on numerous factors, from environmental to genetic, to ultimately provide the best course of treatment. AI has worked its way into medicine to the point that care like this is becoming available. Instead of a one-size-fits-all approach, precision medicine allows patient-specific treatment that is faster, cheaper, and more accurate. Since the start of precision medicine in the 1990s, we’ve discovered a range of technologies and medicines that provide optimal results. The famous Human Genome Project took a 13-year global effort to sequence the three billion base pairs of the first human genome (U.S. Department of Energy, 2019). Now there are large databases of gene sequences, protein sequences, nucleotides, and more on the National Center for Biotechnology Information (NCBI) website, searchable at scale with AI-based algorithms.
Such extensive databases, automated machines, and testing tools allow faster, more precise investigation, yielding scientific discoveries about generational health problems and anomalies of the human body. Geneticist Dr. Steve Horvath developed the epigenetic clock, an algorithm that can estimate your biological age and even predict mortality risk (Horvath, 2013). Since then, the algorithm has continued to improve in accuracy. Stem cell therapy offers personalized treatment for tissue damage secondary to surgery, toxins, trauma, aging, and disease; it has helped professional athletes recover from injuries such as ACL tears, including basketball player Kobe Bryant, NFL player Adrian Clayborn, golfer Tiger Woods, and many more (BioInformant, 2019). New medical devices like automated insulin pumps and diagnostic tools like Sepsis Watch have been created to assist patients.
We have spent the last few decades growing exponentially in the field of biotechnology. We are now in the sixth wave of the technological revolution, which applies AI and precision medicine along the path of biomimicry. Biomimicry, or biomimetics, uses nature as a model; its purpose is to solve complex biological problems through a technology-oriented approach. The umbrella of precision medicine has allowed us to improve health care for patients, and the use of AI within it has enabled astonishing new discoveries and solved many biological problems. While biomimicry is still coming into play, and there are legitimate concerns and critiques of AI in precision medicine, this new wave will continue to build on our successes in patient-oriented care and continue to advance medicine.
"Trust Your Gut!": The Gut-Brain Connection
Communication between the gastrointestinal tract, most commonly called the gut, and the human brain has fascinated scientists for centuries. The direct effect the brain has on the gut, and vice versa, was only established less than a century ago, around the 1960s. The connection between the two is called the gut-brain axis; it carries information through the vagus nerve, tethering the food we eat to how we feel.
How does this work? The food we eat is broken down by our digestive system, beginning with mechanical and chemical breakdown from chewing and saliva, followed by chemical breakdown by the acids in our stomach. Once food particles enter the small intestine, they communicate with specialized cells known as gut sensors, which relay messages to the vagus nerve. The message is shaped by how these cells sense and react to their environment through chemical, mechanical, and thermal stimuli.
Now, that may have been a mouthful to read and understand. What it basically means is that nutrients, and even bacterial byproducts from our gut bacteria, send signals along the nerve that carries information to the brain, influencing the way the brain functions. So think of it as good food sending good messages that positively affect the brain, and bad food negatively affecting it. If you agitate the bacteria in your gut, the resulting environment relays information to the brain that degrades brain function. Since the brain is the control center of the entire body, how we feel throughout the body shapes our feelings and moods. A few obvious bowel conditions that may change mood are Irritable Bowel Syndrome (IBS), constipation, diarrhea, and bloating.
Have you ever felt that “gut-wrenching” feeling? Has an event made you “feel nauseous,” or has someone given you “butterflies”? Just as the gut affects the brain, the brain can affect the gut; those feelings are linked through the gut-brain connection. In psychology, certain physical or social stressors create physical responses that affect gut function, which can cause inflammation or increase susceptibility to infection.
To keep your GI tract healthy and avoid diminished brain function and bad moods, there are a few things you can do. Research has shown that high levels of stress take a toll on your body, so lowering your stress lets your body function more smoothly, helps you relax, and leaves you in a better overall mood. Of course, that is easier said than done. Insufficient sleep also impacts the brain, so aim for seven to eight hours of sleep every night. A few other methods are to eat slowly and stay hydrated, which support full digestion and absorption of nutrients, and to take probiotics, which supply “good bacteria” that improve the environment in your gut. A healthy lifestyle is crucial to the way we feel, so pay attention to what you put in your body, how it reacts to moments in your life, and ways to keep improving your alertness and mood.
Pain Assessment: The WILDA process
A patient’s knowledge and beliefs about pain can play a role in how they perceive it and how it affects them. Many people are reluctant to go to the doctor, some to save money, others because they see it as a hassle; they tell themselves “the pain will go away eventually.” Others run to their doctors at the first inkling of a strange feeling. When you do visit your doctor’s office, you might notice that the first protocol is to ask questions to decipher and diagnose the issue. When it comes to pain, a pain assessment is crucial for managing it and finding the best way to treat it. Pain is subjective and can be very broad, but the WILDA approach provides a concise template for assessment. It focuses on words for describing the pain, its intensity, location, duration, and aggravating (or alleviating) factors.
Pain is defined by the International Association for the Study of Pain as an unpleasant sensory and emotional experience associated with tissue damage. It is a part of life and can happen to anyone at any age. There are two types of pain. The first is acute pain, which can last anywhere from hours to weeks; it is caused by tissue damage, inflammation, surgical recovery, or temporary disease. Chronic pain, on the other hand, intensifies over time and can last for months or even a lifetime. There are even types of pain that are side effects of diseases like cancer, arthritis, fibromyalgia, and diabetes, or of unresolved injuries.
The physician usually starts by asking you to tell them about your pain. This begins the WILDA process as you describe it in your own words. The doctor then asks how much it hurts, where it hurts, how long it has been hurting, and whether anything intensifies or alleviates the pain. They may also ask how it feels, to gather more description. Some common pains patients describe are neuropathic pain, which feels like a burning or shooting pain; somatic pain, which is an achy, throbbing pain; and visceral pain, which feels like pressure or cramping.
Doctors also use a pain scale from 0 (no pain) to 10 (severe pain) to describe intensity. For children, they tend to use a visual scale like the Wong-Baker FACES rating scale, which pairs each number with a facial expression to help children identify their pain. Doctors may also ask about the intensity of pain over a 24-hour period, when it hurts the most and the least; if the pain continues, patients are provided with pain medication.
There are many approaches to alleviating pain. Doctors will ask whether you have noticed how or when the pain improves or worsens, and from there decide what approach to take. There are pharmaceutical approaches, such as drugs like Advil, and non-pharmaceutical approaches, such as massage or physical therapy. The doctor will also ask about related issues like depression, constipation, insomnia, or nausea, which can be contributing factors in intense pain.
Whether or not you are familiar with medical terminology, the WILDA approach can be used at home as a checklist to capture details about your pain you may have overlooked, potentially saving time when you consult your doctor.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1317046/
Color Therapy & Mood
Ever wonder why you automatically associate a color with a certain feeling just by looking at it? Colors really do give off a certain “feeling”: they are associated with different mental, physical, and emotional effects. Red, for example, can increase heart rate and adrenaline. While some colors have been psychologically shown to influence affect, others may be subjective, open to interpretation and perception across different populations and cultures.
There are three categories of colors: primary, secondary, and tertiary. Primary colors, red, blue, and yellow, are basic hues that cannot be created by mixing other colors; all other colors are created from them. Secondary colors are mixtures of two primary colors, such as green, orange, and violet. Tertiary colors are mixtures of a primary and a secondary color, sometimes yielding two-word names like blue-green.
Color symbolism is the use of color to represent something specific to a particular culture or society. While there is a general meaning behind every color, some colors can mean something entirely different from that general meaning depending on the culture. For example, white gives a sense of purity, innocence, cleanliness, neutrality, and space. However, in some cultures, such as in Hinduism, white also signifies mourning and is typically worn during the cremation and mourning period for a loved one. Black conveys power, authority, strength, and intelligence, but it can also mean evil or death and is likewise worn at funerals. Yellow provides positive energy, representing happiness and optimism, though it can also represent hunger and frustration. Green is a natural color symbolizing growth, prosperity, good health, fertility, and harmony, but it can also represent envy. Grey is a neutral color suggesting practicality and timelessness. Red gives off many emotions: it represents love and romance, but also energy, excitement, intensity, and agitation. Orange gives off an energetic feeling of excitement, warmth, change, and prosperity. Blue gives off a calm and serene feeling and usually represents cold, wisdom, truth, and focus.
In color therapy, colors are associated with a person’s emotions. It is a holistic healing method that uses light and color to alter mood or health, usually for people with brain disorders or emotional troubles. For example, since blue has a calming effect, looking at it is thought to help reduce blood pressure, the opposite of red. Practitioners hold that our auras are represented by different colors and intensities, and that introducing colors into our environment can help with cleansing and balancing; it is the specific frequency and vibration of each color that is believed to affect our bodily energies. We know that light enters through our eyes to create images of our physical environment, but it is also absorbed through our skin. Absorption of certain wavelengths of light is thought to activate hormones or enzymes in the body that make us feel a certain way. For example, shades of red are said to give off a loving, sexual vibration that may stimulate the nervous system and result in an increased state of arousal.
The psychological effects of colors fall into two categories: warm colors, which spark emotions ranging from comfort to hostility, and cool colors, which spark emotions ranging from calmness to sadness. Some companies even factor color schemes into their interior decoration to give off a sense that will make people want their product. For example, McDonald’s uses yellow because yellow is thought to increase hunger, encouraging people to buy food.
So the next time you decide to repaint your home, consider the mood you want to create. If you want to blend two moods, you can use secondary colors: purple, a mix of red and blue, can strike a steady balance between stimulation and serenity. Intensity also matters. A light purple, softened with white, can create a more peaceful surrounding, reducing the tension that mixing two prominent primary colors can create.
Straining your Eyes
Growing up, we heard our parents tell us not to watch TV for too long because it’s “bad” for our eyes. But what does that mean? Does prolonged screen viewing damage our eyes, or does it have no effect at all? In this generation, technology has become an ever-larger part of everyday life, making us more reliant on it. In fact, 97% of classrooms in the United States have at least one computer for children to use, and smart devices are increasingly handed to kids younger than five. As screen time rises, parents worry about the harmful effects of increased exposure, and indeed, extensive electronic use can cause eye strain from looking at a screen for too long.
Eye strain is a repetitive strain injury caused by insufficient rest for the eye muscles. Glare from light shining into your eyes and a poorly positioned computer can also lead to muscle fatigue. Dry Eye Syndrome, a condition in which a person’s eyes do not produce enough tears to stay lubricated, is often mistaken for an effect of screen exposure. Tears are necessary to maintain a healthy front surface of the eye and clear vision. Those with Dry Eye Syndrome tend to experience burning, irritation, sensitivity to light, redness, watery eyes, and blurry vision. Common causes of the syndrome include aging and medical problems like diabetes, rheumatoid arthritis, thyroid disorders, and vitamin A deficiency. Studies have shown that our eyes blink on average about 12 times per minute. So the dry eyes, irritation, tears, and other symptoms that resemble Dry Eye Syndrome do not indicate the syndrome itself; rather, we simply forget to blink enough while looking at a screen. Similar effects can also develop when straining your eyes to read a book.
Today, studies have shown that increased screen exposure at young ages results in a higher frequency of myopia, or nearsightedness, due to children’s habit of holding electronics very close to the face and a lack of outdoor activities that would exercise both short- and long-distance vision.
The most important thing in alleviating eye strain is to rest your eyes. Deliberately blink frequently when you are at your computer for extended periods so that the tears lubricating your eyes do not evaporate. Positioning your screen correctly, about 18 to 30 inches from your face and tilted 10 to 15 degrees below eye level, can help minimize eye strain, and lowering glare reduces excessive use of the eye muscles. You can also use the “20-20-20 rule” to rest those muscles: every 20 minutes, stop looking at your screen and focus on something 20 feet away for 20 seconds. Closing your eyes for a few minutes every 30 minutes provides similar relief.
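For readers who live at a keyboard anyway, the 20-20-20 rule can be sketched as a tiny reminder script. The interval lengths are parameters so the loop can be shortened for testing; the defaults match the rule described above.

```python
# A minimal 20-20-20 reminder sketch: after every work interval, prompt
# the user to look at something 20 feet away for 20 seconds.
import time

def eye_break_loop(cycles: int, work: float = 20 * 60, rest: float = 20) -> int:
    """Run `cycles` rounds of screen time followed by an eye break.

    Returns the number of completed break cycles."""
    completed = 0
    for _ in range(cycles):
        time.sleep(work)   # 20 minutes of screen time by default
        print("Look at something 20 feet away for 20 seconds.")
        time.sleep(rest)   # the 20-second distance-gazing break
        print("Back to work.")
        completed += 1
    return completed
```

Running `eye_break_loop(cycles=8)` would cover roughly a full workday of reminders.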
For young children, the American Academy of Pediatrics (AAP) has revised its recommendations for screen time. It recommends little to no screen exposure for kids 18 months or younger. For children between 18 months and 2 years old, screen time should be limited, supervised, and educational, since children learn at an accelerated rate in their first few years. For children ages 2 to 5, screen time should be limited to one hour a day, and consistent limits should remain in place for children 6 and older. It is crucial that electronics do not crowd out a child’s everyday activities.
Can we fix a broken heart?
What happens to your body when you fall in love, and what happens when you lose that love? Humans are a deeply social species. We are most gratified when we feel we belong in our environment and with those around us, so we form relationships. Our motivation to maintain stable, meaningful social bonds is a pattern rooted deep in our evolutionary history.
Love in the brain was examined with functional magnetic resonance imaging, or fMRI, in 2003, mapping the neural activity of people looking at a loved one. The scans showed vibrant fireworks of green, yellow, and blue against grey matter, confirming that love is accompanied by an influx of dopamine. Dopamine is one of the main neurotransmitters responsible for feelings of pleasure and satisfaction, a kind of reward system. Love isn’t the only thing that triggers a dopamine surge: nicotine and cocaine follow the same pattern of increased dopamine activity in the brain to produce a pleasurable feeling. The more dopamine released, the better you feel, the more you want it, and the easier it is to become addicted. From these parallels, you could say that love is a drug.
So what happens when that love is taken away? If you compare it to taking a drug away from an addict, the result is withdrawal, and the severity of the withdrawal depends on how addicted they were. In heartbreak, our body and mind both hurt. The brain regions involved in anticipating pain and regulating negative emotions include the right anterior insula, which is involved in emotional awareness, and the superior frontal gyrus, which contributes to higher cognitive processing. In addition, in response to physical pain, the brain activates the anterior cingulate cortex (ACC) as an alarm for distress.
When in the mire of heartbreak, chances are you feel pain somewhere in your body: in your heart, your stomach, or maybe even the palm of your hand. The pain can be temporary, or it can be chronic, depleting you and hanging over you like a crushing weight. If an fMRI of a heartbroken patient feeling pain and anger were taken, it would show activity in these regions, along with an increase in the hormone progesterone. Progesterone has been linked to anger and anxiety, producing an overall depressive effect. In the panic and denial of losing a loved one, we tend to show signs of lost emotional control for weeks or months after the initial heartbreak. This may result in ill-advised phone calls, letters, pleading for reconciliation, crying sessions, uncontrollable drinking, or dramatic encounters in the wake of passion.
If it hurts so much and makes us do crazy things, how can a broken heart be fixed? Unfortunately, there is no physical cure for heartbreak yet; there is no pill for it. The closest we have come is recent research suggesting that acetaminophen, the main ingredient in Tylenol, can significantly lower activity in the brain’s ACC, which corresponds to a lower sense of distress.
Although this is not a solid answer, other studies have shown that sensitive social support is one of the greatest sources of relief for emotional distress. Hanging out with friends or keeping yourself busy with other things helps the healing process go more smoothly. As much as we’d like to grow a fresh heart every time ours is broken, that is not yet feasible. A heartbreak can distort your sense of self, and although it is a feeling most of us can relate to, it is still an active topic of research. On the bright side, just as the nerves around a broken bone grow, change, and form new connections over time as it heals, your heart also heals.
DNA for Data Storage: Storing all the World’s Information
When you want to store away physical objects for the future, the attic tends to be the go-to place. At some point, though, it becomes cramped, crowded, and disorganized. What can be even messier than your attic is your computer and every other device with a storage component. A one-page paper is small and easy to manage, but imagine all the files, documents, pictures, videos, and applications that pile up until your computer’s memory is full. Then the computer starts running slow, and you need a USB drive, an external disk, or an online service like the cloud to back everything up. But with approximately 7.7 billion people currently on Earth, and nearly two centuries since the first computer was built, how are we supposed to manage ALL of it?
Scientists have proposed DNA as a potential medium for long-term storage. Its density, ease of replication, and stability make it a highly desirable candidate for easier storage and retrieval. Current archival storage uses magnetic tape to hold zettabytes of data, but with the enormous amount of data produced every day, the current infrastructure is expected to consume all the world’s microchip-grade silicon by 2040, so it does not look like a sustainable method. Researchers can instead use the DNA bases adenine (A), thymine (T), cytosine (C), and guanine (G) as an alphabet for encoding information: mapping binary digits, 0s and 1s, onto nucleotide sequences simplifies the computational language of the writing and reading process.
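The mapping just described can be sketched in a few lines. This is a deliberately minimal scheme, packing two bits into each base; real encoding schemes add error-correcting codes and avoid long runs of the same base, which this illustration skips.

```python
# Minimal binary-to-DNA sketch: two bits per nucleotide, no error correction.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn raw bytes into a strand of A/C/G/T."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a strand."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Hi")
print(strand)                   # "CAGACGGC": 8 bases for 2 bytes
assert decode(strand) == b"Hi"  # the mapping round-trips losslessly
```

Since each base carries two bits, a byte always becomes exactly four bases, which is the ratio used in the cost estimates below.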
Encoding data in DNA initially started half-jokingly in 2011, but it was soon seen as a genuine option for long-term archiving. Shakespeare’s sonnets, snippets of Martin Luther King’s “I Have a Dream” speech, and even parts of Beethoven’s music have been successfully encoded into strands of DNA. Unfortunately, the biggest worry is error: DNA synthesis and sequencing tend to introduce roughly one mistake per 100 bases, so the desired file may not be retrievable without damage. In cells, DNA polymerases proofread and repair DNA sequences, so a mathematical scheme, an error-correcting code, is needed to perform the same function digitally. The economics of writing DNA also remain problematic: DNA-synthesis companies charge $0.07 to $0.09 per base, which means a minute of stereo audio can cost on the order of $100,000 to store. At such prices, a more cost-effective synthesis process needs to be found.
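To get a rough sense of those economics, here is a back-of-the-envelope calculation using the per-base prices quoted above and assuming a simple two-bits-per-base encoding with no error-correction overhead (real schemes add substantial redundancy, pushing costs higher).

```python
# Back-of-the-envelope DNA synthesis cost at ~$0.07-0.09 per base,
# assuming 2 bits stored per base and no redundancy.
def synthesis_cost(n_bytes: int, price_per_base: float = 0.08) -> float:
    """Dollar cost to synthesize n_bytes of data into DNA."""
    bases = n_bytes * 8 / 2  # 8 bits per byte, 2 bits per base
    return bases * price_per_base

one_megabyte = 1_000_000
print(f"${synthesis_cost(one_megabyte):,.0f} per MB")  # roughly $320,000
```

Even a single megabyte lands in the hundreds of thousands of dollars, which is why the article calls the economics of writing DNA the main bottleneck.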
Based on bacterial genetics, digital DNA storage may one day rival or exceed conventional storage technology. The read-write speed of a hard disk is between 3,000 and 5,000 microseconds per bit, with a retention span of just over 10 years, using about 0.04 watts per gigabyte. Bacterial DNA, in contrast, has a theoretical read-write speed of less than 100 microseconds per bit, a retention period of over 100 years, and power usage below 10^-11 watts per gigabyte. In short, even though practical reading and writing of DNA is slow today, DNA storage retains data 10 times longer than a regular hard disk, uses exponentially less energy, and offers roughly 10^6 times greater storage density. At that density, about 1 kg of DNA could hold all the world’s information. Once the design succeeds and the economics of its production are resolved, we will be able to fit the internet’s information, books, and more, terabytes upon terabytes, into something as small as a strand of hair.
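The comparison above works out as a quick calculation. The figures are the ones quoted in the text (hard disk versus bacterial DNA) and should be treated as rough, order-of-magnitude estimates rather than measured benchmarks.

```python
# Quick arithmetic on the hard-disk vs. DNA figures quoted in the text.
hard_disk = {"retention_years": 10,  "watts_per_gb": 0.04}
dna       = {"retention_years": 100, "watts_per_gb": 1e-11}

retention_ratio = dna["retention_years"] / hard_disk["retention_years"]
energy_ratio = hard_disk["watts_per_gb"] / dna["watts_per_gb"]

print(f"DNA retains data ~{retention_ratio:.0f}x longer than a hard disk")
print(f"and uses ~{energy_ratio:.0e}x less energy per gigabyte")
```

The energy gap (around nine orders of magnitude on these numbers) is what makes the "1 kg of DNA for the world's information" claim plausible in principle, even while read-write economics lag behind.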