What happened to the Soviet Venera probes sent to Venus?



When we speak about the space race between the U.S. and the Soviet Union in the second half of the 20th century, we often focus on the moon missions. There were, however, various other missions during this time with many different objectives. One of these was the Venera programme, a series of probes developed by the Soviet Union between 1961 and 1984 to better understand our neighbouring planet Venus.



Launched in 1961, Venera 1 lost radio contact before it flew by Venus. Venera 2 failed to send back any important data, though it did fly by Venus at a distance of 24,000 km in February 1966. Venera 3, too, lost communication before atmospheric entry, but on March 1, 1966, it became the first human-made object to reach the surface of another planet.



The planned mission included landing on the Venusian surface and studying the temperature, pressure and composition of the Venusian atmosphere; to this end, Venera 3 carried a landing capsule 0.9 m in diameter and weighing 310 kg. The atmosphere was to be studied during a descent by parachute.



Positive start



Venera 3 was launched on November 16, 1965, just four days after the successful launch of Venera 2. Things went well for Venera 3: ground controllers successfully performed a mid-course correction on December 26, 1965 during the outbound trajectory, and conducted multiple communication sessions that returned valuable information.



Among these were data obtained from a modulated charged-particle trap. For nearly 50 days from the date of launch, Venera 3 was thus able to give an insight into the energy spectra of solar wind ion streams out beyond the magnetosphere of our Earth.



A failure and a first



Just before Venera 3 was to make its atmospheric entry at Venus, on February 16, 1966, it lost all contact with scientists on Earth. Despite the communications failure, the lander was released automatically by the spacecraft.



At 06:56:26 UT (universal time) on March 1, 1966, Venera 3’s probe crash-landed on the surface of Venus, just four minutes earlier than planned. Having lost contact, it was in no position to relay back any information, but it was the first time a human-made object had struck the surface of a planet other than our own.



Success follows



Investigations revealed that Venera 2 and 3 suffered similar failures owing to overheating of several internal components and solar panels. Venera 3’s impact point was on the night side of Venus, and the site was estimated to lie between 20 degrees north and 30 degrees south latitude, and 60 to 80 degrees east longitude.



Venera 3 thus tasted success in what was largely a failure, and it paved the way for several more successes. Venera 4 became the first probe to measure the atmosphere of another planet, Venera 7 the first to achieve a soft touchdown and transmit information from another planet, and Venera 13 and 14 returned colour photos of the Venusian surface within days of each other. Venera 13, in fact, transmitted its photos on March 1, 1982, exactly 16 years after Venera 3 had struck Venus.



 



Picture Credit : Google


When was the moon Miranda discovered?



Either now, or when you were younger, you would have surely played with jigsaw puzzles. But have you ever tried to piece together parts from different puzzles and see what you can end up with? What if the same thing actually happened on a celestial scale? The result probably would look something like Miranda.



One of Uranus’ five major moons, Miranda is the innermost and smallest among them. It was discovered by Gerard P. Kuiper on February 16, 1948 in telescopic photos of the Uranian system. Kuiper worked at the McDonald Observatory in western Texas, and the photos were obtained using the observatory’s 82-inch reflector (now named the Otto Struve Telescope), operated by the University of Texas at Austin.



Shakespeare connect



Within weeks of its discovery, on March 1, 1948, Miranda’s motion around Uranus was confirmed. As Uranus’ previously known moons Ariel and Umbriel had been discovered in 1851, this made Miranda the first satellite of Uranus to be found in nearly 100 years.



Like Uranus’ other major moons Oberon, Titania, Ariel and Umbriel, Miranda too takes its name from the works of English playwright William Shakespeare. Miranda was named for the daughter of Prospero in Shakespeare’s play The Tempest.



At about one-seventh the size of our Earth’s moon, Miranda is among the smallest objects in the Solar System to have achieved hydrostatic equilibrium. It takes about 1.4 days (roughly 34 hours) to complete an orbit around Uranus, and since its rotation period is the same, it is tidally locked with Uranus and hence has the same side facing the planet at all times.
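The ~1.4-day figure can be sanity-checked with Kepler’s third law. Here is a minimal Python sketch; the gravitational parameter of Uranus and Miranda’s mean orbital radius below are standard reference values assumed for illustration, not figures stated in this article:

```python
import math

# Assumed reference values (not from the article):
GM_URANUS = 5.794e15       # gravitational parameter of Uranus, m^3/s^2
SEMI_MAJOR_AXIS = 1.299e8  # Miranda's mean distance from Uranus, m

def orbital_period_days(a_m: float, gm: float) -> float:
    """Orbital period in days for a circular orbit of radius a_m (Kepler III)."""
    period_s = 2 * math.pi * math.sqrt(a_m**3 / gm)
    return period_s / 86400  # seconds per day

print(round(orbital_period_days(SEMI_MAJOR_AXIS, GM_URANUS), 2))  # about 1.41 days
```

The result, about 1.41 days, agrees with the 1.4-day orbit described above.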



Five features



What makes Miranda mysterious, however, is the fact that it has one of the weirdest and most varied landscapes among all extraterrestrial bodies. Scientists agree upon at least five types of geological features on Miranda. These include craters, coronae (oval-shaped features), regiones (areas strongly differentiated in colour or albedo), rupes (scarps or canyons) and sulci (complex parallel grooved terrain).



There are younger, lightly cratered regions and older, heavily cratered regions on Miranda. There are three large coronae in the southern hemisphere, which are all but unique among known objects in the Solar System. These racetrack-like grooved structures are named Arden, Elsinore and Inverness, all locations in Shakespeare’s plays.



Largest cliff in Solar System



The largest known cliff in the Solar System is on Miranda and is known as Verona Rupes, named after the setting of Shakespeare’s Romeo and Juliet. With its face estimated to be 20 km high, this rupes is about 12 times as deep as the Grand Canyon in the U.S.
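Miranda’s gravity is so weak that even a fall from Verona Rupes would be remarkably slow. A rough Python estimate, assuming a surface gravity of about 0.079 m/s² (a reference value, not from this article) and ignoring any slope or drag:

```python
import math

G_MIRANDA = 0.079      # assumed surface gravity of Miranda, m/s^2
CLIFF_HEIGHT = 20_000  # cliff height from the article, m

def fall_time_minutes(h: float, g: float) -> float:
    """Time in minutes to free-fall a height h under constant gravity g (no drag)."""
    return math.sqrt(2 * h / g) / 60

print(round(fall_time_minutes(CLIFF_HEIGHT, G_MIRANDA), 1))  # roughly 12 minutes
```

Under these assumptions, the plunge would last nearly 12 minutes, compared with a little over a minute for the same drop on Earth.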



As Miranda is almost invisible to most amateur telescopes, almost everything we know about it is through the Voyager 2 mission. The only flyby of the Uranian system so far was achieved by Voyager 2 in 1986, providing us with a sneak peek of Miranda’s geology and geography.



Because only the southern hemisphere of Miranda faced the sun during Voyager 2’s flyby while the northern hemisphere was in darkness, only the southern hemisphere has been studied to some extent. Theories have been proposed and discussed as to the reasons for Miranda’s varied geological features. But these mysteries will be solved only with more information, and that might well require further missions to Uranus and its system.



 





Who invented the diesel engine in 1892?



What all have you seen during a visit to a fuel station? You would have seen motorbikes and cars, and maybe even bigger vehicles on some occasions. You would have seen people attending to these vehicles, filling them up with the desired fuel. If you had noticed closely, you would have observed that the fuel station mainly provides two kinds of fuels – petrol and diesel.



While all the two-wheelers and some four-wheelers get their tanks filled up with petrol, other four-wheelers and certain bigger vehicles get their tanks filled with diesel. The type of fuel used by the vehicle is determined by the engine that it houses. While those with petrol engines use petrol, those with diesel engines use diesel as their fuel.



The “diesel” in these diesel engines comes from Rudolf Diesel, a German inventor and mechanical engineer. Born in Paris in 1858, Diesel decided on a career in engineering at the age of 14. He went to the Munich Technical University (Polytechnic Institute), and by the time he completed his studies there in 1880, he had received the highest grades the university had ever given in an examination since it was founded.



Inspired by Linde



Apart from his brilliant record as a student, Diesel was also drawn towards the thermodynamics lectures of German refrigeration engineer Carl von Linde during his time at Munich. This meant that Diesel not only went to work in the Linde refrigeration machine factory after his studies, but was also inspired to develop a new engine with increased thermal efficiency.



After a year of traineeship, Diesel was hired in Linde’s ice factory in Paris in 1881. By the end of the year, Diesel received his first patent – regarding the manufacture of transparent ice. As the years passed by, Diesel started devoting more time to his self-imposed task of developing a more efficient internal combustion engine.



By 1890, the year he moved to Berlin for a new post with the Linde firm, Diesel had conceived his idea for the engine. He filed a patent for his concept of a “new, rational heat engine” in 1892, and in the following year, on February 23, 1893, he was granted patent DRP 67207 “on a principle of operation and construction for internal combustion engines”.



No external ignition



As opposed to the spark-ignition engine, which requires an externally applied spark to ignite a mixture of air and fuel, Diesel’s compression-ignition engine relied on compressing air alone. Ignition was to be achieved by introducing the fuel into a cylinder of highly compressed air, which reaches high pressures and hence extreme heat.
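The heating that Diesel relied on can be estimated with the ideal adiabatic relation T2 = T1 · r^(γ−1). A small Python sketch; the compression ratio and heat capacity ratio below are illustrative modern values, not figures from Diesel’s own work:

```python
# Illustrative assumptions (not Diesel's figures):
T_INTAKE = 300.0        # intake air temperature, K (about 27 deg C)
COMPRESSION_RATIO = 20  # typical for a modern diesel engine
GAMMA = 1.4             # heat capacity ratio of air

def compressed_temp_k(t1: float, r: float, gamma: float) -> float:
    """Ideal adiabatic end-of-compression temperature: T2 = T1 * r**(gamma - 1)."""
    return t1 * r ** (gamma - 1)

t2 = compressed_temp_k(T_INTAKE, COMPRESSION_RATIO, GAMMA)
print(round(t2))  # about 994 K, far above the autoignition temperature of diesel fuel
```

Even this idealised estimate shows why compression alone can ignite the fuel: the air ends up several times hotter than the fuel needs to self-ignite.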



Having built the first prototype of his engine in 1897, Diesel kept improving it over the years. Even though the engines he built never quite hit the efficiency he had predicted through his theoretical calculations, they were still far better than their peers. As a result, by 1912 there were over 70,000 diesel engines at work, mainly in factories and generators.



Death remains a mystery



On September 29, 1913, Diesel disappeared from the steamship Dresden while travelling from Antwerp, Belgium to Harwich, England. A body spotted floating in the water on October 10 was identified as Diesel’s. Even though Diesel’s death continues to be a mystery, it was officially judged a suicide. Despite having made a lot of money from his engines, Diesel was facing financial ruin at the time of this voyage and was nearly broke. Conspiracy theories, however, suggest that Diesel’s death could well have been a murder.



In the year of his death, Diesel wrote: “I am firmly convinced that the automobile engine will come, and then I consider my life’s work complete.” He didn’t live to see it, but almost all vehicular diesel engines, to this day, continue to follow the basic principles set forth by Diesel.



 





Which country launched the space station Skylab in 1973?



If you were to think of a laboratory in space, you would probably think about the International Space Station (ISS). A habitable artificial satellite in low Earth orbit, the ISS is a joint project by a number of space agencies and its usage is governed by treaties and agreements. The ISS, however, isn’t the first space station built by humans. There were precursors to it, one of which is the Skylab.



NASA had the idea for a space station for years before the Skylab was eventually launched. It was believed to be the dream of American-German space architect and Apollo rocket engineer Wernher von Braun, who had long wanted to build a human outpost that would let people live and work in space for extended periods of time.



Use unused hardware



With the space race and the mission to the moon dominating the 1960s, the idea of a space station picked up pace only after that. NASA began the Apollo Applications Program to utilise unused hardware from its missions to the moon, and one of the ideas pursued was that of a space station.



The design evolved over the years and Skylab consisted of four components. The Orbital Workshop (OWS) was the main compartment for living, working and sleeping; the Airlock Module (AM) allowed astronauts to conduct space-walks; the Multiple Docking Adapter (MDA) provided docking ports for the Apollo crew craft, including a spare for rescue operations; and the Apollo Telescope Mount (ATM) contained telescopes for observations and solar arrays for additional power.



Repair work possible



Launched on May 14, 1973, Skylab faced a crisis right at the start. A micro-meteoroid shield that was to function as a thermal blanket and shelter Skylab from debris accidentally deployed just over a minute after launch and was torn away. Without it, temperatures inside the space station rose to intolerable levels.



While the launch of the first crew was delayed because of this incident, their first task, when they made their way up on May 25, became to salvage the situation. Even though it was no easy task and was definitely frustrating for the crew members involved, they succeeded after a few days. They were able to erect a sun shade, deploy a solar array that had been stuck, and begin their work inside the space station. It showed that space-walking astronauts could fix a badly damaged space station while it was still in orbit.



Three successive three-man crews lived aboard Skylab. While the first, the one that performed the repairs, stayed in space for 28 days, the second and third crews spent 56 and 84 days in orbit respectively.



Superhuman expectations



Skylab’s second crew was so productive in getting its tasks done that it set unrealistic expectations of what could be achieved in space. By the time the third crew returned to Earth on February 8, 1974, they had accomplished a great deal of scientific work, but not before complaining of being overburdened with superhuman expectations.



Even though there were plans for more crews to be sent to Skylab, the third turned out to be the last as well owing to financial constraints and NASA’s other objectives. The three crews had managed to conduct 270 experiments in life sciences, solar astronomy and Earth observations. In fact, over 700 hours were spent observing the sun, resulting in over 1,75,000 solar pictures.



Intense solar activity heating up Earth’s atmosphere meant that Skylab’s orbit decayed faster than NASA’s expectations, forcing them to pursue the only course of action possible. They adjusted the station’s path so that it wouldn’t hit populated areas. Upon re-entering on July 11, 1979, it broke up in the atmosphere and showered debris over the Indian Ocean and Australia.



Skylab provided plenty of learning, especially with respect to the human element in long-haul spaceflights. The focus was placed strongly not only on the nutritional and physical exercise requirements of human beings in space, but also on other mental concerns they might face, including those caused by their workload and the lack of family life. Thanks to Skylab, all these areas are now addressed for astronauts aboard the ISS.



 





HOW DO VACCINATIONS WORK?


In 1796, an English doctor called Edward Jenner (1749-1823) gave the first vaccination. He realized that milkmaids who caught cowpox did not catch the very dangerous disease of smallpox. By injecting the cowpox virus into a child, he was able to vaccinate him against the more serious disease. As the body fights the virus, antibodies are formed in the blood that prevent further infections or infection by some similar viruses. Today, huge vaccination programmes ensure that most children are protected against a range of diseases.



A person may become immune to a specific disease in several ways. For some illnesses, such as measles and chickenpox, having the disease usually leads to lifelong immunity to it. Vaccination is another way to become immune to a disease. Both ways of gaining immunity, either from having an illness or from vaccination, are examples of active immunity. Active immunity results when a person’s immune system works to produce antibodies and activate other immune cells to certain pathogens. If the person encounters that pathogen again, long-lasting immune cells specific to it will already be primed to fight it.



A different type of immunity, called passive immunity, results when a person is given someone else’s antibodies. When these antibodies are introduced into the person’s body, the “loaned” antibodies help prevent or fight certain infectious diseases. The protection offered by passive immunization is short-lived, usually lasting only a few weeks or months. But it helps protect right away.



Infants benefit from passive immunity acquired when their mothers’ antibodies and pathogen-fighting white cells cross the placenta to reach the developing child, especially in the third trimester. A substance called colostrum, which an infant receives during nursing sessions in the first days after birth and before the mother begins producing “true” breast milk, is rich in antibodies and provides protection for the infant. Breast milk, though not as rich in protective components as colostrum, also contains antibodies that pass to the nursing infant. This protection provided by the mother, however, is short-lived. During the first few months of life, maternal antibody levels in the infant fall, and protection fades by about six months of age.
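The fade-out of maternal protection can be pictured as simple exponential decay. A hedged Python sketch, assuming an illustrative half-life of about 30 days for maternal antibodies (a textbook-style figure, not a value from this article):

```python
# Assumed half-life for maternal antibodies in the infant, days (illustrative only):
HALF_LIFE_DAYS = 30.0

def fraction_remaining(days: float, half_life: float = HALF_LIFE_DAYS) -> float:
    """Fraction of the birth antibody level left after `days` of exponential decay."""
    return 0.5 ** (days / half_life)

# Antibody level after 1, 3 and 6 months, as a fraction of the level at birth:
for month in (1, 3, 6):
    print(month, round(fraction_remaining(30 * month), 3))
```

Under this assumption, under 2% of the birth level remains by six months, consistent with the article’s point that maternal protection fades by about that age.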



Passive immunity can be induced artificially when antibodies are given as a medication to a nonimmune individual. These antibodies may come from the pooled and purified blood products of immune people or from non-human immune animals, such as horses. In fact, the earliest antibody-containing preparations used against infectious diseases came from horses, sheep, and rabbits.






HOW ARE NEW DRUGS DEVELOPED?


Research chemists examine different chemicals to find out how they react with other chemicals and with living cells. When a mixture of chemicals is thought to have potential in the treatment of certain conditions, various combinations of the chemicals will be tested to see whether they might be dangerous to living things. Tests on individual cells and on animals are made before human beings are given the new drug. Many people think that drug-testing on animals is wrong, but others feel that this is the best way to make sure that drugs are safe. Trials of the drug, in which some patients are given a placebo (a preparation with no active ingredients), are carried out to assess the drug’s effectiveness. It is usually only after many years of testing and monitoring that the drug is released for use by doctors.



The journey will have begun in a university laboratory where researchers, with grants from the research bodies or the pharmaceutical industry, have undertaken basic research to understand the processes behind a disease, often at a cellular or molecular level. It is through better understanding of disease processes and pathways that targets for new treatments are identified. This might be a gene or protein instrumental to the disease process that a new treatment could interfere with, for example, by blocking an essential receptor.



Once a potential target has been identified, researchers will then search for a molecule or compound that acts on this target. Historically, researchers have looked to natural compounds from plants, fungi or marine animals to provide the basis for these candidate drugs but, increasingly, scientists are using knowledge gained from the study of genetics and proteins to create new molecules using computers. As many as 10,000 compounds may be considered and whittled down to just 10 to 20 that could theoretically interfere with the disease process.



The next stage is to confirm that these molecules have an effect and that they are safe. Before any molecules are given to humans, safety and efficacy tests are conducted using computerised models, cells and animals. Around half of candidates make it through this pre-clinical testing stage and these five to 10 remaining compounds are now ready to be tested in humans for the first time. In the UK, approval by the Medicines and Healthcare products Regulatory Agency (MHRA) is required before any testing in humans can occur. The company will put in a clinical trial application (CTA), which will be reviewed by medical and scientific experts, who will decide whether or not sufficient preliminary research has been conducted to allow testing in humans to go ahead.



Each year sees a couple of dozen new drugs licensed for use, but in their wake there will be tens of thousands of candidate drugs that fell by the wayside. The research and development journey of those new drugs that make it to market will have taken around 12 years and cost around £1.15bn.
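The attrition implied by the figures above can be laid out as a simple funnel. The stage counts in this Python sketch use the article’s numbers, with midpoints chosen where a range is given, so the derived rate is illustrative rather than an industry statistic:

```python
# Funnel built from the article's figures (midpoints where a range is given):
stages = [
    ("compounds screened", 10_000),
    ("candidates after whittling", 15),  # article: 10 to 20
    ("enter human trials", 7),           # article: five to 10 (about half survive)
    ("licensed drug", 1),
]

def survival_rate(funnel):
    """Overall fraction of screened compounds that end up as a licensed drug."""
    return funnel[-1][1] / funnel[0][1]

print(survival_rate(stages))  # 0.0001, i.e. roughly 1 in 10,000
```

The funnel makes the scale of attrition concrete: on these numbers, only about one in every 10,000 screened compounds becomes a licensed drug.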




WHAT CAUSES ILLNESS?


Understanding the cause of an illness can often help a doctor to bring a patient back to good health or to suggest ways to prevent the illness from recurring or affecting other people. Illness may be caused by an accident, which physically affects part of the body, or it may be brought about by tiny organisms such as bacteria and viruses. Antibiotics are used to treat bacterial infections, while antiviral drugs attack viruses. In both cases, some disease-causing organisms are resistant to drug therapy. Occasionally, the cells of the body seem to act in destructive ways for no obvious reason. This is what happens in some forms of cancer. However, researchers are finding new ways to combat disease all the time.



A complex illness combines two or more elements of illness (causal illness, injury illness and blockage illness) arising from a single cause. A complex illness requires a cure for each illness element.



For complex illnesses, the first cure is to address the cause. The second is to heal the damage, and the third to transform the negative attributes that resulted from the illness and from healing. It is possible, and sometimes necessary, to work on elemental cures out of sequence, or at the same time. However, cures can seldom be completed out of sequence, because the prior illness is a cause, and the illness will recur.



The hierarchy of illness is also a hierarchy of life and of health. An illness can exist in a single cell, the simplest life form. A single cell might have an illness with a single cause that produces an injury which heals, but leaves a blockage resulting in congestion.



An illness might exist in a bodily tissue, independent of the cells comprising it. A tissue is a layer of life above individual cells. A tissue might have an illness that is not caused by illnesses of its cells: one that leads to tissue injury, which heals and leaves a tissue blockage, resulting in congestion in the tissue. In the same manner, a limb, an organ, or an organ system might have a simple or compound illness.



An illness can be based in an organ, an organ system, or in the body as a whole. This is the common view of much of today’s medical practice; it is sometimes a useful view, sometimes not so useful. The illness of the body, like that of a cell or a tissue, might begin with a cause, or as an injury or a blockage brought about by an internal or external factor.



An illness might also arise in the mind, the spirit, or even the community aspects of a life entity, from internal or external causes. An illness might damage the mind, the spirit, or the community aspects of the patient, and when healing is not perfect, this results in a negative attribute, leading to congestion and possibly even a new illness.




WHEN WAS ANAESTHESIA FIRST USED?


Anaesthesia prevents pain signals from being received by the brain, so that pain is not felt by the patient. Hundreds of years ago there were few ways to relieve a patient’s pain during surgery. Alcohol might be used, but it was not very effective. It was not until the nineteenth century that anaesthetic drugs began to be widely used. The first operation to be performed under a general anaesthetic was carried out by an American surgeon, Crawford Long, in 1842.



Anaesthesia refers to the practice of blocking the feeling of pain to allow medical and surgical procedures to be undertaken without pain.



 An ancient Italian practice was to cover a patient’s head with a wooden bowl and beat on it repeatedly until the patient lost consciousness. Presumably this method resulted in a number of side-effects the patient would not have found beneficial.



Opium and alcohol were regularly used to produce insensibility, both of which also had a number of negative side effects and neither could dull the pain completely. Few operations were possible and speed was the determinant of a successful surgeon. Patients were often tied or held down and the abdomen, chest and skull were effectively inoperable. Surgery was a last, and extremely painful, resort.



On October 16, 1846, an American dentist, William Morton, proved to the world that ether causes complete insensibility to pain during an operation performed in front of a crowd of doctors and students at the Massachusetts General Hospital. Morton instructed the patient to inhale the ether vapour and, once the patient was suitably sedated, a tumour was removed from his neck. The patient felt no pain. This demonstration transformed medical practice.







WHAT WAS THE EARLIEST OPERATION?


Archaeologists have found skulls, dating from at least 10,000 years ago, that have holes drilled into them. Because bone has begun to grow around the holes, they were clearly made while the person was still alive. It is believed that this technique, called trepanning, was the first operation. It was probably done to relieve headaches or to let out evil spirits that were thought to be trapped inside the patient’s head.



The history of dental and surgical procedures reaches back to the Neolithic and pre-Classical ages. The first evidence of a surgical procedure is that of trephining, or cutting a small hole in the head. This procedure was practiced as early as 3000 BC and continued through the Middle Ages and even into the Renaissance. The initial purpose of trephining in ancient cultures is unknown, although some hypothesize it may have been used to rid the body of spirits. The practice was widespread throughout Europe, Africa, and South America. Evidence of healed skulls suggests some patients survived the procedure. Trephining continued in Ancient Egypt as a method of treating migraines. In South America, ancient Mayans practiced dental surgery by filling cavities with precious stones including jadeite, turquoise, quartz, and hematite, among others. It is supposed that these procedures were for ritual or religious purposes, rather than health or cosmetic reasons.



Ancient Greeks also performed some surgical procedures including setting broken bones, bloodletting, draining lungs of patients with pneumonia, and amputations. The Greeks had new, iron tools at their disposal, yet the risk of infection or death was still high. Hippocrates’ theory of four humors influenced medicine for hundreds of years. He claimed that the humors (black bile, yellow bile, phlegm, and blood which coincided with the elements earth, fire, water, and air, respectively) exist in the body, and bloodletting (or the draining of blood), among other procedures, balanced them. Ancient Roman physician Galen was heavily influenced by the Greeks. He served for three years as doctor to Roman gladiators and as the Emperor’s surgeon, gaining hands-on surgical experience. Romans continued with trephining, amputations, and eye surgery. Beginning in 900 AD, Al-Zahrawi, a famous Islamic surgeon, wrote books focused on orthopedics, military surgery, and ear, nose, and throat surgery, further influencing Islamic and Western medical practitioners.




WHO WAS HIPPOCRATES?


Hippocrates is often described as “the father of modern medicine”. He was a Greek doctor, living in the fifth and fourth centuries BC, who taught that a doctor’s first duty is to his or her patient and that the aim must at all times be to try to do good rather than harm. When they qualify, many modern doctors take the Hippocratic Oath, promising to follow these principles throughout their careers.



Hippocrates was born around 460 BC on the island of Kos, Greece. He became known as the founder of medicine and was regarded as the greatest physician of his time.



He based his medical practice on observations and on the study of the human body. He held the belief that illness had a physical and a rational explanation. He rejected the views of his time that considered illness to be caused by superstitions and by possession of evil spirits and disfavor of the gods.



Hippocrates held the belief that the body must be treated as a whole and not just a series of parts. He accurately described disease symptoms and was the first physician to accurately describe the symptoms of pneumonia, as well as epilepsy in children. He believed in the natural healing process of rest, a good diet, fresh air and cleanliness. He noted that there were individual differences in the severity of disease symptoms and that some individuals were better able to cope with their disease and illness than others. He was also the first physician to hold that thoughts, ideas, and feelings come from the brain and not the heart, as others of his time believed.



Hippocrates traveled throughout Greece practicing his medicine. He founded a medical school on the island of Kos, Greece and began teaching his ideas. He soon developed an Oath of Medical Ethics for physicians to follow. This Oath is taken by physicians today as they begin their medical practice. He died in 377 BC. Today Hippocrates is known as the “Father of Medicine”.



