Why can't we bounce a radar signal off the sun and determine 1 au directly?

 

On April 7, 1959, a three-member team led by Stanford electrical engineer Von R. Eshleman recorded the first distinguishable echo of a radar signal bounced off the sun. A.S. Ganesh tells you more about Eshleman and how his team achieved this success...

When we say "reach for the stars," we generally use the phrase to convey having high or ambitious aims. Some people, however, reach for the stars in the real sense. Stanford electrical engineer Von R. Eshleman was one of them, and the star he reached out for was our sun.

Born into a farming community in Ohio, U.S., on September 17, 1924, Eshleman attended the General Motors Institute of Technology in Flint, Michigan, while still a high school student. Similarly, he had already begun attending Ohio State University even before earning his bachelor's degree in electrical engineering from George Washington University in 1949.

Intrigued by wave science

Before this, he had a stint with the navy during World War II, working as an electrical technician from 1943 to 1946. It was during this period that he was drawn towards wave science. Intrigued by both sonar and radar, Eshleman had the idea that he could bounce radio signals off the surfaces of the sun and the moon in order to study their hidden structures. While his own ship-based experiments of the time weren't successful, they paved the way for his future research.

Having received his master's degree from Stanford in 1950, he went on to earn his Ph.D. there in 1952. He was recruited by Stanford as a research professor, a position he held until 1957, when he was promoted to the teaching faculty as an Assistant Professor (Associate Professor back then). By 1962, he had not only managed to bounce radar off the sun but had also become a full professor at Stanford.

The same war that had planted the idea in Eshleman's mind for bouncing radar off surfaces also saw the rapid development of radar. Bouncing radar off distant surfaces wasn't an idea exclusive to Eshleman. Radar was successfully bounced off the moon in the 1940s itself and the first attempts to bounce radar off Venus were made in the late 1950s, albeit with mixed results.

16-minute round-trip

Eshleman's three-member team, which included Lt. Col. Robert C. Barthle and Dr. Philip Gallagher, achieved success in bouncing radar off the sun on April 7, 1959. The tests, in fact, were run on April 7, 10 and 12, with the signals taking an average of 16 minutes and 32 seconds to cover the roughly 149 million km between the Earth and the sun and back again.
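
That figure squares with a quick back-of-the-envelope check: the round trip is simply twice the Earth-sun distance divided by the speed of light. Here is a minimal sketch in Python, assuming the mean Earth-sun distance of about 149.6 million km (the slightly shorter measured time reflects the actual distance on the test dates):

```python
# Round-trip time for a radar echo off the sun, assuming the mean
# Earth-sun distance (1 au). The measured 16 min 32 s corresponds to
# a slightly shorter actual distance on the test dates.
AU_KM = 149.6e6           # mean Earth-sun distance in km (assumption)
C_KM_S = 299_792.458      # speed of light in km/s

round_trip_s = 2 * AU_KM / C_KM_S
minutes, seconds = divmod(round_trip_s, 60)
print(f"{int(minutes)} min {seconds:.0f} s")  # roughly 16 min 38 s
```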

The researchers needed many months to confirm that they had indeed succeeded and when they finally made their announcement public with a press conference in February 1960, it was with 99.999% certainty.

Coded pulse

Eshleman explained to the gathered media persons that the radar system consisted of an antenna with 5 miles of wire spread out across over 10 acres of land, and a 40,000-watt transmitter.

Every time the test was conducted, a coded pulse was beamed at the sun in 30-second bursts. This was done to enable identification once it returned after bouncing off the sun.

While 40,000 watts were sent out, atmospheric and spatial dissipation meant that only about 100 watts reached the sun. Similar losses during the return journey meant that only a minuscule amount of energy returned, making detection difficult. The task was further complicated by the fact that this small amount of energy was now mixed in with the vast amounts of similar energy that the sun itself radiates at the same and other wavelengths. By spending over six months with some of the best computers of the time, the researchers were able to conclude that the coded pulse they had sent out was indeed present among the radio emissions of the sun.
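
How such a faint, buried signal can be pulled out is worth illustrating. The article does not spell out the team's exact processing, but recovering a known coded pulse from overwhelming noise is classically done by cross-correlating the received signal with the transmitted code. A minimal illustrative sketch, with every signal parameter invented for the example:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the coded pulse: a pseudo-random run of +1/-1 chips.
code = rng.choice([-1.0, 1.0], size=1024)

# Simulated reception: strong solar-like noise, with the echo buried
# at an unknown offset at one-fifth of the noise amplitude.
noise = rng.normal(0.0, 5.0, size=8192)
offset = 3000
received = noise.copy()
received[offset:offset + code.size] += code

# Cross-correlation with the known code spikes where the echo sits,
# even though the echo is invisible in the raw trace.
corr = np.correlate(received, code, mode="valid")
print("detected offset:", int(np.argmax(corr)))  # ~3000
```

The coded pattern is what makes this possible: random solar noise correlates weakly with the code everywhere, while the echo correlates strongly at exactly one offset.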

In 1962, Eshleman, along with Stanford colleagues, founded the Stanford Center for Radar Astronomy to oversee radio experiments. Even though he began his career in radar astronomy, Eshleman is now best remembered for his pioneering work using spacecraft radio signals for precise measurements in planetary exploration. While he briefly served as Deputy Director of the Office of Technology Policy and Space Affairs in the U.S. Department of State, he was most comfortable among academic circles and hence returned to Stanford, where he flourished. Eshleman died in Palo Alto on September 22, 2017, five days after turning 93.

What is the story of the first British atomic bomb?

Like it or not, science and technology see unprecedented growth during dire times. This is probably because funding flows into different branches of science like never before, allowing for progress inconceivable during ordinary times. Just as the COVID-19 pandemic saw a global collective search for vaccines, there have been other times in the past - mostly during wars - when a number of scientific fields received a tremendous boost.

World War II was one such period when scientific progress was at its pinnacle. The ability to split an atom through nuclear fission was discovered in the 1930s. With its ability to release immense power realised, it wasn't long before the race to build a bomb with it was on. The Manhattan Project was born early in the 1940s and we all know what happened in Japan's Hiroshima and Nagasaki.

To retain influence

While the Manhattan Project was led by the U.S., it was done in collaboration with the U.K. along with support from Canada. Following the war, however, the U.S. refused to share atomic information with the U.K. With the objective of avoiding complete dependence on the U.S., and to remain a great power and retain its influence, Britain sought to become a nuclear power.

The prospect was discussed in a secret cabinet committee in October 1946. While Chancellor of the Exchequer Hugh Dalton and President of the Board of Trade Stafford Cripps were opposed to the idea of a British bomb citing the huge costs involved, Secretary of State for Foreign Affairs Ernest Bevin had his way and work went ahead. By the time the bomb was ready, however, Winston Churchill's government had come to power.

Penney at the helm

Led by British mathematician William Penney, who had worked on the world's first atomic bomb in the U.S., the project that went on to become Operation Hurricane began with a secret laboratory tasked with developing the trigger device. With the Soviets managing to successfully explode their first atomic bomb in 1949, Penney's team was under further pressure. Soon enough, the Brits were ready with their bomb.

Early in 1951, the Australian government agreed that the blast could take place at the uninhabited Monte Bello Islands, an archipelago of over 100 islands lying off the coast of north-western Australia. The region was declared a prohibited zone, and ships and aircraft were later warned to stay clear of an area of 23,500 square nautical miles off the coast.

Plym carries the bomb

The troops were mobilised, and the first set of vessels left for their destination in January 1952; six months later, HMS Plym, carrying the bomb, and the fleet flagship HMS Campania made their way there. The radioactive core, which used British and Canadian plutonium, was flown out later and installed in the bomb on Plym very close to the scheduled detonation.

On the morning of October 3, 1952, Britain's first atomic bomb exploded, sending thousands of tonnes of rock, mud, and sea-water blasting into the air. The Plym was instantly vaporised, and the few bits of red-hot metal from the vessel that fell on one of the islands even started a fire.

An eye-witness account of a Reuters correspondent stationed less than 100 miles away mentions a grand flash followed by the appearance of a grey cloud - a zigzag Z-shaped cloud, as opposed to the mushroom cloud we instantly associate with such detonations.

The success of Operation Hurricane resulted in Penney being knighted. Churchill, who was serving as the Prime Minister of the U.K. for a second time, announced to the House of Commons that there had been no casualties and that everything had gone according to plan. While he did congratulate the Labour Party for their role in the whole project, he also did take a dig at them saying that 'as an old parliamentarian I was rather astonished that something well over £100 million could be disbursed without Parliament being made aware of it.'


What is aircraft de-icing?

 

Part of cleaning an airplane is clearing loose chunks of frozen vapour off its surface to ensure safety during flights. In cold regions, aircraft face the risk of ice forming on their surfaces. Ice causes the aircraft's surface to become rough and uneven. This disrupts the smooth air flow and increases drag. If large pieces of ice break up into loose chunks during flight, these can get into the engines or hit the propellers, cause them to malfunction, and spell disaster.

A thorough thaw

De-icing, which ensures that an aircraft can fly safely in such conditions, simply involves the removal of existing ice or snow from a surface. De-icers are usually chemicals that dissolve the ice. Sometimes, infrared rays are also used for de-icing.

Aircraft de-icing is done on the ground before take-off. The plane's surface is sprayed with de-icing fluid so that the engine inlets, wings, and various other sensors are free of condensed precipitation. After this, the aircraft is generally sprayed with anti-icers to prevent the water on the surface from refreezing.

What is AI fiction?

AI fiction is a constantly evolving genre that gives us a peek into the potential upsides and downsides of intelligent machines, whether it is books written by humans with robots and AI as central characters or stories composed entirely by machine learning algorithms. AI fiction never fails to captivate readers and stimulate discussions about what is in store for technology in the future.

Artificial intelligence has long been a popular topic in science fiction. From Asimov's humanoid robots in ‘I, Robot’ to the sentient machines of ‘The Matrix’, AI has been a constant source of fascination and speculation. As AI technology progresses, its portrayal in fiction has become increasingly nuanced, with authors examining the potential upsides and downsides of intelligent machines.

One recent development in AI fiction is the emergence of novels written entirely by artificial intelligence. The first AI-generated novel, ‘1 the Road’, came out in 2018, based on data gathered by a sensor-equipped car on a road trip from New York to New Orleans. It was preceded by an experimental 2016 short story co-written by AI and Japanese researchers, translated as ‘The Day a Computer Writes a Novel’, which nearly won a literary prize. In the same year, Sunspring, an AI-authored screenplay, was placed in the top 10 at a London sci-fi film festival. In the first year of the pandemic, we got Pharmako-AI, a genre-bending philosophical book co-written by K. Allado-McDowell, founder of Google's Artists and Machine Intelligence programme, and the language model GPT-3. These experimental works of fiction represent an intriguing new avenue for AI fiction. With machine learning algorithms capable of generating coherent narratives and dialogue, it is possible that we may soon see a flood of novels, stories, and even movies written entirely by AI.

Science fiction (sci-fi) & AI

For generations, sci-fi has foreseen the pervasive influence of AI in our daily lives. Its representation in mainstream media has played a pivotal role in shaping public opinion towards this technological advancement. Films such as The Terminator and Ex Machina have helped to shape the cultural narrative around AI, with many people viewing intelligent machines as potential threats to human safety and autonomy. At the same time, this type of speculative fiction has also explored the more positive aspects of AI, from the helpful robots of Wall-E to the benevolent supercomputers of 2001: A Space Odyssey. As AI technology continues to evolve, it's likely that all the good, bad, and ugly visions of intelligent machines will continue to be explored in fiction.

How do crystals form?

Rocks are mixtures of different minerals. All minerals are crystals, but not all crystals are minerals. These solid substances are found naturally in the ground. But do we know how they are formed?

Scientifically speaking, the term "crystal" refers to any solid that has an ordered chemical structure. This means that its parts are arranged in a precisely ordered pattern, like bricks in a wall. The "bricks" can be cubes or more complex shapes. I'm an Earth scientist and a teacher, so I spend a lot of time thinking about minerals. These are solid substances that are found naturally in the ground and can't be broken down further into different materials other than their constituent atoms. Rocks are mixtures of different minerals. All minerals are crystals, but not all crystals are minerals.

Most rock shops sell mineral crystals that occur in nature. One is pyrite, which is known as fool's gold because it looks like real gold. Some shops also feature showy, human-made crystals such as bismuth, a natural element that forms crystals when it is melted and cooled.

Why and how crystals form

Crystals grow when molecules that are alike get close to each other and stick together, forming chemical bonds that act like Velcro between atoms. Mineral crystals cannot just start forming spontaneously - they need special conditions and a nucleation site to grow on. A nucleation site can be a rough edge of rock or a speck of dust that a molecule bumps into and sticks to, starting the crystallization chain reaction.

At or near the Earth's surface, many molecules are dissolved in water that flows through or over the ground. If there are enough molecules in the water that are alike, they will separate from the water as solids - a process called precipitation. If they have a nucleation site, they will stick to it and start to form crystals. Rock salt, which is actually a mineral called halite, grows this way. So does another mineral called travertine, which sometimes forms flat ledges in caves and around hot springs, where water causes chemical reactions between the rock and the air.

You can make "salt stalactites" at home by growing salt crystals on a string. In this experiment, the string is the nucleation site. When you dissolve Epsom salts in water and lower a string into it, then leave it for several days, the water will slowly evaporate and leave the Epsom salts behind. As that happens, salt crystals precipitate out of the water and grow on the string.

Many places in the Earth's crust are hot enough for rocks to melt into magma. As that magma cools down, mineral crystals grow from it, just like water freezing into ice cubes. These mineral crystals form at much higher temperatures than salt or travertine precipitating out of water.

What crystals can tell scientists

Earth scientists can learn a lot from different types of crystals. For example, the presence of certain mineral crystals in rocks can reveal the rocks' age. This dating method is called geochronology - literally, measuring the age of materials from the Earth. One of the most valued mineral crystals for geochronologists is zircon, which is so durable that it quite literally stands the test of time. The oldest zircons ever found come from Australia and are about 4.3 billion years old - almost as old as our planet itself. Scientists use the chemical changes recorded within zircons as they grew as a reliable "clock" to figure out how old the rocks containing them are.

Some crystals, including zircon, have growth rings, like the rings of a tree, that form when layers of molecules accumulate as the mineral grows. These rings can tell scientists all kinds of things about the environment in which they grew. For example, changes in pressure, temperature and magma composition can all result in growth rings. Sometimes mineral crystals grow as high pressure and temperatures within the Earth's crust change rocks from one type to another in a process called metamorphism. This process causes the elements and chemical bonds in the rock to rearrange themselves into new crystal structures. Lots of spectacular crystals grow in this way, including garnet, kyanite and staurolite.

Amazing forms

When a mineral precipitates from water or crystallizes from magma, the more space it has to grow, the bigger it can become. There is a cave in Mexico full of giant gypsum crystals, some of which are 40 feet (12 meters) long - the size of telephone poles. Especially showy mineral crystals are also valuable as gemstones for jewellery once they are cut into new shapes and polished. The highest price ever paid for a gemstone was $71.2 million for the CTF Pink Star diamond, which went up for auction in 2017 and sold in less than five minutes. (The author works at the University of Montana.) THE CONVERSATION

What are 3D-printed robotic hands?

 

Researchers have succeeded in printing robotic hands with bones, ligaments and tendons for the first time. Using a new laser-scanning technique, the technology enables the use of slow-curing polymers.

Additive manufacturing, or 3D printing, is the construction of a 3D object from a 3D digital model. The technology behind this has been advancing at a great pace, and the number of materials that can be used has also expanded considerably. Until now, 3D printing was limited to fast-curing plastics. The use of slow-curing plastics has now been made possible thanks to a technology developed by researchers at ETH Zurich and a U.S. start-up, the MIT spin-off Inkbit. This has resulted in successfully 3D printing robotic hands with bones, ligaments and tendons. The researchers from Switzerland and the U.S. have jointly published the technology and its applications in the journal Nature.

Return to original state

In addition to their elastic properties, which enable the creation of delicate structures and parts with cavities as required, the slow-curing thiolene polymers also return to their original state much faster after bending, making them ideal for the likes of ligaments in robotic hands.

The stiffness of thiolenes can also be fine-tuned as per requirements to create soft robots. These soft robots will not only be better suited to working with humans but will also be more adept at handling delicate and fragile goods.

Scanning, not scraping

In 3D printers, objects are typically produced layer by layer. This means that a nozzle deposits a given material in viscous form and a UV lamp then cures each layer immediately. This method requires a device that scrapes off surface irregularities after each curing step.

While this works for fast-curing plastics, it would fail with slow-curing polymers like thiolenes and epoxies as they would merely gum up the scraper. The researchers involved therefore developed a 3D printing technology that took into account the unevenness when printing the next layer, rather than smoothing out uneven layers. They achieved this using a 3D laser scanner that checked each printed layer for irregularities immediately.
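
To make the idea concrete, here is a toy sketch of such a scan-and-compensate loop, with the layer thickness, noise level, and height-map representation all invented for illustration (the method published in Nature is far more involved):

```python
import numpy as np

LAYER_HEIGHT = 0.2  # nominal layer thickness in mm (illustrative value)

def print_part(num_layers: int, width: int = 50) -> np.ndarray:
    """Deposit layers, correcting each pass using a scan of the last one."""
    surface = np.zeros(width)  # measured height profile across the part
    rng = np.random.default_rng(0)
    for _ in range(num_layers):
        target = surface.max() + LAYER_HEIGHT  # where this layer should end up
        surface += target - surface            # deposit more where the scan ran low
        # A real 3D laser scanner would measure the cured layer here; we
        # fake the unevenness of a slow-curing polymer with random noise.
        surface += rng.normal(0.0, 0.01, size=width)
    return surface

part = print_part(100)
print(f"height spread after 100 layers: {part.max() - part.min():.3f} mm")
```

Because each pass measures and absorbs the previous layer's error, the spread stays at the scale of a single layer's noise instead of accumulating over a hundred layers, which is the whole point of scanning rather than scraping.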

This advancement in 3D printing technology provides much-needed advantages, as the resulting objects not only have better elastic properties but are also more robust and durable. Combining soft, elastic, and rigid materials also becomes much simpler with this technology.

What are some examples of things written about in science fiction that became real?

Battle tanks, debit/credit cards, headphones, bionic parts... many of the machines and gadgets we use today were predicted by sci-fi authors long ago. Let's look at a few of them that have become a reality.

Debit/Credit Cards

Edward Bellamy's 1888 novel ‘Looking Backward’ was a huge success in its day, but it is best remembered for introducing the concept of ‘universal credit’. Citizens of his future utopia carry a card that allows them to spend 'credit’ from a central bank on goods and services without paper money changing hands.

Battle tanks

One of the best-known science fiction writers of the 20th century was H.G. Wells. In his 1903 story ‘The Land Ironclads’, published in the ‘Strand’ magazine, Wells described war machines that were uncannily similar to the modern tank. They were approximately 100 feet long and rolled on eight pairs of wheels, each of which had its own independent turning axle. A conning tower on top let the captain survey the scene. The first battle tanks were deployed on the battlefield a mere 13 years later, during the Battle of the Somme in World War I, and have been an integral part of every country's armed forces ever since.

In ‘When the Sleeper Wakes’ (1899), Wells describes automatic motion-sensing doors which saw reality 60 years later.

Earbud headphones

When Ray Bradbury published his classic ‘Fahrenheit 451’ in 1953, portable audio players were a reality. However, headphones were massive and ugly-looking. That's why his description of ‘seashells’ and ‘thimble radios’ that brought an electronic ocean of sound, of music and talk is so amazing. He exactly describes the earbud headphone and Bluetooth, which didn't come into popular use till 2000!

Video chat

The first demonstration of video conferencing came at the 1964 New York World's Fair, where AT&T wowed crowds with its 'picturephone’. The technology has come a long way since then, but the first description of video phones came in Hugo Gernsback's serial tale Ralph 124c 41+ in Modern Electrics magazine in 1911. In it, he described a device called the ‘telephot’ that let people see each other while speaking long distance.

Internet glasses

The protagonist in Charles Stross' 2005 book ‘Accelerando’ carries his data and his memories in a pair of glasses connected to the Internet. In 2013, Google came out with a wearable computer called Google Glass, fitted to spectacle frames. Wearers could access the Internet using voice commands.

All in one novel

‘Stand on Zanzibar’, a 1968 dystopian* novel by John Brunner which won a number of sci-fi book awards, makes several technological and political predictions. These include laser printers, satellite TV, electric cars and on-demand video broadcasts.

Bionic man

Martin Caidin's 1972 book ‘Cyborg’ is the story of astronaut-turned-test pilot Steve Austin who is severely injured in a plane crash. The government engages a doctor who is researching bionics or the replacement of human body parts with mechanical prosthetics that work almost as well as the original. Cochlear implants for the deaf and artificial hearts are successful modern applications of bionics.

*dystopian-pessimistic description of a society that breaks down. Its opposite is 'utopian’.

Fun things to do on National Science Day

1. Plant a garden:

It will help you learn about botany and the science of plant growth. You can begin by planting a vegetable or flower sapling or sowing some seeds.

2. Build a simple machine:

Use household items to create a lever, pulley, or other simple machine and demonstrate how they work.

3. Make slime:

Learn about the science of polymers by making slime together in class.

4. Conduct a science experiment:

Choose a simple science experiment, such as growing crystals or making a balloon rocket. Take help from your teachers to conduct it and understand the results.

5. Create a nature scavenger hunt:

Explore the natural world by creating a scavenger hunt that highlights different plants, animals, and insects.

6. Visit a science museum:

Take a trip to a science museum (if there is one in your city or town) or a planetarium to learn about a wide range of topics.

7. Conduct a star gazing session:

Discover more about astronomy by conducting a star gazing session on a clear night with your friends.

8. Experiment with magnets:

Use magnets to explore the concepts of magnetism and electric currents.

9. Make a tornado in a bottle:

Demonstrate the science of air pressure and tornadoes by making a tornado in a bottle. It's simple; look it up online and do it.

10. Create a weather station:

Explore the science of meteorology by creating a simple weather station to measure temperature, precipitation, and wind speed.

What about space dust as Earth’s sun shield?

The heat and energy from the sun is what drives life on Earth. That said, humanity is now collectively responsible for so much greenhouse gas that Earth's atmosphere traps more and more of the sun's energy. This has led to a steady increase in the planet's temperature, and global warming and climate change are causes for concern.

One suggested strategy to reverse this trend is to try and intercept a small fraction of sunlight before it reaches Earth. Scientists, for decades, have considered the possibility of using screens, objects or dust particles to block 1-2% of the sun's radiation and thus mitigate the effects of global warming.
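
To get a feel for the scale involved, even 1% of the sunlight falling on Earth amounts to an enormous power. A back-of-the-envelope sketch, assuming the standard solar constant and Earth's mean radius:

```python
import math

SOLAR_CONSTANT = 1361.0   # W per square metre at Earth's distance
EARTH_RADIUS_M = 6.371e6  # mean Earth radius in metres

# Sunlight is intercepted over Earth's cross-sectional disc, not its full surface.
total_w = SOLAR_CONSTANT * math.pi * EARTH_RADIUS_M ** 2
print(f"sunlight intercepted by Earth: {total_w:.2e} W")        # ~1.74e17 W
print(f"1% of that:                    {0.01 * total_w:.2e} W")  # ~1.74e15 W
```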

Dust to block sunlight

A study led by the University of Utah explored the idea of using dust to block a bit of sunlight. Different properties of dust particles, quantities of dust and the orbits that would work best for shading Earth were studied. The results were published on February 8, 2023 in the journal PLOS Climate.

Launching dust from Earth to a station at the Lagrange Point between Earth and the sun (L1) would prove to be most effective. The prohibitive costs and efforts involved here, however, might necessitate an alternative, which is to launch lunar dust from the moon.

These two scenarios were arrived at after studying a shield's overall effectiveness, which depends on its ability to sustain an orbit that casts a shadow on Earth. In computer simulations, a space platform was placed at the L1 Lagrange Point (point between Earth and the sun where gravitational forces are balanced) and test particles were shot along the L1 orbit.

While a precise launch was able to create an effective shield for a while, the dust would be blown off by solar winds, radiation, and gravity within the solar system. This would mean that such a system would require an endless supply of dust to blast from L1, making the cost and effort involved astronomical.

Moondust might work

The second scenario of shooting moondust towards the sun might prove to be more realistic, as the inherent properties of lunar dust allow it to work as a sun shield. After studying simulations of lunar dust scattered along different courses, the researchers arrived at an ideal trajectory aimed towards L1.

The authors were clear in stating that their study only looks at the potential impact of such a strategy and does not evaluate the logistical feasibility of these methods. If it works, this could be an option in the fight against climate change, as it would allow us to buy more time.

The science behind pronghorn’s speed

When we think of very fast land animals, the first one that comes to mind is perhaps the cheetah. Why not? It is the fastest land animal! Do you know which one is the second fastest? The pronghorn. And the theory behind how it developed such speed is fascinating. Let's find out more about the animal and its sprinting capacity.

A hoofed mammal, the pronghorn is native to North America and does not have any close relative anywhere in the world. Healthy populations of the animal exist in its range, and it is listed under 'Least Concern' in the International Union for Conservation of Nature Red List of Threatened Species. Though it looks a lot like an antelope, the herbivore belongs to its own taxonomic family, called Antilocapridae. Pronghorns get their name from the forward-facing projection - the prong - on their horns. Interestingly, their ‘horns’ exhibit characteristics of both a horn and an antler. The sheath of the horn is made of keratin, the substance horns are made of. But these horns are forked and shed every year, just like antlers! While much can be written about what else is unusual about the pronghorn, its most unique characteristic is its speed.

Running at more than 80 kmph, the pronghorn is the fastest land mammal in its entire natural range, from Canada through the U.S. to Mexico. In one aspect, it even does better than the African cheetah: it can maintain a fast speed for a longer period of time than that carnivore. But the pronghorn has no natural predator to match this speed, and so scientists had been stumped by the need for it. This is where the science of evolution comes in.

According to a study published recently, during the Ice Age, North America was home to several mammals that no longer exist today. Some of them are well-known today: woolly mammoths, giant sloths, and saber-toothed cats. There were lesser-known ones too, such as ‘Miracinonyx’, a cheetah-like cat. The skeletal remains of ‘Miracinonyx’ show that "this now-extinct cat shares the morphological characteristics that indicate high speed capabilities with its African counterpart, the cheetah (Acinonyx)". It is a close relative of the puma and the African cheetah. Both the puma and ‘Miracinonyx’ are native to North America. The results provide support to "the hypothesis that ‘Miracinonyx’ preyed upon Antilocapra, but not exclusively". Though it is not seen as conclusive evidence and more study is required, scientists say this "may provide an explanation for why pronghorns are so fast. Maybe they were chased by cheetahs after all".


Unsung pioneers in the field of science

These are tales not just of perseverance and love for science, but also of discrimination and unfair treatment. Despite making groundbreaking discoveries, their names remain largely unknown, simply because they are women. Let's celebrate these women scientists and their contribution to the world....

ESTHER MIRIAM ZIMMER LEDERBERG (1922-2006)

Esther Miriam Zimmer Lederberg was an American microbiologist who discovered the bacterial virus lambda phage and the bacterial fertility factor F (F plasmid). Like many women scientists of her time, Esther Lederberg was not given credit for her scientific contributions because of her gender. While her husband, her mentor, and another research partner won the 1958 Nobel Prize in Physiology or Medicine for discovering how genetic material is transferred between bacteria, Esther wasn't even mentioned in the citation, even though her work significantly contributed to the discovery.

Esther Miriam Lederberg was born in the Bronx, New York, into a humble family. While studying for her master's in genetics at Stanford University, Esther struggled to make ends meet. As she recollected in her interviews, she had sometimes eaten frogs' legs left over from laboratory dissections.

Esther met her future husband Joshua Lederberg at Stanford. They moved to the University of Wisconsin, where they would begin years of collaboration. Throughout the 1950s, they published papers together and apart, as both made discoveries about bacteria and genetics of bacteria.

Esther Lederberg's contributions to the field of microbiology were enormous. In 1950, she discovered the lambda phage, a type of bacterial virus, which replicates inside the DNA of bacteria. She developed an important technique known as replica plating, still used in microbiology labs all over the world. Along with her husband and other team members, she discovered the bacterial fertility factor.

CECILIA PAYNE-GAPOSCHKIN (1900-1979)

Cecilia Payne-Gaposchkin was a British-born American astronomer who was the first to propose that stars are made of hydrogen and helium.

Cecilia Payne was born in 1900 in Buckinghamshire, England. In 1919, she got a scholarship to study at Newnham College, Cambridge University, where she initially studied botany, physics, and chemistry. Inspired by Arthur Eddington, an English astronomer, she dropped out to study astronomy.

Studying astronomy at Cambridge in the 1920s was a lonely prospect for a woman. Cecilia sat alone, as she was not allowed to occupy the same rows of seats as her male classmates. The ordeal did not end there. Because of her gender, Cecilia was not awarded a degree, despite fulfilling the requirements in 1923. (Cambridge did not grant degrees to women until 1948.)

Finding no future for a woman scientist in England, she headed to the United States, where she received a fellowship to study at Harvard Observatory. In her PhD thesis, published as Stellar Atmospheres in 1925, Cecilia showed for the first time how to read the surface temperature of any star from its spectrum. She also proposed that stars are composed mostly of hydrogen and helium. In 1925, she became the first person to earn a PhD in astronomy there, though she received the doctorate from Radcliffe College, since Harvard did not grant doctoral degrees to women then. She also became the first female professor in her faculty at Harvard in 1956.

Cecilia contributed widely to the physical understanding of the stars and was honoured with awards later in her lifetime.

CHIEN-SHIUNG WU (1912-1997)

Chien-Shiung Wu was a Chinese-American physicist known for the Wu Experiment, which she carried out to disprove a quantum mechanics concept called the Law of Parity Conservation. But the Nobel Committee failed to recognise her contribution when theoretical physicists Tsung-Dao Lee and Chen Ning Yang, who had worked on the project, were awarded the Prize in 1957.

Chien-Shiung Wu was born in a small town in Jiangsu province, China, in 1912. She studied physics at a university in Shanghai and went on to complete her PhD at the University of California, Berkeley, in 1940.

In 1944, during WWII, she joined the Manhattan Project at Columbia University, focussing on radiation detectors. After the war, Wu began investigating beta decay and made the first confirmation of Enrico Fermi's theory of beta decay. Her book "Beta Decay," published in 1965, is still a standard reference for nuclear physicists.

In 1956, theoretical physicists Tsung Dao Lee and Chen Ning Yang approached Wu to devise an experiment to disprove the Law of Parity Conservation, according to which two physical systems, such as two atoms, are mirror images that behave in identical ways. Using cobalt-60, a radioactive form of the cobalt metal, Wu's experiment successfully disproved the law.

In 1958, her research helped answer important biological questions about blood and sickle cell anaemia. She is fondly remembered as the "First Lady of Physics", the "Chinese Madame Curie" and the "Queen of Nuclear Research”.

LISE MEITNER (1878-1968)

Lise Meitner was an Austrian-Swedish physicist, who was part of a team that discovered nuclear fission. But she was overlooked for the Nobel Prize and instead her research partner Otto Hahn was awarded for the discovery.

Lise Meitner was born on November 7, 1878, in Vienna. Austria had restrictions on women's education, but Meitner managed to receive private tutoring in physics. She went on to receive her doctorate at the University of Vienna. Meitner later worked with Otto Hahn for around 30 years, during which time they discovered several isotopes, including protactinium-231, and studied nuclear isomerism and beta decay. In the 1930s, the duo was joined by Fritz Strassmann, and the team investigated the products of neutron bombardment of uranium.

In 1938, as Germany annexed Austria, Meitner, a Jew, fled to Sweden. She suggested that Hahn and Strassmann perform further tests on a uranium product, which later turned out to be barium. Meitner and her nephew Otto Frisch explained the physical characteristics of this reaction and proposed the term 'fission' to refer to the process when an atom separates and creates energy. Meitner was offered a chance to work on the Manhattan Project to develop an atomic bomb. However, she turned down the offer.

JANAKI AMMAL (1897-1984)

Janaki Ammal was an Indian botanist who has a flower, the pink-white Magnolia kobus 'Janaki Ammal', named after her.

She undertook an extraordinary journey from a small town in Kerala to the John Innes Horticultural Institute in London. She was born in Thalassery, Kerala, in 1897.

Her family encouraged her to engage in intellectual pursuit from a very young age. She graduated in Botany in Madras in 1921 and went to Michigan as the first Oriental Barbour Fellow where she obtained her DSc in 1931. She did face gender and caste discrimination in India, but found recognition for her work outside the country.

After a stint at the John Innes Horticultural Institute in London, she was invited to work at the Royal Horticultural Society at Wisley, close to the famous Kew Gardens. In 1945, she co-authored The Chromosome Atlas of Cultivated Plants with biologist C.D. Darlington. Her major contribution came about at the Sugarcane Breeding Station at Coimbatore, Tamil Nadu. Janaki's work helped in the discovery of hybrid varieties of high-yielding sugarcane. She also produced many hybrid eggplants (brinjal). She was awarded the Padma Shri in 1977.

GERTY CORI (1896-1957)

Gerty Cori was an Austrian-American biochemist, known for her discovery of how the human body stores and utilises energy. In 1947, she became the first woman to be awarded the Nobel Prize in Physiology or Medicine and the third woman to win a Nobel.

Gerty Theresa Cori was born in Prague in 1896. She received the Doctorate in Medicine from the German University of Prague in 1920 and got married to Carl Cori the same year.

Immigrating to the United States in 1922, the husband-wife duo joined the staff of the Institute for the Study of Malignant Disease, Buffalo, N.Y. Working together on glucose metabolism, in 1929 they discovered the 'Cori Cycle', the pathway of conversion of glycogen (the stored form of sugar) to glucose (the usable form of sugar). In 1936, they discovered the enzyme phosphorylase, which breaks down muscle glycogen, and identified glucose 1-phosphate (the Cori ester) as the first intermediate in the reaction.

The Coris were consistently interested in the mechanism of action of hormones, and they carried out several studies on the pituitary gland. In 1947, Gerty Cori, Carl Cori and Argentine physiologist Bernardo Houssay received the Nobel Prize for their discovery of the course of the catalytic conversion of glycogen.

Although the Coris were equals in the lab, they were not treated as equals. Gerty faced gender discrimination throughout her career. Few institutions hired Gerty despite her accomplishments, and those that did hire her did not give her equal status or pay.

What is environmental science?

Environmental science integrates several disciplines, including ecology, biology, zoology, oceanography, atmospheric science, soil science, geology, and chemistry. It throws light on how natural and human-made processes interact with one another to impact our planet. Here's a peek into a few words related to this science.

Anthropocentrism

The word means centred on humans. This belief places humans and their existence at the centre of the world to mean that we are more important than everything else. However, many have argued that this is ethically wrong and at the root of the ecological crisis staring at us today. For one, by placing ourselves above other species, we view them as resources to be exploited. And that would explain the unsustainable pace of human growth and development at the cost of other species, and, eventually, perhaps the planet itself.

Artificial selection

In nature, each living creature is different. Each finds a way to survive, and passes on the traits for survival to the next generation. This is called natural selection. In artificial selection though, humans identify desirable traits in plants and animals, and take steps to improve those traits in future generations. Also known as selective breeding, the process has pros and cons. For instance, it can result in a new disease-resistant crop with high yield but can lead to loss of diversity in the long-run.

Carbon sequestration

It refers to the long-term storage of carbon in plants, soils, geologic formations, and the ocean. This stored carbon has the potential to get released into the atmosphere as carbon dioxide, both naturally (decomposition of organic matter) and through human activities. The amount of carbon dioxide getting released into the atmosphere has been increasing, especially through human activities such as the burning of fossil fuels.

Bioaccumulation

This refers to the process in which external components, such as toxic chemicals or metals, gradually accumulate within an organism, such as a fish. Since any organism is part of a food chain, it affects other organisms too. For instance, when chemicals end up in a waterbody through wind or rain, they sink to the bottom. Tiny creatures in the waterbody consume these when they dig through the sediment. These creatures are consumed by larger creatures, and finally, large fish are likely to be eaten by humans. And throughout the process, these chemicals can get transferred from one organism to another, harming them.

E-waste

The shortened version of electronic waste, e-waste is non-biodegradable and includes everything from televisions and computers to mobile phones and home appliances and their components. These discarded products can contain toxic substances such as lead and mercury and also metals such as gold, silver, copper, platinum, aluminium, etc. When not disposed of properly, the toxic substances in e-waste accumulate in the environment, in the soil, air, water, and living things.

Commingled recycling

In this process, all kinds of used materials - both biodegradable and non-biodegradable - such as plastics, glass, metals, etc. are gathered in a collection truck and later sorted at a recycling unit. This process has benefits and drawbacks. The absence of segregation eliminates the need for separate trucks for different materials, cutting down on fuel, resultant emission, etc. But, it could mean contamination of materials and indifference on the part of consumers about what they use.

Rainwater harvesting

It refers to the conscious effort of collecting and storing rainwater rather than allowing it to run off. Rainwater, from rooftops, roads, open areas, etc., can either be filtered and stored or allowed into the ground. Rain is one of the few sources of clean water for us, and given the water crisis looming the world over, it is crucial to find ways to conserve this precious natural resource. Rainwater harvesting also lowers our demand on freshwater resources, slows erosion in dry environments, reduces flooding in low-lying areas, etc.
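
As a rough sense of the quantities involved, harvesting potential is commonly estimated as catchment area times rainfall times a runoff coefficient. A minimal sketch, with every figure an assumption for illustration:

```python
# Rainwater-harvesting estimate: volume = area x rainfall x runoff fraction.
# All numbers below are illustrative assumptions, not measured values.
roof_area_m2 = 100.0   # catchment area of a modest rooftop
annual_rain_m = 0.8    # 800 mm of rain in a year
runoff_coeff = 0.85    # fraction a hard roof actually delivers

litres = roof_area_m2 * annual_rain_m * runoff_coeff * 1000  # m^3 to litres
print(f"about {litres:,.0f} litres per year")  # about 68,000 litres
```

Even a modest rooftop, on these assumptions, can capture tens of thousands of litres in a year.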

Brownfield

A brownfield is a parcel of land "that was previously used for industrial purposes and which is contaminated by low concentrations of hazardous chemicals". Most such lands are seen as requiring environmental justice because the toxins there can affect air and water quality, and, in turn, human health. Also, they have the potential to become a dumping ground for hazardous waste. "This creates a situation that deters economic development, decreases property values, and harms the aesthetic value of a community."

Waste hierarchy

This is a simple evaluation tool used to rank different waste management options, from the best to the worst for our surroundings. The order in the evaluation is usually as follows: prevention, re-use, recycling, recovery, disposal. The most preferred option is to prevent waste, and the least preferred choice is disposal in landfill sites. Having a proper idea of the waste generated and how to handle it, whether in a small household or a large company, will go a long way in helping us be efficient with our resources and make planet-friendly choices, leading to better environmental results.

Green purchasing

Also known as sustainable or environmentally responsible purchasing, green purchasing refers to acquiring products and services with no or minimal negative effect on human health and the environment. Such a purchase takes into consideration everything from raw material sourcing to packaging and delivery. It conserves resources, cuts costs, supports local people, and encourages a greener lifestyle. In short, it is kinder to the planet and its inhabitants in every possible way.

Intercropping

You may have seen a single crop being raised on a large parcel of agricultural land. This is called monoculture. When two or more types of crops are raised simultaneously in a field, it is called intercropping. It helps in the effective use of land, offers better profit, can prevent soil erosion, improve the ecosystem, etc. It also has a few disadvantages: it can be labour-intensive and time-consuming, and the crops can be affected by disease. But, with proper planning, intercropping can prove to be beneficial.

What is the History of science fiction?

Science fiction (sci-fi) has taken us on incredible journeys through time and space, allowing us to explore the depths of our imagination and the limits of the universe.

The term science fiction was first used by William Wilson in 1851 in a book of poetry titled ‘A Little Earnest Book Upon a Great Old Subject’. However, the term's modern usage is credited to Hugo Gernsback, who founded the first sci-fi magazine, ‘Amazing Stories’, in 1926. The American editor used this term to describe stories that combined scientific speculation with adventure and futuristic concepts. The term gained widespread use in the 1930s and 1940s and has since become a popular genre of literature and entertainment.

Generally, the beginning of the literary genre of sci-fi is traced to 19th-century England and the Industrial Revolution, a time when rapid technological change inspired and popularised stories and narratives that were typically set in the future and explored themes such as time travel and interplanetary voyages. These stories dealt with the limits of human knowledge and the unintended consequences of our technological prowess. However, literary scholars claim that the earliest literary work that could fit into the genre of sci-fi dates back to the second century AD.

A True Story: The earliest surviving work of sci-fi

Written by the Syrian satirist Lucian, ‘A True Story’ (also known as ‘True History’) is a two-book parodic adventure story and a travelogue about outer space exploration, extraterrestrial lifeforms, and interplanetary warfare. It is extraordinary that the author produced a story that so accurately incorporated multiple hallmarks of what we generally associate with modern sci-fi, centuries before the invention of instruments such as the telescope.

Lucian was from Samosata (in present-day Turkey), and his first language is believed to have been Aramaic, but he wrote in Greek. He might not be a household name today, but literary scholars call him one of antiquity's most brilliant satirists and inventive wits. He is famous throughout European history for his absurd yet fantastical works and for his overt dispelling of the ridiculous and illogical social conventions and superstitions of his time. His works have been an inspiration for literary classics such as Jonathan Swift's ‘Gulliver's Travels’ and Thomas More's ‘Utopia’.

The basic classification of sci-fi

Sci-fi can be broadly classified into two categories: soft sci-fi and hard sci-fi.

Soft sci-fi, also known as social sci-fi, emphasises the social and humanistic aspects of science and technology, often exploring the effects of scientific advances on society and individuals. Examples of soft sci-fi include Margaret Atwood's ‘The Handmaid's Tale’, which explores the social and political consequences of a future where women's rights have been severely restricted. Hard sci-fi, also known as scientific or realistic sci-fi, places a greater emphasis on scientific accuracy and realism, often using established scientific principles and theories to explore the possibilities of the future. An example of this is Andy Weir's ‘The Martian’, which narrates the story of an astronaut stranded on Mars and his efforts to survive using his scientific knowledge and problem-solving skills.
