What’s space weather?

Ever wondered about the weather in space? Before that, let's think about what dictates the weather on our planet. The Sun, which is our source of energy, plays a pivotal role in governing the weather on Earth. And it creates the weather in space too! The activity on the Sun's surface can lead to a type of weather in space, and this is called space weather.

Space weather is dependent on activities and changes on the Sun's surface such as coronal mass ejections (eruptions of plasma and magnetic field structures) and solar flares (sudden bursts of radiation). We are shielded from these bursts of radiation and energy by Earth's magnetosphere, ionosphere, and atmosphere.

Impact of space weather

The Sun is some 93 million miles away from our Earth. Yet, space weather can affect us and the solar system. The electric power distribution grids, global satellite communication, and navigation systems are all susceptible to conditions in space that are impacted by the Sun.

Space weather can damage satellites, affect astronauts and even cause blackouts on Earth. Such incidents are rare but they have happened before.

CME, solar flare

When a CME reaches Earth, it leads to a geomagnetic storm. This can disrupt services, damage power grids and cause blackouts.

For instance, back in 1989, a powerful geomagnetic storm led to a major power blackout in Canada. As a result, around 6 million people were left in the dark for about 9 hours.

Solar flares can also result in disruption of services. The strongest and most intense geomagnetic storm ever recorded occurred in 1859. This was caused by a solar flare. Called the "Carrington Event" and named after English solar astronomer Richard Carrington, who observed the activity through his telescope, the geomagnetic storm caused damage, disrupting the telegraph system on Earth. It also led to the aurorae, a result of geomagnetic activity, being visible in regions such as Cuba and Hawaii.

While telegraph networks are a thing of the past, our communication systems and technologies can still be impacted by space weather. Even though most of the charged particles released by the Sun are shielded away by Earth's magnetic field, space weather can sometimes still affect us. We need to track the activities on the Sun's surface and understand them to protect people and systems.

Any warning regarding bad space weather can help scientists send alerts and lessen the damage caused by it. Space agencies have observatories monitoring the Sun and detecting solar storms. These help in mitigating the effect of bad space weather.

Picture Credit : Google

What is AI fiction?

AI fiction is a constantly evolving genre that gives us a peek into the potential upsides and downsides of intelligent machines, whether it is books written by humans with robots and AI as central characters or stories composed entirely by machine learning algorithms. AI fiction never fails to captivate readers and stimulate discussions about what is in store for technology in the future. Artificial intelligence has long been a popular topic in science fiction. From Isaac Asimov's humanoid robots in I, Robot to the sentient machines of The Matrix, AI has been a constant source of fascination and speculation. As AI technology progresses, its treatment in fiction has become increasingly nuanced, with authors examining the potential upsides and downsides of intelligent machines.

One recent development in AI fiction is the emergence of novels written entirely by artificial intelligence. The first AI-generated novel, 1 the Road, came out in 2018, based on data gathered by a sensor-equipped car on a road trip from New York to New Orleans. It was preceded by an experimental 2016 short story co-written by AI and Japanese researchers, translated as The Day a Computer Writes a Novel, which nearly won a literary prize. In the same year, Sunspring, an AI-authored screenplay, was placed in the top 10 at a London sci-fi film festival. In the first year of the pandemic, we got Pharmako-AI, a genre-bending philosophical book co-written by the AI language model GPT-3 and K. Allado-McDowell, founder of Google's Artists and Machine Intelligence programme. These experimental works of fiction represent an intriguing new avenue for AI fiction. With machine learning algorithms capable of generating coherent narratives and dialogue, it is possible that we may soon see a flood of novels, stories, and even movies written entirely by AI.

Science fiction (sci-fi) & AI

For generations, sci-fi has foreseen the pervasive influence of AI in our daily life. Its representation in mainstream media has played a pivotal role in shaping public opinion towards this technological advancement. Films such as The Terminator and Ex Machina have helped to shape the cultural narrative around AI, with many people viewing intelligent machines as potential threats to human safety and autonomy. At the same time, this type of speculative fiction has also explored the more positive aspects of AI, from the helpful robots of Wall-E to the benevolent supercomputers of 2001: A Space Odyssey. As AI technology continues to evolve, it's likely that all the good, bad, and ugly visions of intelligent machines will continue to be explored in fiction.

Picture Credit : Google

When did Voyager 2 achieve its closest approach to Jupiter?

On July 9, 1979, Voyager 2 made its closest approach to the largest planet in our solar system. Now in interstellar space, Voyager 2 altered some of our ideas about the Jovian system.

The Voyager probes are humanity's longest-running spacecraft, as they have been flying since 1977. Both Voyager 2 and Voyager 1 are now in interstellar space, and though their power sources are gradually fading, they are still operational as of now.

It might seem counter-intuitive, but Voyager 2 was the first to be launched, on August 20, 1977, about two weeks before the launch of Voyager 1. Both spacecraft were equipped with an extensive array of instruments to gather data about the outer planets and their systems, in addition to carrying a slow-scan colour TV camera capable of taking images of the planets and their moons.

Based on Mariners

The design of the Voyagers was based on the Mariners and they were even known as Mariner 11 and Mariner 12 until March 7, 1977. It was NASA administrator James Fletcher who announced that the spacecraft would be renamed Voyager. The Voyagers are powered by three plutonium dioxide radioisotope thermoelectric generators (RTGs) mounted at the end of a boom (a long metal beam extending from the spacecraft and serving as a structural subsystem).

Even though Voyager 1 was launched a little later, it reached Jupiter first in 1979 as it took a trajectory that put it on a faster path. Voyager 2 began transmitting images of Jupiter from April 24, 1979 for time-lapse movies of atmospheric circulation. For the next three-and-a-half months, until August 5 of that year, the probe continued to click images and collect data. A total of 17,000 images of Jupiter and its system were sent back to the Earth.

The spectacular images of the Jovian system included those of its moons Callisto, Europa, and Ganymede. While Voyager 2 flew by Callisto and Europa at about half the distance between the Earth and its moon, it made an even closer approach to Ganymede.

Ocean worlds

The combined cameras of the two Voyager probes, in fact, covered at least four-fifths of the surfaces of Ganymede and Callisto. This enabled the mapping of these moons to a resolution of about 5 km.

Voyager 2's work, along with observations made before and after, also helped scientists reveal that each of these moons was indeed an ocean world.

On July 9, 1979, the probe made its closest approach to Jupiter. Voyager 2 came within 6,45,000 km of the planet, less than twice the distance between Earth and its moon. It detected many significant atmospheric changes, including a drift in the Great Red Spot in addition to changes in its shape and colours.
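
For readers who like to verify that comparison, here is a quick Python check (the mean Earth-Moon distance of about 3,84,400 km is an assumed standard value, not a figure from the article):

    # Rough check: Voyager 2's closest approach to Jupiter versus
    # the mean Earth-Moon distance (assumed ~384,400 km).
    closest_approach_km = 645_000   # from the article
    earth_moon_km = 384_400         # assumed mean Earth-Moon distance

    ratio = closest_approach_km / earth_moon_km
    print(f"Closest approach = {ratio:.2f} times the Earth-Moon distance")
    # -> about 1.68, i.e. less than twice the Earth-Moon distance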

Voyager 2 also relayed photographs of other moons like Io and Amalthea. It even discovered a Jovian satellite, later called Adrastea, and revealed a third component of the planet's rings. The thin rings surrounding Jupiter, as had been seen by Voyager 1 as well, were confirmed by images looking back at the giant planet as the spacecraft departed for Saturn. As the probe used the gravity assist technique, Jupiter served as a springboard for Voyager 2 to get to Saturn.

Studies all four giant planets

Four decades after its closest approach to Jupiter, Voyager 2 successfully fired up its trajectory correction manoeuvre thrusters on July 8, 2019. These thrusters, which had last been used in November 1989 during Voyager 2's encounter with Neptune, will be used to control the pointing of the spacecraft in interstellar space.

In those 40 years, Voyager 2 had achieved flybys of Saturn (1981), Uranus (1986), and Neptune (1989), thereby becoming the only spacecraft to study all four giant planets of the solar system at close range. Having entered interstellar space on December 10, 2018, Voyager 2 is now over 132 AU (astronomical units; one AU is the distance between Earth and the sun) away from Earth, still relaying back data from unexplored regions deep in space.
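
To get a feel for how far 132 AU is, a small conversion helps (the value of 1 AU as roughly 149.6 million km and the speed of light used below are standard figures assumed here, not quoted in the article):

    # Convert Voyager 2's distance from AU to km and to signal travel time.
    AU_KM = 149.6e6              # assumed: 1 AU in km
    LIGHT_SPEED_KM_S = 299_792   # km per second

    distance_km = 132 * AU_KM
    hours_one_way = distance_km / LIGHT_SPEED_KM_S / 3600

    print(f"132 AU is about {distance_km:.2e} km")                     # ~1.97e10 km
    print(f"A radio signal takes ~{hours_one_way:.0f} hours one way")  # ~18 hours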

Picture Credit : Google

Membrane mirrors for large space-based telescopes?

Researchers create lightweight, flexible mirrors that can be rolled up during launch and reshaped precisely after deployment.

Mirrors are a significant part of telescopes. When it comes to space telescopes, which have complicated procedures for launching and deploying, the primary mirrors add considerable heft, contributing to packaging difficulties.

Researchers have now come up with a novel way of producing and shaping large, high-quality mirrors. These mirrors are not only thinner than the primary mirrors usually employed in space-based telescopes, but are also flexible enough to be rolled up and stored inside a launch vehicle.

Parabolic membrane mirror

The successful fabrication of such parabolic membrane mirror prototypes up to 30 cm in diameter has been reported in the Optica Publishing Group journal Applied Optics in April. Researchers not only believe that these mirrors could be scaled up to the sizes required in future space telescopes, but have also developed a heat-based method to correct imperfections that occur during the unfolding process.

Using a chemical vapour deposition process that is commonly used to apply coatings (like the ones that make electronics water-resistant), a parabolic membrane mirror was created for the first time. The mirror was built with the optical qualities required for use in telescopes. A rotating container holding a small amount of liquid was placed inside a vacuum chamber in order to create the exact shape necessary for a telescope mirror. The liquid forms a perfect parabolic shape onto which a polymer can grow during chemical vapour deposition, forming the mirror base. A reflective metal layer is applied on top once the polymer is thick enough, and the liquid is then washed away.
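
Why does a spinning liquid form a parabola? It is a standard result in physics: the surface of a liquid rotating at a constant rate rises with the square of the distance from the spin axis. The small sketch below illustrates this (the 30 rpm spin rate is an assumed example, not a parameter from the study):

    import math

    # Surface of a liquid spinning at angular speed omega (rad/s):
    #   z(r) = omega**2 * r**2 / (2 * g)   -> a paraboloid
    # with focal length f = g / (2 * omega**2).
    g = 9.81                        # m/s^2
    rpm = 30                        # assumed spin rate, for illustration only
    omega = 2 * math.pi * rpm / 60

    focal_length = g / (2 * omega**2)
    edge_rise = omega**2 * 0.15**2 / (2 * g)   # rim height of a 30-cm-wide surface

    print(f"focal length ~ {focal_length:.2f} m")    # ~0.50 m at 30 rpm
    print(f"rim rises ~ {edge_rise * 1000:.1f} mm")  # ~11 mm above the centre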

Thermal technique

The researchers tested their technique by building a 30-cm-diameter membrane mirror in a vacuum deposition chamber. While the thin and lightweight mirror thus constructed can be folded during the trip to space, it would be nearly impossible to get it into perfect parabolic shape after unpacking. The researchers were able to show that their thermal radiative adaptive shaping method worked well to reshape the membrane mirror.

Future research is aimed at applying more sophisticated adaptive control to find out not only how well the final surface can be shaped, but also how much distortion can be tolerated initially. Additionally, there are plans to create a metre-sized deposition chamber that would enable studying the surface structure along with the packaging and unfolding processes for a large-scale primary mirror.

Picture Credit : Google 

What is the mission of Helios 2?

On April 16-17, 1976, Helios-B made its closest approach to the sun, thereby setting a record for the closest flyby of the sun.

April is here and with it comes searing heat as the sun beats down heavily on most parts of India. You must be aware, however, that the sun, with its entire mass of glowing, boiling heat, is the source of all life on Earth. Our sun, in fact, influences how every object in the solar system is shaped and behaves.

Studying solar processes

This means that learning more about the sun and understanding it better has always been a priority. Apart from studying it from here on Earth, which is what we did for most of our history, we have also started sending spacecraft to explore its secrets. The Helios mission was one such mission, sending out a pair of probes into heliocentric orbit (an orbit around the sun) to study solar processes.

Following the success of the Pioneer probes, which formed a ring of solar weather stations along Earth's orbit to measure solar wind and predict solar storms, the Helios mission was planned. While the Pioneer probes orbited within 0.8 AU (astronomical unit, mean distance between Earth and sun) of the sun, the Helios probes shattered that record within years.

A joint German-American deep-space mission to study solar-terrestrial relationships and many solar processes, it was NASA's largest bilateral project up until then. The Federal Republic of Germany (West Germany) paid around $180 million of the total $260 million cost and provided the spacecraft, while NASA provided the launch vehicles.

Named Helios-A and Helios-B and equipped with state-of-the-art thermal control systems, the pair of probes were renamed Helios 1 and Helios 2 after their launches. Launched late in 1974, Helios 1 passed within 47 million km (0.31 AU) of the sun at a speed of 2,38,000 km per hour on March 15, 1975. While this was clearly the closest any human-made object had ever been to the sun, the record was broken again in a little over a year by its twin probe.

Even though Helios-B was very similar to Helios-A, the second spacecraft had improvements in terms of system design in order to help it survive longer in the harsh conditions it was heading for. Launched early in 1976, Helios 2 was also put into heliocentric orbit like its twin.

Achieves perihelion

Helios 2, however, flew 3 million km closer to the sun when compared to Helios 1. On April 16-17, 1976, Helios 2 achieved its perihelion or closest approach to the sun at a distance of 0.29 AU or 43.432 million km. At that distance, Helios 2 took the record for the closest flyby of the sun, a record that it didn't relinquish for over four decades. It also set a new speed record for a spacecraft in the process, reaching a maximum velocity of 68.6 km/s (2,46,960 km/h).
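
Both of those figures can be cross-checked with simple unit conversions, taking 1 AU as about 149.6 million km (a standard value assumed here):

    # Cross-check Helios 2's perihelion distance and record speed.
    AU_KM = 149.6e6               # assumed: 1 AU in km

    perihelion_million_km = 0.29 * AU_KM / 1e6
    speed_km_h = 68.6 * 3600      # 68.6 km/s expressed per hour

    print(f"0.29 AU is about {perihelion_million_km:.1f} million km")  # ~43.4
    print(f"68.6 km/s is {speed_km_h:,.0f} km/h")                      # 246,960 km/h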

Helios 2's position relative to the sun meant that it was exposed to 10% more heat, or about 20 degrees Celsius more, when compared to Helios 1. In addition to providing information on solar plasma, solar wind, cosmic rays, and cosmic dust, Helios 2 also performed magnetic field and electrical field experiments.

Apart from studying these parameters about the sun and its environment, both Helios 1 and Helios 2 also had the opportunity to observe the dust and ion tails of at least three comets. While data from Helios 1 was received until late 1982, Helios 2's downlink transmitter failed on March 3, 1980. No further usable data was received from Helios 2 and ground controllers shut down the spacecraft on January 7, 1981.

This was done to avoid any possible radio interference with other spacecraft in the future as both probes continue to orbit the sun.

Parker Solar Probe gets closer and faster

After enjoying its position for over 40 years, Helios 2's records were finally broken by NASA's Parker Solar Probe. Launched on August 12, 2018 to study the sun in unprecedented detail, the probe became the first to "touch" the sun during its eighth flyby on April 28, 2021 when it swooped inside the sun's outer atmosphere. Already holding both the distance and speed records, it is expected to further break them both during its 24 orbits of the sun over its seven-year lifespan. When it completed its 15th closest approach to the sun a month ago on March 17, it came within 8.5 million km of the sun's surface.

Picture Credit : Google

What are 3D printed robotic hands?

 

Researchers have succeeded in printing robotic hands with bones, ligaments and tendons for the first time. Using a new laser scanning technique, the new technology enables the use of different polymers.

Additive manufacturing or 3D printing is the construction of a 3D object from a 3D digital model. The technology behind this has been advancing at a great pace and the number of materials that can be used has also expanded considerably. Until now, 3D printing was limited to fast-curing plastics. The use of slow-curing plastics has now been made possible thanks to a technology developed by researchers at ETH Zurich and a U.S. start-up, an MIT spin-off called Inkbit. This has resulted in successfully 3D printing robotic hands with bones, ligaments and tendons. The researchers from Switzerland and the U.S. have jointly published the technology and its applications in the journal Nature.

Return to original state

 In addition to their elastic properties that enable the creation of delicate structures and parts with cavities as required, the slow-curing thiolene polymers also return to their original state much faster after bending, making them ideal for the likes of ligaments in robotic hands.

The stiffness of thiolenes can also be fine-tuned as per our requirements to create soft robots. These soft robots will not only be better-suited to work with humans, but will also be more adept at handling delicate and fragile goods.

Scanning, not scraping

In 3D printers, objects are typically produced layer by layer. This means that a nozzle deposits a given material in viscous form and a UV lamp then cures each layer immediately. This method requires a device that scrapes off surface irregularities after each curing step.

While this works for fast-curing plastics, it would fail with slow-curing polymers like thiolenes and epoxies as they would merely gum up the scraper. The researchers involved therefore developed a 3D printing technology that took into account the unevenness when printing the next layer, rather than smoothing out uneven layers. They achieved this using a 3D laser scanner that checked each printed layer for irregularities immediately.
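
The gist of that scan-and-compensate idea can be captured in a short Python sketch. This is only an illustrative simulation; the function names, the toy noise model, and the stand-in printer hardware below are assumptions for demonstration, not the researchers' actual system:

    import numpy as np

    def print_part(layer_maps, deposit, scan):
        """Adaptive deposition: instead of scraping each cured layer flat,
        scan it and fold the measured height error into the next layer's plan.
        layer_maps: list of 2D target thickness maps; deposit/scan: device hooks."""
        nominal = np.zeros_like(layer_maps[0])
        error = np.zeros_like(layer_maps[0])
        for target in layer_maps:
            plan = target - error        # deposit less where the part runs high
            deposit(plan)                # jet and UV-cure the slow-curing resin
            nominal += target
            error = scan() - nominal     # 3D laser scan of the fresh surface

    # Toy stand-ins for the printer hardware, just to make the sketch runnable.
    rng = np.random.default_rng(0)
    part_height = np.zeros((4, 4))

    def fake_deposit(plan):
        global part_height
        part_height = part_height + plan + rng.normal(0, 0.01, plan.shape)  # uneven layers

    def fake_scan():
        return part_height.copy()

    print_part([np.full((4, 4), 0.1)] * 50, fake_deposit, fake_scan)
    print("final deviation (mm):", np.abs(part_height - 5.0).max())  # stays small

The key point is that each layer's plan subtracts the error measured on the previous layer, so unevenness never accumulates even though nothing is scraped off.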

This advancement in 3D printing technology would provide much-needed advantages as the resulting objects not only have better elastic properties, but are also more robust and durable. Combining soft, elastic, and rigid materials would also become much simpler with this technology.

Picture Credit : Google 

What are the oldest surviving photographs of the moon?

In March 1840, English-born American John William Draper clicked what are now the oldest surviving photographs of the moon. Using the daguerreotype process that had just been invented, Draper captured photographs that showed lunar features.

The smartphones in our hands these days are so powerful and equipped with great cameras that all we need to do to click a photograph of the moon is to wait for the moon to make its appearance and then take a photograph. It wasn't always this easy though. In fact, the oldest surviving photographs of the moon are less than 200 years old. The credit for taking those photographs goes to English-born American scientist, philosopher, physician, chemist, historian and photographer John William Draper.

 Born in England in 1811, Draper went to the U.S. in 1832. After receiving a medical degree from the University of Pennsylvania, he moved to New York University in 1837 and was one of the founders of NYU’s School of Medicine in 1840. He not only taught there for most of his life, but also served as the president of the med school for 23 years.

Learns Daguerre's process

 His interest in medicine, however, didn't keep him away from dabbling with chemistry too. The chemistry of light-sensitive materials fascinated Draper and he learned about the daguerreotype process of photography after the news arrived in the U.S. from Europe. French artist and photographer Louis Daguerre had invented the process only in 1839.

Draper attempted to improve Daguerre's photographic process and succeeded in finding ways to increase plate sensitivity and reduce exposure times. These advances not only allowed him to produce some of the best portrait photographs of the time, but also let him peer into the skies to try and capture the moon.

He met with failure in his first attempts over the winter of 1839-40. He tried to make daguerreotypes of the moon from his rooftop observatory at NYU, but like Daguerre before him, was unsuccessful. The images produced were either underexposed, or were mere blobs of light in a murky background at best.

Birth of astrophotography

By springtime in March 1840, however, Draper was successful, thereby becoming the first person ever to produce photographs of an astronomical object. He was confident enough to announce the birth of astrophotography to the New York Lyceum of Natural History, which later became the New York Academy of Sciences. On March 23, 1840, he informed them that he had created a focussed image of the moon.

The exact date when he first achieved it isn't very clear. While the photograph on loan to the Metropolitan Museum of Art (which cannot be shown here due to rights restrictions) is believed to have been clicked on March 16 based on his laboratory notebook, the one pictured here was by most accounts taken on the night of March 26, three days after he had announced his success. The fact that many of Draper's original daguerreotypes were lost in an 1865 fire at NYU, and that daguerreotype photographs themselves don't have a long shelf life unless well-preserved from the moment they were taken, means that the ones remaining are all the more significant.

The moon pictured here appears on an extensively degraded plate, with a vertically flipped last quarter moon, meaning the lunar south is near the top. This shows that Draper used a device called a heliostat to keep light from the moon focussed on the plate for a 20-minute-long exposure. The plate is of the same size and has the same circular image area as those of his first failed attempts.
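
A quick calculation shows why a tracking device like the heliostat was essential for such a long exposure. As Earth rotates, the sky drifts by about 15 degrees per hour, so an unguided 20-minute exposure of the moon (which is only about half a degree across) would smear the image badly. A small Python check of that arithmetic:

    # Why Draper needed a heliostat for a 20-minute exposure.
    sky_drift_deg_per_hour = 15.0   # apparent drift due to Earth's rotation
    exposure_minutes = 20
    moon_diameter_deg = 0.5         # approximate apparent size of the moon

    drift_deg = sky_drift_deg_per_hour * exposure_minutes / 60
    print(f"Drift during exposure: {drift_deg:.1f} degrees")                     # 5.0
    print(f"That is ~{drift_deg / moon_diameter_deg:.0f} moon-widths of smear")  # ~10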

Conflict thesis

Apart from being a physician and the first astrophotographer, Draper also has other claims to fame. He was the invited opening speaker at the famous 1860 meeting at Oxford University where English naturalist Charles Darwin's ‘Origin of Species’ was the subject of discussion. He is also well known for his book ‘A History of the Conflict between Religion and Science’, which was published in 1874. This book marks the origin of what is known as the "conflict thesis" about the incompatibility of science and religion.

While we will probably never know on which particular March 1840 night Draper captured the first lunar image, his pioneering achievement set the ball rolling for astronomical photography. The fact that he achieved it with a handmade telescope attached to a wooden box with a plate coated with chemicals on the back makes it all the more remarkable.

Picture Credit : Google 

What are the mysterious objects in the James Webb telescope images?

A team of international astrophysicists has discovered many mysterious objects that were hidden in images from the James Webb Space Telescope. These include six potential galaxies that emerged so early in the history of the universe and are so massive that they should not be possible under current cosmological theory.

These candidate galaxies may have existed roughly 500 to 700 million years after the Big Bang. That places them at more than 13 billion years ago, close to the dawn of the universe. Containing nearly as many stars as the modern-day Milky Way, they are also gigantic. The results of the study were published in the journal Nature in February.

Not the earliest discovered

Launched in December 2021, the James Webb Space Telescope is the most powerful telescope ever sent into space by us. The candidate galaxies identified this time from its data, however, aren't the earliest galaxies observed by James Webb. Another group of scientists had spotted four galaxies that likely formed 350 million years after the Big Bang. Those galaxies, however, were nowhere near as massive as the current findings.

While looking at a stamp-sized section of an image that looked deep into a patch of sky close to the Big Dipper (a constellation, also known as the Plough), a researcher spotted fuzzy dots that were way too bright and red. In astronomy, red light usually equals old light. As the universe expands, the light emitted by celestial objects stretches, making it appear redder to our instruments.
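
That stretching is quantified by the redshift z: a wave emitted at one wavelength arrives stretched by a factor of (1 + z). The snippet below illustrates the idea; the redshift of 8 is an assumed, roughly typical value for galaxies from that era, not a number from the study:

    # Redshift stretches light: lambda_observed = lambda_emitted * (1 + z)
    z = 8.0                    # assumed redshift, roughly typical of very early galaxies
    lambda_emitted_nm = 121.6  # Lyman-alpha ultraviolet line of hydrogen

    lambda_observed_nm = lambda_emitted_nm * (1 + z)
    print(f"{lambda_emitted_nm} nm ultraviolet light arrives at {lambda_observed_nm:.0f} nm")
    # -> ~1094 nm, shifted into the infrared, the region James Webb is built to observe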

Based on their calculations, the team was also able to suggest that the candidate galaxies they had discovered were huge. Containing mass equivalent to tens to hundreds of billions of sun-sized stars, they were akin to our Milky Way.

Might rewrite astronomy books

As current theory suggests that there shouldn't have been enough normal matter at that time to form so many stars so quickly, confirming the find might rewrite astronomy books. And even if these aren't galaxies, another possibility is that they are a different kind of celestial object, which would make them just as interesting.

For now, the discovery has piqued the interest of the researchers and the astronomical community. More data and information about these mysterious objects from James Webb are being sought to confirm that these candidate galaxies are actually as big as they look, and date as far back in time as they seem to.

Picture Credit : Google

What did Lee De Forest discover?

Exactly 100 years ago, on March 12, 1923, American inventor Lee de Forest conducted a public demonstration of his Phonofilm at a press conference. Even though it wasn't a great financial success, it heralded a new era in movie production as it synced sound with the moving image.

When we think about successful inventors whose inventions have heralded a new era, we imagine that they would have enjoyed considerable personal financial success from it as well. This, however, isn't always the case as some of them turn out to be bad at business. American inventor Lee de Forest was one of them. Even though he contributed immensely to the broadcasting industry and had plenty of scientific successes, he gained little from it all personally.

Unusual upbringing

Born in Iowa, the U.S., in 1873, de Forest had an unusual upbringing for his time. Following his family's move to Alabama, they were avoided by the white community. This was because his father had taken the presidency of the Talladega College for Negroes and was involved in efforts to educate blacks.

Despite his unusual circumstances, de Forest grew up as a happy child, unaware of the prejudice meted out to him, and made friends with the black children in the town. He was drawn towards machinery and by the time he turned 13, he was already making gadgets at will. This is why he took the path towards the sciences, rather than becoming a clergyman as planned by his father.

Invents first triode

Even though education wasn't easy, as he had to do odd jobs to meet expenses beyond those covered by his scholarship and allowance from his parents, de Forest completed his Ph.D. in physics in 1899. By 1906, he presented the audion - the first triode - and it went on to become an indispensable part of electronic circuits.

For several decades, inventors, including American great Thomas Edison, had been trying to bring together the phonograph (a device for recording and reproducing sound) and the moving picture. De Forest, working alongside fellow inventor Theodore Case, first became interested in the idea of sound for films in 1913.

The patented system that he called Phonofilm began as a drawing in 1918. Over the next couple of years, he earned a number of patents pertaining to the process as he perfected it along the way. On March 12, 1923, he conducted a successful demonstration for the press and presented his Phonofilm.

Sound on film

The technological advance that de Forest brought about was to synchronise sound and motion. He did this by placing the sound recording as an optical soundtrack directly on the film. This meant that sound frequency and volume were represented in the form of analog blips of light.

In the weeks that followed, a number of short films premiered using the Phonofilm. As synchronising the sound of the human voice with the lips that moved on screen was still rather difficult, the first sound films that the public viewed still had dialogue titles, but were accompanied by music.

Below-par fidelity

While de Forest did equip nearly 30 theatres around the world with Phonofilm, he couldn't get Hollywood interested in his invention. De Forest had a solution for the sound-sync issue with his Phonofilm, but the fidelity (how accurately a copy reproduces its source) on offer didn't meet the expectations of the age.

In the following years, the motion picture industry shifted to talking pictures, and the sound-on-film process used was similar in principle to that of de Forest's Phonofilm. De Forest, however, was a failed businessman who was bad at judging people. He was defrauded by his own partner, had to pay for lengthy legal battles over his patents, and even had to sell many of these patents, which were then employed profitably by others.

For all his efforts, de Forest at least finished as an Oscar winner. In 1959, two years before his death in 1961, the Academy of Motion Picture Arts and Sciences awarded de Forest an honorary Oscar for the "pioneer invention which brought sound to the motion picture".

Picture Credit : Google 

The making of the Sydney Harbour Bridge

The Sydney Harbour Bridge was officially opened on March 19, 1932. An iconic structure in Sydney and one of the best recognized, photographed, and loved landmarks of the world, it is the world's heaviest steel arch bridge.

There are some human-made structures that are readily identified and immediately associated with the place in which they are located. The Taj Mahal is one such structure that people the world over connect with India. Similarly, there are two landmarks in Sydney - the Sydney Opera House and the Sydney Harbour Bridge - that have turned out to be prominent structures that people globally link with Australia.

Spanning the Sydney Harbour and connecting Sydney with its northern suburbs, the Sydney Harbour Bridge is about 1,150 m in length, with the top of the bridge standing 134 m above the harbour. Apart from having two rail lines and eight lanes for vehicular traffic, the bridge also has a cycleway for bicycles and a walkway for pedestrians.

An old idea

The site of the Sydney Harbour Bridge (both sides of the harbour) was home to the Eora people (Aboriginal Australians) before the arrival of the Europeans in 1788. While the bridge came about only in 1932, the desire to span the harbour and the idea for its construction date back over 100 years.

As early as 1815, Francis Greenway, an architect convicted of forgery in 1812, suggested the construction of a bridge across the harbour. In the decades that followed, the idea took many forms - a large cast iron bridge, a floating bridge, and even a tunnel. Some proposals were serious, some were even accepted, but nothing really materialised as the costs involved were prohibitive.

Father of the Sydney Harbour Bridge

This remained the case till the turn of the century as estimated costs meant that even satisfactory designs couldn't be pursued. It was in 1900 that civil engineer John Job Crew Bradfield first became involved with the idea. Over the next three-plus decades, Bradfield became the project's most vocal advocate and is even remembered as the father of the Sydney Harbour Bridge.

For Bradfield, the bridge was part of his vision for the suburban railway network's electrification. He used his influence to both promote and oversee the construction of the Sydney Harbour Bridge.

 In 1912, Bradfield was appointed as the chief engineer of the Sydney Harbour Bridge and City Transit. Just when it looked like things were about to get moving, World War I put a halt to all plans.

International competition

 It was in 1922 that the Sydney Harbour Bridge Act was passed by parliament. Calling for worldwide tenders for the 'Construction of a Cantilever or an Arch Bridge across Sydney Harbour’, Bradfield turned it into an international competition. After going through the 20 proposals from six companies, Bradfield and his team selected a two-hinged steel arch with abutment (substructure supporting superstructure) towers by English firm Dorman, Long & Co.

The turning of the first sod ceremony, which is a traditional ceremony in many cultures that celebrates the first day of construction, took place in July 1923. The four abutments served as the load-bearing foundation and from these the arch was built simultaneously from both ends. The construction of the arch began on October 26, 1928 and the two arches touched for the first time on August 19, 1930.

As the bridge became self-supporting once the span was complete, the bridge deck could be built and it was completed in June 1931. Load testing began in January 1932 and it was declared safe in the following weeks. While the official opening of the bridge took place on March 19, 1932, over 50,000 school children had already crossed the bridge by then in a series of "school days".

Jobs during the Great Depression

Over 1,600 people worked on the bridge through its near decade-long construction. With the economy slowing down and a worldwide depression setting in during the period, the bridge provided much-needed jobs across various work categories. It wasn't without danger, however, as at least 16 people died during the construction of the bridge.

In all, over 52,800 tonnes of steel was used, out of which 39,000 tonnes were employed in the arch alone. The cost of building the bridge alone was £4,238,839 and the total cost including other expenses was closer to £10 million - a debt that was paid off only in 1988. But then, the bridge handled over 200 trains, 1,60,000 vehicles, and 1,900 bicycles on average every single day in 2017. No wonder the Sydney Harbour Bridge is considered an engineering marvel.

Picture Credit : Google 

What was the first successful airship built by Ferdinand von Zeppelin in 1900?

On July 2, 1900, the first directed flight of the LZ-1, a zeppelin airship, took place in Germany. The man behind it was Ferdinand Graf von Zeppelin, who pioneered the cause of building rigid dirigible airships, so much so that his surname is still popularly used as a generic name.

Aeroplanes are now the norm for air travel, but there was a brief period early in aeronautical history when airships or dirigibles were believed capable of playing a crucial role in aviation development. Large, controllable balloons propelled by an engine, airships are one of two types of lighter-than-air aircraft (the other one being, well, balloons of course!).

Now relegated to aerial observations, advertising and other areas where staying aloft is more important than movement, airships come in three main types: the non-rigid airships or blimps, the semi-rigid airships, and the rigid airships, often called zeppelins. The last category is more popular as zeppelins because it was a German man called Ferdinand Graf von Zeppelin who conceived and developed the first rigid dirigible.

Born in Konstanz, Germany on July 8, 1838, Zeppelin studied at the University of Tubingen before entering the Prussian Army in 1858. He travelled to the U.S. during the American Civil War and acted as a military observer for the Union Army.

An idea is born

It was during this time, in 1863, when Zeppelin had the first of several balloon ascensions at St. Paul, Minnesota. While he was quick to realise the weakness of free balloons, their overdependence on winds and their uncontrollability, it was an experience that stayed with him through a lifetime.

By the 1870s, the idea of building a steerable airship had taken shape in Zeppelin's mind. So when he retired from the army with the rank of brigadier general, he decided to devote himself to building these airships.

Zeppelin toiled for a decade even though there were many naysayers. By 1900, he had built the first rigid-body airship consisting of a long, uniform cylinder with rounded ends. At 420 feet long and 38 feet in diameter, it had a hydrogen gas capacity of nearly 3,99,000 cubic feet.

Flies from a floating hangar

 From a floating hangar on Lake Constance, Germany, the initial flight of LZ-1, the first zeppelin, took place on July 2, 1900. Days away from turning 62, Zeppelin had finally made progress with an idea that had been with him for decades.

While the demonstration wasn't entirely successful, the craft attained speeds of nearly 32 km/hour, enough to spark enthusiasm around zeppelins, get more donations, and have enough funding to keep the progress happening. Zeppelin tirelessly worked to make new and improved dirigibles and even created the first commercial passenger air service with them by 1910, but it wasn't until World War I that support from the government finally came in.

With most aeroplanes still in the development phase, the Germans perceived the advantages of zeppelin-type rigid airships, which could not only attain higher altitudes than aeroplanes of the time, but also remain airborne for nearly 100 hours. More than 100 zeppelins were employed by the Germans for military operations during World War I.

Hindenburg disaster

Zeppelin died in 1917, without seeing the heights that his zeppelins reached, and the tragedy that followed. The LZ-127 ‘Graf Zeppelin’ was launched in 1927 and it was one of the largest ever built. Having a length more than that of two-and-a-half football fields, it made a number of trans-Atlantic flights.

The LZ-129 ‘Hindenburg’ came about in 1936 and was touted to become the most famous zeppelin ever. Instead, tragedy struck and the ‘Hindenburg’ exploded and burned on May 6, 1937 at its mooring mast in New Jersey. (In case you were wondering, the Hindenburg Research investment company, which has constantly been in the news this year following their reports about the Adani Group, was named after this zeppelin.)

The Hindenburg disaster spelt doom for zeppelins as the remaining ones were also taken off service and dismantled. While safety concerns diminished their popularity, they had helped establish the principles of lighter-than-air aircraft and had even been among the first to provide commercial air travel.

Picture Credit : Google 

Creating three-dimensional objects with sound

Additive manufacturing, more commonly identified as 3D printing, allows for the fabrication of complex parts from functional or biological materials. As objects are constructed one line or one layer at a time, conventional 3D printing can be a slow process.

Scientists from the Max Planck Institute for Medical Research and the Heidelberg University have demonstrated a new technology to form a 3D object from smaller building blocks in a single step. They utilise the concept of multiple acoustic holograms to generate pressure fields.

Sound exerts force

If you've ever been near a powerful loudspeaker, you would be aware that sound waves exert forces on matter. When high-frequency ultrasound that is inaudible to the human ear is used, the wavelengths can be pushed into the microscopic realm. This allows researchers to manipulate building blocks that are incredibly small, including biological cells.
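
The wavelength shrinks as the frequency rises, following wavelength = speed of sound / frequency. Here is a rough illustration, assuming a typical speed of sound in water of about 1,480 m/s (a standard figure, not one given in the article):

    # Wavelength of ultrasound in water: wavelength = c / f
    c_water = 1480.0   # assumed speed of sound in water, m/s

    for freq_mhz in (1, 10, 100):
        wavelength_mm = c_water / (freq_mhz * 1e6) * 1000
        print(f"{freq_mhz:>3} MHz -> wavelength ~ {wavelength_mm:.3f} mm")
    # 1 MHz -> ~1.5 mm, 10 MHz -> ~0.15 mm, 100 MHz -> ~0.015 mm (15 micrometres),
    # small enough to act on objects the size of biological cells.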

This research group had previously shown how to form ultrasound using acoustic holograms, which are 3D printed plates made to encode a specific sound field. The scientists had devised a fabrication concept to use these sound fields to assemble materials in 2D patterns.

Holds promise

For this research, the team expanded the concept further by capturing particles and cells freely floating in water and assembling them into 3D shapes. Additionally, the new method works with materials such as glass, hydrogel beads, and biological cells.

Ultrasound affords the advantage that it is gentle on biological cells and that it can travel deep into tissue. Hence, it can be used to remotely manipulate cells without harm. Scientists believe that their technology of creating 3D objects with sound holds promise as it can provide a platform for the formation of tissues and cell cultures.

Picture Credit : Google 

Who was Tycho Brahe, the Danish astronomer known for his planetary observations?

On March 5, 1590, Danish astronomer Tycho Brahe observed a comet. This was one of the many observations made by Brahe, known for his comprehensive astronomical observations.

The invention of the telescope allowed astronomers to peer further and further into space. Ever-improving technology and better equipment mean that our modern telescopes allow us to see way beyond what our predecessors imagined possible. And yet, there was a time when there were no telescopes, but astronomical observations were still being made.

Danish astronomer Tycho Brahe is best known for measuring and fixing the positions of astronomical bodies and developing astronomical instruments. While his observations paved the way for future discoveries, the fact that these were the most accurate measurements from a time when the telescope had not yet been invented makes them all the more special.

Born in Denmark in 1546 to parents who were part of the nobility, Brahe was abducted at a very early age by his wealthy uncle, who raised him. He attended universities in Copenhagen and Leipzig.

Drawn to astronomy

While his family wanted him to be a lawyer and he even studied the subject, Brahe eventually chose to pursue astronomy. The total eclipse of the sun on August 21, 1560, and the conjunction of Jupiter and Saturn in August 1563 - Brahe's first recorded observation - were natural events that pushed Brahe to devote his lifetime to astronomy.

In 1566, Brahe fought Manderup Parsberg, his third cousin and a fellow student, in a duel over who was the better mathematician. While Parsberg and Brahe went on to become good friends after this, Brahe lost a large chunk of his nose during the duel and had to wear a prosthetic nose to mask the disfigurement for the rest of his life. While this nose was long believed to be made of silver, the exhumation of his body in 2010 revealed that it was made of brass.

Brahe observed a supernova in the constellation of Cassiopeia in 1572 and the new star remained visible for nearly a year-and-a-half. He observed a comet late in 1577 and meticulously followed it until it faded from view in January 1578.

Against prevailing theory

While prevailing theory dictated that disturbances in the atmosphere were the reason behind these events, Brahe's measurements showed otherwise. Brahe was able to show that the supernova never changed position with regard to the surrounding stars. And based on his measurements of the comet, he was able to determine that it was at least six times farther away than the moon.

These observations elevated Brahe to a new level and he acquired an international reputation. His fame earned him a more comfortable life and the backing of the rulers, as King Frederick II of Denmark offered him the island of Hven for his exclusive use, along with financial support to carry out astronomical observations.

Brahe built a huge observatory on the island and diligently tracked the heavenly bodies, maintaining impeccable notes of his observations. During his time at Hven, Brahe observed a number of comets. The one he observed on March 5, 1590, while he was engaged in observing Venus, was one of the last he tracked while at the island.

Combined model

Even though Brahe's work laid bare the flaws of the system that was then used, he failed to embrace Polish polymath Nicolaus Copernicus's proposed model of the universe with the Sun at its centre. Brahe instead offered a combined model, with the moon and the sun going around the Earth, even as the five other known planets orbited the sun.

Brahe's influence waned following the death of Frederick in 1588 and most of his income stopped under Frederick's son Christian IV. He left Hven in 1597 and, after short stays in a couple of places, settled in Prague in 1599 and stayed there until his death in 1601.

It was in Prague that German astronomer Johannes Kepler started working as Brahe's assistant. Kepler, ironically, would go on to use Brahe's detailed observations to arrive at his laws of planetary motion and show that planets moved around the sun in elliptical orbits.

Picture Credit : Google 

When was mendelevium discovered?

The discovery of mendelevium was announced at the end of April in 1955. It was described by one of its discoverers as "one of the most dramatic in the sequence of syntheses of transuranium elements".

The search for new elements is something that scientists have been doing for hundreds of years. Once Russian chemist Dmitri Mendeleev organised the elements known at his time according to a repeating, or periodic (and hence the name periodic table), system in the 1860s, the search became a little easier.

This was because the gaps in Mendeleev's periodic table pointed to elements that weren't known yet. The properties of these elements, however, could be predicted based on their place in the table and the neighbours around them, thereby making it easier to discover new elements. Mendeleev's table has since been expanded, to make space for other new elements.

One of those new elements discovered was element number 101, named mendelevium after Mendeleev. American Nobel Prize winner Glenn Seaborg, who was one of the discoverers of the element, wrote that the discovery of mendelevium was "one of the most dramatic in the sequence of syntheses of transuranium elements", in a chapter co-written by him for The New Chemistry. Additionally, he also wrote in that chapter that "It was the first case in which a new element was produced and identified one atom at a time."

Begins with a bang

Ivy Mike, the first thermonuclear device, was detonated in a test on the Eniwetok Atoll in the Pacific Ocean in 1952, sending a radioactive cloud into the air, from which samples were collected. The lab reports suggested that two new elements - elements 99 (einsteinium) and 100 (fermium) - were discovered from the debris. The discoveries came at a time when there was a race to discover new elements. The leading researchers of the U.S. involved in this race were camped at the Radiation Laboratory at the University of California, Berkeley, under the direction of physicist Ernest Lawrence. A team of scientists, which included Albert Ghiorso, Stanley Thompson, Bernard Harvey, Gregory Choppin, and Seaborg, came up with a plan to produce element 101 using a billion atoms of einsteinium-253 that were formed in a reactor.

The idea was to spread the atoms of einsteinium onto a thin gold foil. As its half-life was about three weeks, the researchers effectively had a week to perform their experiments after receiving it. Based on Ghiorso's calculations, they were aware that only about one atom of the new element 101 would be produced for every three hours the gold foil was bombarded with alpha particles.
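
The week-long window follows from the exponential decay law, under which the number of atoms falls by half every half-life. A quick calculation (taking the half-life as roughly 20 days, consistent with the "about three weeks" in the text, and a starting stock of a billion atoms) shows how quickly the precious einsteinium target dwindled:

    # Exponential decay of the einsteinium-253 target: N = N0 * 0.5 ** (t / t_half)
    half_life_days = 20.0   # assumed, consistent with "about three weeks"
    n0 = 1e9                # roughly a billion atoms to start with

    for elapsed_days in (7, 21, 60):
        remaining = n0 * 0.5 ** (elapsed_days / half_life_days)
        print(f"after {elapsed_days:>2} days: ~{remaining:.2e} atoms "
              f"({remaining / n0:.0%} of the original)")
    # after 7 days ~78% remains, after 21 days ~48%, after 60 days only ~12%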

Race against time

As the experiment would yield only a very small amount of the new element, the scientists set up a second gold foil behind the first to catch the atoms. It was a race against time as well, as the half-life of element 101 was expected to be only a few hours.

With the Radiation Laboratory atop a hill and the cyclotron at its base, there really was a mad rush to get the samples to the lab on time. The samples "were collected in a test tube, which I took and then jumped in a car driven by Ghiorso", is how Choppin put it in his own words.

On the night of the discovery, the target was irradiated in three-hour intervals for a total of nine hours. By 4 AM on February 19, 1955, they had recorded five decay events characteristic of element 101 and eight from element 100, fermium. With conclusive evidence of element 101's existence, Choppin mentions that "We left Seaborg a note on the successful identification of Z = 101 and went home to sleep on our success."

At the end of April 1955, the discovery of element 101 was announced to the world. The university's press release stated that "The atoms of the new element may have been the rarest units of matter that have existed on earth for nearly 5 billion years... The 17 atoms of the new element all decayed, of course, and the 'new' element is for the present extinct once again."

Cold War era

As element 101 marked the beginning of the second hundred elements of the periodic table, the scientists wanted to name it after Mendeleev, the man behind the periodic table.

Despite the discovery happening during the Cold War era, Seaborg was able to pull enough strings to convince the U.S. government to accept the proposal to name the element after a Russian scientist. The International Union of Pure & Applied Chemistry approved the name mendelevium and the scientists published their discovery in the June 1955 issue of the journal Physical Review.

While only small quantities of mendelevium have ever been produced, more stable isotopes of the element have since been made. The most stable version known as of now has a half-life of over one-and-a-half months, allowing for better opportunities to further study heavy elements and their properties.

Picture Credit : Google