DO ALL SCIENTISTS WORK IN LABORATORIES?


Some scientists do wear white coats and work with test tubes, but many do most of their work in the world outside. A geologist, for example, may have to clamber up a cliff face to obtain samples of rock. Not all scientists wear white coats and work in labs: a wide variety of jobs and careers require knowledge and application of science, from research to business and from regulation to teaching.



The Business Scientist underpins excellent management and business skills with scientific knowledge, supporting evidence-led decision-making within companies and other enterprises. This type of scientist has the scientific and technical knowledge to be credible with colleagues and competitors, as well as confidence in a business environment. They are found in science and technology companies in a wide variety of roles, from R&D and marketing to the C-suite itself.



The Developer, or translational, Scientist uses the knowledge generated by others and transforms it into something that society can use. They might be developing products or services, ideas that change behaviour, improvements in health care and medicines, or the application of existing technology in new settings.



They are found in research environments and may be working with Entrepreneur and Business scientists to help bring their ideas to market.



The Entrepreneur Scientist makes innovation happen. Their scientific knowledge and connections are deep enough to be able to see opportunities for innovation – not just in business, but also in the public sector and other sectors of society.



They blend their science knowledge and credibility with people management skills, entrepreneurial flair and a strong understanding of business and finance, to start their own businesses or help grow existing companies.



The Explorer Scientist is someone who, like the crew of the Enterprise, is on a journey of discovery “to boldly go where no one has gone before”. They rarely focus on a specific outcome or impact; rather they want to know the next piece of the jigsaw of scientific understanding and knowledge. They are likely to be found in a university or research centre or in Research & Development (R&D) at an organisation, and are likely to be working alone.



The Regulator Scientist is there to reassure the public that systems and technology are reliable and safe, through monitoring and regulation. They will have a mix of skills and while they may not get involved in things like lab work, they will have a thorough understanding of the science and the processes involved in monitoring its use or application. They are found in regulatory bodies, such as the Food Standards Agency, and in a wide range of testing and measurement services.



The Technician Scientist provides operational scientific services in a wide range of ways. These are the scientists we have come to depend on within the health service, forensic science, food science, health and safety, materials analysis and testing, education and many other areas. Rarely visible, this type of scientist is found in laboratories and other support service environments across a wide variety of sectors.



The Investigator Scientist digs into the unknown observing, mapping, understanding and piecing together in-depth knowledge and data, setting out the landscape for others to translate and develop. They are likely to be found in a university or research centre or in Research & Development (R&D) at an organisation, working in a team and likely in a multi-disciplinary environment.


































HOW IS SCIENTIFIC KNOWLEDGE PASSED ON?


It is incredible to us now that five hundred years ago it was possible for a person to have a good understanding of every branch of science then known. Today there is so much information available that no one person can be informed about every area of science, and even specialists have difficulty in keeping up with new developments. There is a long-established tradition that scientists who have made a new discovery publish a “paper”, or article, on the subject in a scientific journal. People working in the same field can then read this to keep up to date with their subject. Some discoveries are so important or amazing that they reach the general public, through radio, television, books and newspapers.



Until the past decade, scientists, research institutions, and government agencies relied solely on a system of self-regulation based on shared ethical principles and generally accepted research practices to ensure integrity in the research process. Among the very basic principles that guide scientists, as well as many other scholars, are those expressed as respect for the integrity of knowledge, collegiality, honesty, objectivity, and openness. These principles are at work in the fundamental elements of the scientific method, such as formulating a hypothesis, designing an experiment to test the hypothesis, and collecting and interpreting data. In addition, more particular principles characteristic of specific scientific disciplines influence the methods of observation; the acquisition, storage, management, and sharing of data; the communication of scientific knowledge and information; and the training of younger scientists. How these principles are applied varies considerably among the several scientific disciplines, different research organizations, and individual investigators.



The basic and particular principles that guide scientific research practices exist primarily in an unwritten code of ethics. Although some have proposed that these principles should be written down and formalized, the principles and traditions of science are, for the most part, conveyed to successive generations of scientists through example, discussion, and informal education. As was pointed out in an early Academy report on responsible conduct of research in the health sciences, “a variety of informal and formal practices and procedures currently exist in the academic research environment to assure and maintain the high quality of research conduct”.



Physicist Richard Feynman invoked the informal approach to communicating the basic principles of science in his 1974 commencement address at the California Institute of Technology:



[There is an] idea that we all hope you have learned in studying science in school—we never explicitly say what this is, but just hope that you catch on by all the examples of scientific investigation. It's a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty—a kind of leaning over backwards. For example, if you're doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it; other causes that could possibly explain your results; and things you thought of that you've eliminated by some other experiment, and how they worked—to make sure the other fellow can tell they have been eliminated.



Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can—if you know anything at all wrong, or possibly wrong—to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. In summary, the idea is to try to give all the information to help others to judge the value of your contribution, not just the information that leads to judgment in one particular direction or another.
































WHAT IS A HYPOTHESIS?


Anyone can make a guess, but scientists set about finding out if their ideas are true in an organized way. A hypothesis is a theory — an idea — about why something happens or what makes something work. A scientist will then try to think of a way of testing whether this idea is correct. Often this will mean designing a special experiment.



A hypothesis (plural hypotheses) is a proposed explanation for a phenomenon. For a hypothesis to be a scientific hypothesis, the scientific method requires that one can test it. Scientists generally base scientific hypotheses on previous observations that cannot satisfactorily be explained with the available scientific theories. Even though the words "hypothesis" and "theory" are often used synonymously, a scientific hypothesis is not the same as a scientific theory. A working hypothesis is a provisionally accepted hypothesis proposed for further research, in a process beginning with an educated guess or thought.
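The testability requirement above can be made concrete with a toy example (the coin, the flip count, and the thresholds are all invented for illustration, not drawn from the text): a hypothesis such as "this coin is fair" makes a prediction that data can contradict.

```python
import random

random.seed(42)  # fixed seed so the toy example is reproducible

# Hypothesis: "this coin is fair" (probability of heads = 0.5).
# The hypothesis is scientific because it predicts what 1000 flips
# should look like, and the data could contradict that prediction.
heads = sum(random.random() < 0.5 for _ in range(1000))

# For 1000 fair flips: mean = 500, standard deviation ≈ 15.8.
# Roughly 99.7% of fair coins land within three standard deviations.
mean = 1000 * 0.5
sd = (1000 * 0.5 * 0.5) ** 0.5
consistent_with_fair = abs(heads - mean) <= 3 * sd
print(heads, consistent_with_fair)
```

If the head count fell far outside that band, the data would count against the hypothesis; a claim that no possible data could contradict would not be a scientific hypothesis.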



A different meaning of the term hypothesis is used in formal logic, to denote the antecedent of a proposition; thus in the proposition "If P, then Q", P denotes the hypothesis (or antecedent) and Q can be called the consequent. P is the assumption in a (possibly counterfactual) "what if" question.
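The logical reading above can be sketched in a few lines of code (an illustrative snippet, not part of the source): the conditional "If P, then Q" is false only in the single case where P holds and Q fails.

```python
# "If P, then Q" is false only when the antecedent P holds and the
# consequent Q fails; as a Boolean formula it is (not P) or Q.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# Full truth table for the conditional:
for p in (True, False):
    for q in (True, False):
        print(p, q, implies(p, q))
```

Note that the conditional comes out true whenever the antecedent P is false, which is what allows counterfactual "what if" assumptions to appear as hypotheses.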



The adjective hypothetical, meaning "having the nature of a hypothesis", or "being assumed to exist as an immediate consequence of a hypothesis", can refer to any of these meanings of the term "hypothesis".






























HOW HAVE COMPUTERS HELPED SCIENTISTS?


Scientific study relies on collecting and interpreting information (data). Sometimes thousands of different observations or measurements are made. Computers can help to collect and organize the data. For example, an astronomer might want to study the movement of a planet. A computer, attached to a radio telescope, can measure the position of the planet every five minutes for weeks — a task that would be very tedious for a scientist. Having collected the data, the computer can also process it and use it to predict future patterns of movement. Likewise, computers can perform very complex calculations at incredible speed, working out in less than a second something that a century ago might have taken a lifetime to calculate. Other computer programs can draw three-dimensional plans of objects as tiny as an atom or as large as a cathedral. These models can be turned on screen so that all sides can be viewed. Finally, scientists can search for information on the Internet, instead of visiting libraries that may be in other countries.
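The astronomer example (take a measurement every five minutes, then let the computer process the data and predict future movement) can be sketched as follows; the drift rate and the readings are made-up toy numbers, not real astronomical data.

```python
# Simulated measurements: a planet's angular position (degrees)
# recorded every 5 minutes, drifting at a constant (unknown) rate.
times = [5 * i for i in range(10)]            # minutes
positions = [10.0 + 0.4 * t for t in times]   # degrees (noise-free toy data)

# Fit a straight line (least squares) to recover the drift rate,
# then extrapolate to predict the position an hour after the start.
n = len(times)
mean_t = sum(times) / n
mean_p = sum(positions) / n
slope = (sum((t - mean_t) * (p - mean_p) for t, p in zip(times, positions))
         / sum((t - mean_t) ** 2 for t in times))
intercept = mean_p - slope * mean_t

predicted = intercept + slope * 60  # predicted position at t = 60 minutes
print(round(slope, 3), round(predicted, 1))   # → 0.4 34.0
```

Collecting the readings, fitting the model, and extrapolating are exactly the tedious, repetitive steps that computers take over from the scientist.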



Science has changed the world. The modern world, full of cars, computers, washing machines, and lawnmowers, simply wouldn't exist without the scientific knowledge that we've gained over the last 200 years. Science has cured diseases, decreased poverty, and allowed us to communicate easily with hundreds of different cultures. The technology that we develop not only helps us in our everyday lives, it also helps scientists increase human knowledge even further.



Science is the pursuit of knowledge about the natural world through systematic observation and experiments. Science is really about the process, not the knowledge itself. It's a process that allows inconsistent humans to learn in consistent, objective ways. Technology is the application of scientifically gained knowledge for practical purpose, whether in our homes, businesses, or in industry. Today we're going to discuss how that technological know-how gained through science allows us to expand our scientific knowledge even further.



It's hard to imagine science without technology. Science is all about collecting data, or in other words, doing experiments. To do an experiment, you need equipment, and even the most basic equipment is technology. Everything from the wheel to a Bunsen burner to a mirror is technology. So all experiments use technology.



But, as technology advances, we are able to do experiments that would have been impossible in the past. We can use spectroscopes (or spectrometers) to shine light through a material and see what elements it's made of. We can use gigantic telescopes to see into the far reaches of our universe. We can use MRI scanners to study the inside of the human body and even the brain itself.



We can use a microscope to see the very tiny. And, we can use electronic devices to take measurements that are far more precise than anything that came before us. Technology is at the heart of all modern science experiments.
































HOW ARE EXPERIMENTS DESIGNED?


In the world around us, nothing happens in isolation. One event affects another. The activity of one living thing changes the lives of other organisms. As the natural world is very complicated, it can be difficult to see clearly how and why things are happening. One of the most important factors in designing an experiment is to try to isolate the particular event or substance being studied, so that the results of the experiment are not influenced by other things. For example, to see if a plant needs sunlight to live, you can put it in the dark and watch what happens. But it is important to make sure that the plant still has the same soil, amount of water and temperature as before, so that you can be sure that any changes in the plant are a result of the lack of sunlight.



Many experiments use something called a control. For example, to test a new drug, a hundred people may be given it and their health monitored very carefully. A hundred similar people may be given no drug or a harmless substance and their health monitored just as accurately. They are the control. It is the difference in results between the two groups of people that is important. The control group is designed to show what would have happened to the first group if it had received no drugs. Only then can scientists tell if the drug has had an effect.



An experiment is a type of research method in which you manipulate one or more independent variables and measure their effect on one or more dependent variables. Experimental design means creating a set of procedures to test a hypothesis.



A good experimental design requires a strong understanding of the system you are studying. By first considering the variables and how they are related, you can make predictions that are specific and testable.



How widely and finely you vary your independent variable will determine the level of detail and the external validity of your results. Your decisions about randomization, experimental controls, and independent vs repeated-measures designs will determine the internal validity of your experiment.
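The control-group logic described above can be shown in a minimal simulation (the group sizes and recovery rates are invented for illustration, not from the text):

```python
import random

random.seed(0)  # fixed seed so the toy trial is reproducible

def run_group(n, recovery_rate):
    """Count how many of n people recover, each with the given chance."""
    return sum(random.random() < recovery_rate for _ in range(n))

# 100 people receive the drug; 100 similar people (the control) receive
# a harmless substance. In this made-up model the drug raises the
# chance of recovery from 30% to 50%.
treated = run_group(100, 0.50)
control = run_group(100, 0.30)

# It is the difference between the groups that matters: the control
# shows what would have happened to the first group without the drug.
print(treated, control, treated - control)
```

Only because the control group is handled identically in every other respect can the difference in outcomes be attributed to the drug itself.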






























What was the purpose of the Deep Impact mission?



Comets have been observed by humanity and recorded in human history for ages. Celestial objects consisting of a nucleus of ice and dust and, when near the sun, a tail of dust and gas particles pointing away from the sun, comets were embroiled in superstition until they were studied from a scientific viewpoint in the last few centuries.



It was only in this millennium that we developed enough know-how to target one such comet and deploy an impact probe towards it. The resulting collision, the first such planned event, enabled us to rethink our understanding of how comets form and how they work.



Unlike previous NASA flyby missions, Deep Impact was intended from the start to study the internal composition of a comet. It consisted of two parts: a coffee-table-sized main flyby spacecraft that weighed a little over 600 kg, and a smaller probe that weighed 372 kg and was designed for the impact.



Unique payload



Fitted with some of the latest instruments and cameras, Deep Impact also carried an unusual payload. This was a compact disc, part of a campaign to send names to a comet, holding the names of 625,000 people.



Following its launch on January 12, 2005, Deep Impact was put in a low-Earth orbit, then an elliptical orbit, before it headed onto an Earth-escape trajectory. Nearly six months and 429 million km later, Deep Impact reached Comet 9P/Tempel 1.



Bang on target



On July 3, 2005, the impactor probe was released by Deep Impact to move into the path of the comet. As per plan, on July 4, 2005 (American Independence Day), the probe, which was travelling at a relative velocity of 37,000 km/hr at the time of impact, crashed into Tempel 1. The collision created an explosion equivalent to 4.26 tonnes of TNT and a crater that was about 150 m in diameter.
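As a rough back-of-envelope check of the figures quoted above (my own arithmetic, not from the article), the impact energy follows from the probe's mass and speed via the kinetic-energy formula E = ½mv²:

```python
# Figures quoted in the article: a 372 kg impactor at 37,000 km/h.
mass_kg = 372
speed_ms = 37_000 * 1000 / 3600          # convert km/h to m/s (~10,278 m/s)

# Kinetic energy E = 1/2 * m * v^2
energy_j = 0.5 * mass_kg * speed_ms ** 2

# One tonne of TNT is conventionally defined as 4.184e9 joules.
tnt_tonnes = energy_j / 4.184e9
print(round(tnt_tonnes, 1))              # → 4.7
```

This simple formula lands in the same few-tonnes range as the 4.26-tonne figure quoted above; the published value depends on the precise impact speed and how the energy is accounted.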



Minutes later, the flyby spacecraft passed close-by, clicking images of the crater, ejecta plume and the entire nucleus. Ground-based and space-based observatories also collected information, both about the impact and its results, in a coordinated effort.



Based on these results, scientists were able to conclude that Comet Tempel 1 had probably originated in the Oort Cloud – the most distant region of our solar system. The data also showed that the comet was rather fluffy, with nearly 50% of the nucleus and 75% of the surface shell (the comet as a whole) comprising empty space.



EPOXI mission



Even though Deep Impact’s primary mission was over, it had enough propellant left to take on a supplementary mission. This mission came to be known as EPOXI, a name derived by combining the two components of the mission: Extrasolar Planet Observation and Characterisation (EPOCh) and Deep Impact Extended Investigation (DIXI).



After Deep Impact’s new target, Comet 85P/Boethin, was lost as it probably broke up, the spacecraft was redirected to Comet 103P/Hartley 2. This flyby was achieved in November 2010 and the data collected showed that the two lobes of Hartley 2 were different in composition.



Before getting to Hartley 2, Deep Impact had to perform three Earth flybys over two years. Ahead of the second flyby, it performed the EPOCh mission, using its instruments to investigate extrasolar planets around eight distant stars.



Works with Chandrayaan-1



It also utilized this time to better study our Earth, which included collecting enough detail to find out what a habitable world looks like. Along with India’s Chandrayaan-1 and NASA’s Cassini space probe, Deep Impact was also able to reveal the first clear evidence of water on the surface of the moon.



A third flyby target, for 2020, was identified for Deep Impact, and the spacecraft was used in the meantime for remote study of faraway comets. But NASA lost touch with it in August 2013 and, after repeated attempts to re-establish contact failed, officially abandoned the effort on September 20, 2013.



In its operational period of over eight years, Deep Impact produced far more data than planned and enhanced our understanding of the solar system, especially of comets.

Picture Credit : Google


Why has the kilogram been redefined?



On May 20, 2019, World Metrology Day, the definition of the kilogram was changed. Earlier, the kilogram was defined by the mass of a block of platinum-iridium alloy housed at the International Bureau of Weights and Measures in France. But after May 20, the kilogram is defined by the Planck constant, a constant of nature that relates to how matter releases energy.



The main problem with using the prototype block, nicknamed Big K, as a universal standard for mass is that Big K, being a manmade object, is imperfect and subject to change over time. Indeed, it is estimated that Big K has lost about 50 micrograms since it was created. Since 1 kilogram is defined as exactly the mass of Big K, if Big K’s mass changes then the value of the kilogram must also change. Obviously, a standard for mass whose own mass changes is not a good idea for science. The ideal set of units should be static and unchanging.



Hence the redefinition. The new definition of the kilogram ties its value to a fundamental constant of nature that will never change. Even billions of years from now, when Big K has disintegrated into dust, the constant that defines the value of the kilogram will be exactly the same, because the value of the unit is tied to a fundamental aspect of reality.


What problem did Google's quantum computer solve?



In October 2019, Google claimed quantum supremacy (the point at which a quantum computer performs a task that no classical computer can complete in a feasible amount of time) when its quantum computer ‘Sycamore’ performed a complex calculation in 200 seconds. The company claimed that it would take the world’s fastest supercomputer nearly 10,000 years to perform the same calculation.



Computer scientists have seen quantum supremacy — the moment when a quantum computer could perform an action a conventional computer couldn’t — as an elusive, important milestone for their field. There are many research groups working on quantum computers and applications, but it appears Google has beaten its rivals to this milestone.



According to John Preskill, the Caltech particle physicist who coined the term “quantum supremacy,” Google’s quantum computer “is something new in the exploration of nature. These systems are doing things that are unprecedented.”



It sounds all very gee-whiz. And some scientists think these computers will one day lead to discoveries of new drugs and possibly whole new fields of chemistry. Others fear they’ll be used one day to crack the toughest security protocols.


When was the first hybrid ancient human found?



In 2018, scientists studying ancient DNA revealed that they had found the first known hybrid ancient human. The DNA, sourced from a 90,000-year-old bone, belonged to a teenage girl whose father was a Denisovan and whose mother was a Neanderthal.



This is the first time scientists have identified an ancient individual whose parents belonged to distinct human groups. The findings were published on 22 August in Nature.



“To find a first-generation person of mixed ancestry from these groups is absolutely extraordinary,” says population geneticist Pontus Skoglund at the Francis Crick Institute in London. “It’s really great science coupled with a little bit of luck.”



The team, led by palaeogeneticists Viviane Slon and Svante Pääbo of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, conducted the genome analysis on a single bone fragment recovered from Denisova Cave in the Altai Mountains of Russia. This cave lends its name to the ‘Denisovans’, a group of extinct humans first identified on the basis of DNA sequences from the tip of a finger bone discovered there in 2008. The Altai region, and the cave specifically, were also home to Neanderthals.


Which is the world's first trillion-dollar company?



In August 2018, Apple became the world’s first-ever trillion-dollar company, thereby becoming the richest company to ever exist. However, it quickly lost its top spot after poor iPhone sales and multiple controversies.



PetroChina, a state-owned oil giant, was the first company to hit this mark during its initial public offering in 2007, though its value has declined dramatically since then. Apple on the other hand is the first non-state-owned company to reach this stratospheric valuation on its own merits through a long, sustained upward climb without implicit government guarantees or backing.



It is a market development that has been in the pipeline for well over a year. Recent higher sales of the expensive iPhone X gave investors more confidence in the company and helped it to this watershed moment, leaving second-place Amazon and third-place Google well behind in the mid to high $800 billion value range.


What were humans like 100,000 years ago?



In July 2017, fossils of five early humans found in North Africa were unveiled by researchers. These fossils show that Homo sapiens emerged on Earth at least 100,000 years earlier than previously recognized. The finds also suggested that humans may have been evolving in the same direction all over the African continent.



Homo sapiens is part of a group called hominids, which were the earliest humanlike creatures. Based on archaeological and anthropological evidence, we think that hominids diverged from other primates somewhere between 2.5 and 4 million years ago in eastern and southern Africa. Though there was a degree of diversity among the hominid family, they all shared the trait of bipedalism, or the ability to walk upright on two legs.



When humans migrated from Africa to colder climates, they made clothing out of animal skins and constructed fires to keep themselves warm; often, they burned fires continuously through the winter. Sophisticated weapons, such as spears and bows and arrows, allowed them to kill large mammals efficiently. Along with changing climates, these hunting methods contributed to the extinction of giant land mammals such as mammoths, giant kangaroos, and mastodons. Fewer giant mammals, in turn, limited hunters’ available prey.


What can Sophia the robot do?



At a time when it is difficult for humans to get citizenship of a particular country, Sophia, a humanoid robot developed by Hong Kong-based Hanson Robotics, was conferred citizenship by the Kingdom of Saudi Arabia in October 2017. Sophia looks and talks like a human. From afar, you probably cannot even tell she’s a robot!



She is fashioned after Audrey Hepburn and can walk, talk and emote too. And Sophia, the world's first robot citizen, can also draw sketches, contextualise a conversation and attach names to faces, say her makers.



The delicate looking woman robot with doe-brown eyes and long fluttering eyelashes, who mesmerised the world when she was activated in 2016, is getting smarter by the day.



Sophia, dressed in a black skirt and a grey metallic shirt, was among several industrial and social robots, including 'Professor Einstein', exhibited at the 28th IEEE Conference on Robot and Human Interactive Communication (RO-MAN 2019).

Picture Credit : Google


How did Portugal ditch fossil fuel power for 4 days?



For four days in May, Portugal, the European country known for soccer and Cristiano Ronaldo, was powered by renewable energy. Yes, for four consecutive days the country’s electricity was provided entirely by wind, solar and hydro-generated energy.



The country’s zero emission milestone was announced just days after Germany, another European superpower, announced that clean energy had powered almost all its electricity needs on May 15.



Overall, renewable energy is gaining ground on the world’s electric grids, accounting for nearly 60 percent of the world’s new electric capacity, according to the renewable energy research network REN21.



Still, wind and solar panels together account for just 4 percent of the total power supply. Though the coal industry has been on the decline in some places, the world is still largely reliant on fossil fuels to generate power. Efforts to cut planet-warming greenhouse gases depend markedly on the power sector, which accounts for about 42 percent of all energy-related carbon emissions. Nuclear plants can contribute to the clean-energy bottom line, but they face opposition over waste and safety issues, as well as political and economic headwinds.



Wind and solar command a lot of attention when it comes to renewable energy, but in many cases, other low-emissions sources are providing big assists. A key player in Portugal’s win is hydroelectric power, which accounts for about 19 percent of the country’s supply. Hydro can provide the steadier output needed to fill in gaps when the wind isn’t blowing or the sun isn’t shining.



Having a strong reserve of geothermal energy can help lay a foundation, too, as is the case in Iceland, the Philippines, and elsewhere. But developing those resources takes time, money, and political consensus, which can often hold projects back.

Picture Credit : Google


Who beat a human in the game of Go?



In March 2016, Google DeepMind’s AlphaGo program, powered by machine learning, defeated South Korean grandmaster Lee Sedol in the game of Go. This was the first time an artificial intelligence program had beaten a top-ranked Go professional. The match and AlphaGo’s win also brought attention to the game of Go in the West.



Despite his own defeat, “this is the greatest honor in a lifetime,” Chinese grandmaster Ke Jie later said of playing against AlphaGo.



Teaching computers to master Go has long been considered a holy grail for artificial intelligence scientists — there are more possible configurations of the board than there are atoms in the universe. Before this match, AlphaGo had already clocked many victories against top-ranked masters, a significant advancement that happened far sooner than experts expected.

Picture Credit : Google


When were the world's first truly wireless earphones unveiled?



At the IFA technology show in Berlin in 2015, Onkyo, a Japanese consumer electronics manufacturer, unveiled the world’s first truly wireless earphones. With no cable connecting the two earpieces, the W800BT earphones had two earbuds that work independently from each other and deliver a balanced sound experience.



Developed in partnership with audio group Gibson Innovations, the W800BT delivers a balanced sound across a frequency range of 20 Hz-20 kHz. The earbuds connect to each other and to a smartphone wirelessly, using Bluetooth.



Onkyo claims that the headphones offer a clear and accurate audio experience with passive noise isolation. The right earpiece also includes a microphone to enable hands-free calls and can be used with any Bluetooth-enabled device.



In 2016, Apple Inc. launched the AirPods, which took the world by storm.

Picture Credit : Google