Will AI help or hinder trust in science?

by worldysnews
New Delhi: In the last year, generative artificial intelligence tools like ChatGPT, Gemini and OpenAI’s video generation tool Sora have captured the public’s attention. All it takes is an internet connection and a web browser to start experimenting with AI. You can interact with AI the same way you would with a human assistant: by talking to it, writing to it, showing it pictures or videos, or doing all of the above. While this ability is completely new territory for the general public, scientists have used AI as a tool for many years. But with greater public knowledge of AI comes greater public scrutiny of how scientists are using it. AI is revolutionizing science: six percent of all scientific work takes advantage of AI, not just in computer science but also in chemistry, physics, psychology and environmental science.
Nature, one of the world’s most prestigious scientific journals, included ChatGPT in its 2023 Nature’s 10 list of the world’s most influential and, until then, exclusively human scientists. Using AI-generated datasets, Lawrence Berkeley Lab used AI to run compound-synthesis experiments on a scale far beyond what humans could accomplish. But AI has even greater potential: helping scientists make discoveries that otherwise would not have been possible at all. It was an AI algorithm that first found signal patterns in brain-activity data pointing to the onset of an epileptic seizure, a feat that even the most experienced human neurologists could not replicate.
Early success stories of the use of AI in science have led some to imagine a future in which scientists collaborate with AI scientific assistants as part of their daily work. That future is already here: CSIRO researchers have been experimenting with AI science agents and have developed robots that can follow spoken-language instructions to complete scientific tasks during fieldwork. While modern AI systems are impressively powerful – particularly so-called general AI tools such as ChatGPT and Gemini – they also have shortcomings. Generative AI systems are susceptible to “hallucinations”, where they make up facts, and they can be biased; Google’s Gemini portraying America’s founding fathers as a diverse group is an interesting case of over-correction for bias. There is a real danger of AI fabricating results, and it has already happened.
It is relatively easy to get a generative AI tool to cite publications that do not exist. Furthermore, many AI systems cannot explain why they produce the outputs they do. This is not always a problem: if AI generates a new hypothesis that is then tested by normal scientific methods, no harm is done. However, for some applications the lack of explanation is a problem. Replication of results is a basic principle in science, but if the steps taken by AI to reach a conclusion remain opaque, replication and verification become difficult, if not impossible. And that can damage public trust in the science produced. Here a distinction should be made between general and narrow AI. Narrow AI is AI trained to complete a specific task, and it has already made great progress.
Google DeepMind’s AlphaFold model has revolutionized the way scientists predict protein structures. But there have been many other, less publicized, successes too: CSIRO using AI to discover new galaxies in the night sky, IBM Research developing AI that rediscovered Kepler’s third law of planetary motion, and Samsung AI building AI that was able to reproduce Nobel Prize-winning scientific breakthroughs. When it comes to narrow AI applied to science, trust remains high. AI systems – especially those based on machine-learning methods – rarely achieve 100 percent accuracy on a given task. (In fact, machine-learning systems outperform humans on some tasks, while humans outperform AI systems on many others. Humans using AI systems generally outperform humans working alone, and they also perform better than AI working alone.
There is a large scientific evidence base for this fact.) Working with AI alongside an expert scientist who confirms and interprets the results is a perfectly valid and widely used way of working: such human-AI teams are seen as delivering better performance than human scientists or AI systems working alone. General AI systems, on the other hand, are not specific to a domain or use case, but rather perform a wide range of tasks. ChatGPT can, for example, create a Shakespearean sonnet, suggest a recipe for dinner, summarize academic literature, or generate a scientific hypothesis. When it comes to general AI, the problems of hallucination and bias are most acute and widespread. This doesn’t mean that general AI isn’t useful to scientists – but it should be used with caution.
This means that scientists must understand and assess the risks of using AI in a specific scenario and weigh them against the risks of not doing so. Scientists are now regularly using general AI systems to help write papers, review academic literature, assist with research and even design experimental plans. With these scientific assistants, a danger arises if the human takes the output for granted. Of course, good, properly trained, hard-working scientists won’t do that. But many scientists out there are just trying to survive in a tough publish-or-perish industry. Even without AI, scientific fraud is already on the rise.
AI can give rise to new levels of scientific misconduct – either through deliberate misuse of the technology, or through sheer ignorance, because scientists do not realize that AI is making things up. Both narrow and general AI have great potential to further advance scientific discovery. A typical scientific workflow conceptually consists of three steps: understanding what problem to focus on, performing experiments related to that problem, and translating the results into real-world impact. AI can help in all three of these steps. However, there is a big caveat.
Current AI tools are not suitable for out-of-the-box use in serious scientific work. Only if researchers responsibly design, build and use next-generation AI tools in support of the scientific method will public trust in both AI and science be gained and maintained. Getting it right is worthwhile: the possibilities for using AI to transform science are endless. Demis Hassabis, the iconic founder of Google DeepMind, famously said: “Building more capable and general AI, in a safe and responsible manner, demands that we solve some of the toughest scientific and engineering challenges of our time.” The reverse is also true: solving the toughest scientific challenges of our time calls for more capable, safe and responsible general AI. Australian scientists are working on it. (360info.org)
