Risk Mitigation and Benefit Maximization of AI in Research

Posted by baoshi.rao

    In space and environmental science, the application of AI tools is becoming increasingly widespread, in areas such as weather forecasting, climate modeling, and energy and water resource management. We are experiencing an unprecedented surge in AI applications, facing both opportunities and risks, which demands careful consideration.

    A tracking report from the American Geophysical Union (AGU) further reveals how widespread AI tools have become: from 2012 to 2022, the number of papers mentioning AI in their abstracts grew exponentially, highlighting AI's significant impact in areas such as weather forecasting, climate modeling, and resource management.

    AI-related paper publication trends

    However, while AI unleashes powerful capabilities, it also brings potential risks. Insufficiently trained models or improperly designed datasets may lead to unreliable results or even potential harm. For example, using tornado reports as input data might bias the training data towards densely populated areas, where more weather events are observed and reported. As a result, the model could overestimate tornado occurrences in urban areas and underestimate them in rural areas, leading to potential hazards.
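
    To make this reporting-bias mechanism concrete, below is a minimal simulation sketch with invented rates (not real storm statistics): the true tornado rate is identical everywhere, but urban events are reported far more often, so a model trained on the reports alone would infer a spurious urban excess.

    ```python
    # Hypothetical rates for illustration only: the true event rate is the
    # same in both regions, but the reporting probability differs sharply.
    import numpy as np

    rng = np.random.default_rng(0)

    true_rate = 0.02                              # same true rate everywhere
    p_report = {"urban": 0.95, "rural": 0.40}     # assumed reporting rates
    n_cells = 100_000                             # grid cells per region

    for region, p in p_report.items():
        events = rng.random(n_cells) < true_rate          # what happened
        reported = events & (rng.random(n_cells) < p)     # what gets recorded
        print(f"{region}: true rate {true_rate:.3f}, "
              f"estimated from reports {reported.mean():.4f}")

    # Typical output: urban ~0.019, rural ~0.008. A model fit to the reports
    # would conclude tornadoes are over twice as likely in urban areas,
    # even though the underlying rates are identical.
    ```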

    This phenomenon raises an important question: when, and to what extent, can people trust AI to avoid such risks?

    With support from NASA, AGU convened experts to develop a set of guidelines for 'Applying Artificial Intelligence in Space and Environmental Sciences', focusing on the ethical issues that may arise in AI applications. The guidelines are not limited to space and environmental sciences; they provide guidance for AI applications more broadly. The related article has been published in Nature.

    AI Application Guidelines

    Paper published in Nature

    Paper link: https://www.nature.com/articles/d41586-023-03316-8

    Currently, many people remain cautious about the trustworthiness of AI/ML. To help researchers and institutions build trust in AI, AGU has established six guidelines, summarized below.

    1. Transparency, Documentation and Reporting

    In AI/ML research, transparency and comprehensive documentation are crucial. It is essential to provide not only data and code but also to document participants and problem-solving approaches, including the handling of uncertainties and biases. Transparency should be maintained throughout the entire research process, from conceptual development to application.
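
    As one illustration of what such documentation can look like in practice, here is a minimal machine-readable "model card" sketch; the field names and values are hypothetical, not a format prescribed by the guidelines.

    ```python
    # A lightweight record of the items the guideline names: data, code,
    # contributors, and known uncertainties and biases. Illustrative only.
    import json

    model_card = {
        "model": "tornado-risk-v1",                      # hypothetical model
        "code": "https://example.org/repo",              # placeholder URL
        "training_data": {
            "source": "storm reports, 2012-2022",
            "known_biases": ["reporting is denser near population centers"],
        },
        "contributors": ["modeling team", "domain reviewers"],
        "uncertainty": "rates underestimated in sparsely observed regions",
        "intended_use": "research only; not for public warnings",
    }

    print(json.dumps(model_card, indent=2))  # ship alongside data and code
    ```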

    2. Intentionality, Interpretability, Explainability, Reproducibility and Replicability

    When using AI/ML for research, it is imperative to attend to intentionality, interpretability, explainability, reproducibility, and replicability. Prioritizing open-science methods enhances the interpretability and reproducibility of models and encourages the development of methods for explaining how AI models work.
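
    On the replicability side, a small common-practice sketch (not a procedure mandated by the guidelines) is to pin random seeds and record the software environment alongside the results, so that an identical rerun is at least possible.

    ```python
    # Pin seeds and capture the environment so a rerun can reproduce results.
    import json
    import platform
    import random

    import numpy as np

    SEED = 42
    random.seed(SEED)
    np.random.seed(SEED)

    run_record = {
        "seed": SEED,
        "python": platform.python_version(),
        "numpy": np.__version__,
    }
    with open("run_record.json", "w") as f:   # archive next to the outputs
        json.dump(run_record, f, indent=2)
    ```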

    3. Risk, Bias and Effects

    Understanding and managing the potential risks and biases in datasets and algorithms is crucial for research. By better comprehending the sources of risks and biases, as well as methods to identify these issues, we can more effectively manage and address adverse outcomes, thereby maximizing public benefit and impact.

    4. Participatory Methods

    In AI/ML research, adopting inclusive design and implementation approaches is essential. It's important to ensure that diverse communities, professional fields, and backgrounds have a voice, particularly for communities that may be affected by the research. Co-producing knowledge, participating in projects, and collaboration are vital to ensuring the inclusivity of research.

    5. Outreach, Training and Leading Practices

    Academic organizations need to provide support across sectors to ensure training in the ethical use of AI/ML for researchers, practitioners, funders, and the broader AI/ML community. Scientific associations, institutions, and other organizations should provide resources and expertise to support AI/ML ethics training, and should educate societal decision-makers about the value and limitations of AI/ML in research, enabling responsible decision-making and mitigating negative impacts.

    6. Considerations for Organizations, Institutions, Publishers, Societies and Funders

    Academic organizations have the responsibility to spearhead the establishment and management of policies related to AI/ML ethical issues, including codes of conduct, principles, reporting methods, decision-making processes, and training. They should clarify values, design governance structures, and foster cultural development to ensure ethical AI/ML practices are implemented. Additionally, executing these responsibilities across organizations and institutions is essential to ensure ethical practices are adopted throughout the field.

    Beyond the six guidelines, the Nature article distills five practical recommendations for researchers and institutions.

    1. Watch out for gaps and biases

    When dealing with AI models and data, it is crucial to be vigilant about gaps and biases. Factors such as data quality, coverage, and racial biases can affect the accuracy and reliability of model results, potentially leading to unforeseen risks.

    For instance, the coverage and authenticity of environmental data vary significantly across regions. Areas with frequent cloud cover (such as tropical rainforests) or limited sensor coverage (such as polar regions) yield less representative data. Dataset richness and quality often favor affluent regions while overlooking marginalized communities, including historically discriminated groups, and such data is frequently used to inform recommendations and action plans for the public, businesses, and policymakers. For example, dermatology algorithms trained primarily on data from light-skinned patients are less accurate at diagnosing skin lesions and rashes in Black patients. Institutions should prioritize researcher training, scrutinize data and model accuracy, and establish expert committees to oversee AI model applications.
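
    One concrete habit that surfaces such gaps is disaggregated evaluation: reporting accuracy per subgroup rather than only in aggregate. The sketch below uses synthetic data and invented column names purely for illustration; the overall accuracy looks acceptable while the under-represented group fares much worse.

    ```python
    # Synthetic example: group "B" is under-represented and the model is
    # assumed to be weaker on it; only the per-group view reveals this.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    n = 1000
    df = pd.DataFrame({
        "group": rng.choice(["A", "B"], size=n, p=[0.9, 0.1]),
        "label": rng.integers(0, 2, size=n),
    })
    error_rate = np.where(df["group"] == "A", 0.05, 0.30)   # assumed skill gap
    df["pred"] = np.where(rng.random(n) < error_rate,
                          1 - df["label"], df["label"])

    correct = df["pred"] == df["label"]
    print(f"overall accuracy: {correct.mean():.2f}")   # looks fine
    print(correct.groupby(df["group"]).mean())         # the gap appears here
    ```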

    2. Develop ways to explain how AI models work

    When researchers use classical models in their studies and publish papers, readers typically expect access to the underlying code and relevant specifications. However, current requirements do not mandate that researchers provide such information for AI tools, leaving them lacking in transparency and interpretability. As a result, even when identical algorithms are applied to the same experimental data, differences in methodology may prevent precise replication of results. In published research, scientists should therefore clearly document how AI models were constructed and deployed so that others can properly evaluate them.

    Researchers recommend conducting cross-model comparisons and dividing data sources into comparative groups for examination. The field urgently needs further standards and guidelines for interpreting and evaluating how AI models operate, so that outputs can be assessed with statistical confidence levels comparable to those of traditional statistics.
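
    As a sketch of what such a statistical statement could look like, the following bootstraps a confidence interval on the accuracy difference between two models. The labels and predictions are synthetic placeholders; the resampling procedure itself is standard practice, not something mandated by the article.

    ```python
    # Bootstrap a 95% CI on the accuracy difference between two models.
    import numpy as np

    rng = np.random.default_rng(2)
    y = rng.integers(0, 2, size=500)                        # stand-in labels
    pred_a = np.where(rng.random(500) < 0.85, y, 1 - y)     # model A, ~85% acc
    pred_b = np.where(rng.random(500) < 0.80, y, 1 - y)     # model B, ~80% acc

    diffs = []
    for _ in range(2000):
        idx = rng.integers(0, len(y), size=len(y))          # resample with replacement
        diffs.append((pred_a[idx] == y[idx]).mean()
                     - (pred_b[idx] == y[idx]).mean())

    lo, hi = np.percentile(diffs, [2.5, 97.5])
    print(f"accuracy difference, 95% CI: [{lo:.3f}, {hi:.3f}]")
    ```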

    Currently, researchers and developers are exploring a technique called Explainable AI (XAI), which aims to help users better understand how AI models function by quantifying or visualizing outputs. For instance, in short-term weather forecasting, AI tools can analyze vast amounts of remote sensing observation data obtained every few minutes, thereby improving the prediction capabilities for severe weather disasters.

    Clearly explaining how results are achieved is crucial for assessing the validity and utility of predictions. For example, when predicting the likelihood and extent of fires or floods, such explanations can help humans decide whether to issue public warnings or use outputs from other AI models. In the field of Earth sciences, XAI attempts to quantify or visualize the characteristics of input data to better understand model outputs. Researchers need to examine these explanations and ensure their reasonableness.
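
    One widely used technique in this family is permutation importance, which quantifies how much each input feature drives model skill by shuffling that feature and measuring the drop in performance. The sketch below uses synthetic data with invented feature names; it illustrates the idea rather than any specific Earth-science workflow.

    ```python
    # Permutation importance on a toy classifier: the outcome depends on the
    # first two features, so the third should score near zero.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(3)
    X = rng.normal(size=(500, 3))                   # e.g. humidity, shear, cape
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

    model = RandomForestClassifier(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

    for name, imp in zip(["humidity", "shear", "cape"], result.importances_mean):
        print(f"{name}: importance {imp:.3f}")
    ```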

    AI Model Visualization

    Artificial intelligence tools are being used to assess environmental observations.

    3. Forge partnerships and foster transparency

    Researchers need to focus on transparency at every stage: sharing data and code, considering further testing to ensure reproducibility and repeatability, addressing risks and biases in methodologies, and reporting uncertainties. These steps require more detailed descriptions of methods. To ensure comprehensiveness, research teams should include experts who utilize various types of data and invite community members who provide data or may be affected by the research outcomes. For example, an AI-based project combined the traditional knowledge of the Tłı̨chǫ people in Canada with data collected through non-indigenous methods to identify the most suitable areas for aquaculture (see go.nature.com/46yqmdr).

    Aquaculture Project Image

    4. Sustain support for data curation and stewardship

    Interdisciplinary research requires that data, code, and software be reported in accordance with the FAIR principles: Findable, Accessible, Interoperable, and Reusable. Building trust in artificial intelligence and machine learning requires recognized, high-quality datasets, along with public disclosure of errors and their solutions.
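
    By way of illustration only, a FAIR-oriented metadata record might carry fields like the following; the identifier and values are placeholders, and real repositories use richer community schemas (DataCite, for example).

    ```python
    # Each field maps to one FAIR property; all values are placeholders.
    dataset_metadata = {
        "identifier": "doi:10.XXXX/placeholder",    # Findable: persistent ID
        "title": "Regional storm reports, 2012-2022",
        "access_url": "https://example.org/data",   # Accessible: resolvable URL
        "format": "NetCDF",                         # Interoperable: open format
        "license": "CC-BY-4.0",                     # Reusable: explicit license
        "provenance": "derived from national report archives; QC applied",
    }
    ```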

    The current challenge lies in data storage, as the widespread use of general repositories may lead to metadata problems, affecting data provenance tracking and automated access. Some advanced disciplinary research data repositories offer quality checks and supplementary information services, but this typically requires investment in manpower and time.

    Additionally, the article mentions issues such as funding support for repositories, limitations of different repository types, and insufficient demand for domain-specific repositories. Academic organizations and funding agencies should provide sustained financial investment to support and maintain appropriate data repositories.

    Researchers increasingly prefer general data repositories

    5. Look at long-term impact

    As artificial intelligence and machine learning become widespread across the sciences, it is crucial to focus on long-term impact, ensuring these technologies reduce social disparities, enhance trust, and actively include diverse opinions and voices.

    "How to use AI and how to use it well" has also been a hot topic in China's AI field in recent years.

    In the eyes of this year's NPC and CPPCC representatives, artificial intelligence is one of the most active areas of digital technology innovation. New technologies represented by generative AI (AIGC), large-scale pre-trained models, and knowledge-driven AI are unleashing new industry opportunities, and it is necessary to seize the "time window" of technological development.

    Lei Jun, founder, chairman and CEO of Xiaomi Group, proposed supporting the sci-tech innovation industrial chain by advancing the planning and layout of the bionic robotics industry. He also suggested accelerating the formulation of data security standards for the entire vehicle lifecycle to guide industrial development, and establishing a vehicle data sharing mechanism and platform to promote data utilization.

    Zhou Hongyi, founder of 360, expressed hopes to create a Chinese equivalent of the "Microsoft+OpenAI" partnership to lead breakthroughs in large model technologies and build an open innovation ecosystem through open-source crowdsourcing.

    Academician Zhang Boli recommended establishing a major special project for biopharmaceutical manufacturing, supporting R&D of key intelligent pharmaceutical technologies and equipment, and encouraging the development of biopharmaceutical equipment.

    These proposals demonstrate that delegates to China's Two Sessions are highly optimistic about the AI sector. Beyond its power as an enabling technology, we look forward to AI better assisting enterprise and social development under the principles of building trust and applying it cautiously.
