Research Ethics


This document outlines ICML standards for the ethical conduct of research. It complements our Peer-review Ethics policy, which focuses on the integrity of peer review, and our Code of Conduct, which focuses on professional conduct.

Authors submitting their work to ICML must follow the guidelines below, adapted from the NeurIPS Code of Ethics. Specifically, when there are risks directly associated with the proposed methods, methodology, application, or data collection and usage, authors are expected to make a reasonable effort to identify these risks and to provide a thoughtful discussion, including a rationale for their decisions. Authors are strongly encouraged to propose mitigation strategies for identified risks whenever feasible. However, it is understood that some risks, particularly those arising from general-purpose methods being applied in unforeseeable or speculative ways, may fall outside the reasonable scope of the research and need not be addressed comprehensively.

The following categories will be used for flagging potential ethical issues during the review process.

"Discrimination / Bias / Fairness Concerns"

Example: Papers about applications where bias, fairness, and discrimination are a concern (e.g., hiring algorithms) should acknowledge these risks and, ideally, include analysis that directly addresses the relevant area of concern, with the expectation that such analysis is feasible and within a reasonable scope.

"Inappropriate Potential Applications & Impact (e.g., human rights concerns)"

Example: Papers about applications that have a direct connection to human rights issues (e.g., weapons) should provide a thoughtful discussion of the risks of the application. Where appropriate and within the scope of the work, authors may also suggest reasonable recommendations for mitigating these risks.

"Responsible Research Practice (e.g., IRB, documentation, research ethics, participant consent)"

Example: While we acknowledge that standards for IRB approval vary across borders and institutions, research involving human subjects should provide evidence that it adhered to the authors' home institution's procedures for obtaining IRB approval or was eligible for an exemption.

"Privacy and Security (e.g., personally identifiable information)"

Example: Papers that rely on data that includes personally identifiable information should make reasonable efforts to ensure that individuals are not identifiable in the research outputs.

"Legal Compliance (e.g., EU AI Act, GDPR, copyright, terms of use)"

ICML is an international conference, and legal requirements will vary. Where appropriate, papers should provide evidence that local laws and regulations were followed. The issue of fair use of copyrighted material for training generative AI models is controversial; submissions that use datasets containing copyrighted material should acknowledge and address this in the impact statement.

It is the responsibility of authors to ensure compliance with the EU AI Act, where applicable. The EU AI Act sets out a list of prohibited AI practices that pose an "unacceptable risk." The ban on AI systems that pose an unacceptable risk comes into force on February 2, 2025. Banned AI systems include:

  • AI systems that manipulate individuals' decisions subliminally or deceptively, causing or reasonably likely to cause significant harm.
  • AI systems that exploit vulnerabilities such as age, disability, or socio-economic status to influence behavior, causing or reasonably likely to cause significant harm.
  • AI systems that evaluate or classify individuals based on their social behavior or personality characteristics, causing detrimental or unfavorable treatment.
  • AI systems that assess or predict the risk of an individual committing a criminal offense based on their personality traits and characteristics.
  • AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • AI systems that infer emotions in workplaces or education centers (except where this is needed for medical or safety reasons).
  • AI systems that categorize individuals based on their biometric data to infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.
  • AI systems that collect "real time" biometric information in publicly accessible spaces for the purposes of law enforcement (except in very limited circumstances).

"Research Integrity Issues (e.g., plagiarism, fraud, collusion rings, prompt injection, etc.)"

Flagged issues in this category will bypass the ethics review process and be escalated to the program chairs for adjudication.

For a fuller discussion of these categories, see the NeurIPS Code of Ethics.
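The privacy guidance above asks authors to make reasonable efforts to ensure individuals are not identifiable in research outputs. One common first step is to pseudonymize direct identifiers before analysis. The sketch below is illustrative only, with hypothetical field names (`name`, `email`, `age`), and is not part of the ICML policy; note that salted hashing is pseudonymization, not anonymization, since quasi-identifiers may still permit re-identification.

```python
import hashlib

# Hypothetical direct-identifier fields for illustration; real datasets differ.
DIRECT_IDENTIFIERS = {"name", "email", "phone"}
SALT = "per-project-secret-salt"  # in practice, keep this out of version control

def pseudonymize(record: dict) -> dict:
    """Replace direct identifiers with short salted hashes; keep other fields.

    This is pseudonymization, not anonymization: quasi-identifiers
    (age, location, etc.) may still allow re-identification.
    """
    out = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            digest = hashlib.sha256((SALT + str(value)).encode()).hexdigest()
            out[key] = digest[:12]  # short, stable token
        else:
            out[key] = value
    return out

record = {"name": "Jane Doe", "email": "jane@example.org", "age": 34}
clean = pseudonymize(record)
print(clean["age"])  # quasi-identifier passes through unchanged
```

Whether such measures are sufficient depends on the dataset and release context; the policy's standard is "reasonable efforts," not a specific technique.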

Executive Summary

This article outlines the ICML standards for ethical conduct of research, emphasizing the importance of considering risks associated with proposed methods, methodology, application, or data collection and usage. Authors are expected to identify potential risks, provide thoughtful discussions, and propose mitigation strategies where feasible. The document flags potential ethical issues under six categories: discrimination/bias/fairness concerns, inappropriate potential applications and impact, responsible research practice, privacy and security, legal compliance, and research integrity issues. The article highlights the need for authors to ensure compliance with local laws and regulations, such as the EU AI Act and GDPR, and provides examples of responsible research practices.

Key Points

  • ICML standards for ethical conduct of research emphasize the consideration of risks associated with proposed methods and methodology
  • Authors are expected to identify potential risks and provide thoughtful discussions and mitigation strategies
  • Six categories of potential ethical issues are flagged: discrimination/bias/fairness concerns, inappropriate potential applications and impact, responsible research practice, privacy and security, legal compliance, and research integrity issues

Merits

Strength in Emphasizing Risk Consideration

The article highlights the importance of considering risks associated with proposed methods and methodology, which is a crucial aspect of responsible research practice.

Clear Flagging of Potential Ethical Issues

The document provides clear and specific examples of potential ethical issues under four categories, making it easier for authors to understand the expectations and requirements.

Demerits

Limited Guidance on Out-of-Scope Risks

The policy leaves "reasonable scope" undefined: risks arising from general-purpose methods being applied in unforeseeable or speculative ways need not be addressed comprehensively, which gives authors considerable latitude in deciding which risks to discuss.

Expert Commentary

This article provides a clear and comprehensive outline of the ICML standards for ethical conduct of research. The emphasis on considering risks associated with proposed methods and methodology is a crucial aspect of responsible research practice. However, the article acknowledges that some risks may fall outside the reasonable scope of the research, which may limit its applicability in certain contexts. Overall, the article provides a useful framework for authors and researchers to consider and address potential risks and ethical issues in their research.

Recommendations

  • Authors should take a proactive approach to considering and addressing potential risks associated with their research
  • Policy-makers should develop and implement regulations that ensure responsible research practices and consideration of risks
