CVPR 2026 Reviewer Guidelines
Thank you for taking the time to review for CVPR 2026! To maintain a high-quality technical program, we rely heavily on the time and expertise of our reviewers. This document explains what is expected of all members of the Reviewing Committee for CVPR 2026.

Contents

- What's New for Reviewers at CVPR 2026
- Reviewing Process
- Reviewing Timeline
- Responsible Reviewing Policy
- Reviewing Deadline Policy
- How to Write Good Reviews
- What Reviewers Should Look Out For in Papers
- Ethics for Reviewing Papers
- FAQs for Reviewing Papers

What's New for Reviewers at CVPR 2026?

To improve the review process and uphold the conference's high standards, following other recent conferences, CVPR 2026 will strictly enforce a Responsible Reviewing Policy and a Reviewing Deadline Policy. Any reviewer whose review is deemed "highly irresponsible" will face desk rejection of all papers on which they are an author, at the discretion of the PCs. Likewise, any reviewer who fails to submit their assigned reviews by the deadline is subject to desk rejection of all papers on which they are an author. Please see the sections below on the Responsible Reviewing Policy and the Reviewing Deadline Policy for more details.

To improve future review quality, the CVPR 2026 Program Chairs (PCs) plan to share the CVPR 2026 reviewing metadata privately with the PCs of other related future venues. This data will include the OpenReview ID of each reviewer, as well as statistics on review quality and timeliness.

CVPR 2026 has updated the LLM policy for reviewers to include statements regarding prompt injection. See below for details.

Reviewing Process

There are four groups of people involved in the reviewing process: Program Chairs (PCs), Senior Area Chairs (SACs), Area Chairs (ACs), and Reviewers. At CVPR 2026, we have 6 PCs and approximately 900 SACs and ACs.
In short, reviewers assess the technical merits of a submission; ACs combine multiple assessments into a submission recommendation; SACs verify these recommendations to ensure a high-quality and consistent paper review process; and PCs facilitate the process.

CVPR reviewing is double-blind: authors do not know the names of the area chairs or reviewers for their papers, and the area chairs/reviewers are not told the names of the authors. PCs have visibility into the entire process, including the names of authors, reviewers, and ACs for each submission. Note that PCs at CVPR 2026 cannot submit a paper to CVPR 2026.

There are several steps in the reviewing process:

1. Papers are assigned to area chairs (ACs).
2. ACs and SACs suggest multiple reviewers per paper, with the help of OpenReview Matching. SACs verify the suggestions.
3. Papers are assigned to reviewers using an optimization algorithm that takes into account AC suggestions, paper load, and conflict constraints.
4. Reviewers submit initial reviews.
5. SACs and ACs check the quality of reviews and assign emergency reviewers as necessary. Papers authored by reviewers who have not submitted their reviews on time, or whose reviews are deemed highly irresponsible, will be desk rejected at the discretion of the PCs.
6. Authors receive the reviews and have the option of submitting a rebuttal to address any concerns the reviewers may have.
7. Reviewers and ACs discuss each paper, based on all reviews, the rebuttal, and the paper itself. Reviewers update their ratings and justifications.
8. ACs draft meta-reviews.
9. ACs discuss their meta-reviews and tentative decisions with two other ACs in an "AC Triplet". They discuss borderline papers and check meta-reviews for quality. In addition to accept/reject decisions, AC triplets provide nominations for spotlights, orals, and awards to the PCs.
10. PCs and SACs check meta-reviews, decisions, and nominations.
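The reviewer-assignment step described above treats assignment as an optimization over AC suggestions, reviewer load, and conflict constraints. As a purely illustrative sketch (not the actual OpenReview matcher, which solves a larger global optimization), a greedy version of such a constrained matching might look like the following; the function name, affinity scores, and reviewer names are all hypothetical:

```python
# Hypothetical sketch of reviewer assignment as constrained matching.
# The real OpenReview matcher optimizes globally over affinity scores,
# bids, conflicts, and load limits; this greedy version only illustrates
# the role of the constraints.

def assign_reviewers(affinity, conflicts, max_load, reviewers_per_paper):
    """Greedily give each paper its highest-affinity reviewers,
    skipping conflicted reviewers and reviewers at capacity."""
    # Track how many papers each reviewer has been assigned so far.
    load = {r: 0 for scores in affinity.values() for r in scores}
    assignment = {}
    for paper, scores in affinity.items():
        ranked = sorted(scores, key=scores.get, reverse=True)
        chosen = []
        for r in ranked:
            if r in conflicts.get(paper, set()):
                continue  # conflict constraint: skip this reviewer
            if load[r] >= max_load:
                continue  # load constraint: reviewer at capacity
            chosen.append(r)
            load[r] += 1
            if len(chosen) == reviewers_per_paper:
                break
        assignment[paper] = chosen
    return assignment

# Toy example: revA is conflicted with paper2, and each reviewer
# can take at most one paper.
affinity = {
    "paper1": {"revA": 0.9, "revB": 0.5, "revC": 0.4},
    "paper2": {"revA": 0.8, "revB": 0.7, "revC": 0.2},
}
conflicts = {"paper2": {"revA"}}
print(assign_reviewers(affinity, conflicts, max_load=1, reviewers_per_paper=2))
```

A production matcher would instead solve this as a global assignment problem (e.g., min-cost matching over all papers at once), but the constraints play the same role.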
Finally, PCs make the final determination for accepting/rejecting papers, assign papers to the oral/spotlight/poster categories, and draft a list of nominations to be sent to the award committee.

Reviewing Timeline

  Date                           Milestone
  Nov 6, 2025                    Abstract deadline
  Nov 13, 2025                   Paper submission deadline
  Dec 15, 2025                   Papers assigned to reviewers
  Dec 15, 2025 - Jan 12, 2026    Review period
  Jan 12, 2026                   Reviews due
  Jan 12-22, 2026                Emergency review period
  Jan 22, 2026                   Reviews released to authors
  Jan 29, 2026                   Author rebuttals due
  Jan 30 - Feb 5, 2026           AC-reviewer discussions
  Feb 5, 2026                    Final reviewer recommendations due
  Feb 20, 2026                   Final decisions to authors

Responsible Reviewing Policy

At CVPR 2026, reviewers are expected to provide fair and thoughtful reviews that demonstrate meaningful engagement with the submission. Highly irresponsible reviewing includes:

- unreasonably short reviews that fail to reference specific technical content from the paper;
- reviews containing only generic comments that could apply to any submission, without addressing the paper's actual contribution;
- reviews with demonstrable factual errors about the paper's methodology or results that indicate a superficial reading; and
- reviews generated by, or substantially assisted by, Large Language Models.

This policy does not penalize reviewers for holding different technical opinions, missing minor details, writing concise but substantive reviews, or having legitimate disagreements with other reviewers or Area Chairs.

If a review is flagged as highly irresponsible, it will undergo an oversight process managed by the Program Chairs (PCs). Any reviewer whose review is deemed "highly irresponsible" will face desk rejection of all papers on which they are an author, at the discretion of the PCs.

Reviewing Deadline Policy

At CVPR 2026, reviewers are also expected to provide timely reviews.
Historically, previous CVPR conferences (and ICCV, ECCV, and NeurIPS, among others) faced challenges with some reviewers failing to meet the review submission deadlines. It came to be accepted that there was an unofficial grace period after the reviewing deadline. In some cases, reviewers failed to respond to multiple reminders and did not submit their reviews at all. This required Area Chairs (ACs) to follow up diligently with late reviewers and to assign emergency reviewers to ensure each paper received a minimum of three reviews. Over the years, this has added to the already large workload and stress of the conference organizers.

To improve the review process and uphold the conference's high standards, CVPR 2026 will strictly enforce the reviewing deadline. Any reviewer who fails to submit their assigned reviews by the deadline will face desk rejection of all papers on which they are an author, at the discretion of the PCs. This policy aims to ensure fairness and accountability across the reviewing process while reducing the burden on ACs and other conference organizers.

Multiple reminder emails will be sent to all reviewers to submit their reviews in a timely manner. Additionally, the co-authors of reviewers who have not submitted their reviews will be notified that their submission may be desk rejected if all authors do not submit their reviews in time.

How to Write Good Reviews

Check your papers

As soon as you receive your reviewing assignment, please go through all the papers to make sure that (a) you have no obvious conflict of interest (see "Avoid Conflicts of Interest" below) and (b) you feel comfortable reviewing the papers assigned. If issues with either of these points arise, please contact the Area Chair right away, as instructed in the detailed emails you will receive during the process.
Know the policies

Please read the Author Guidelines carefully to familiarize yourself with all official policies the authors are expected to follow. If you come to believe that a paper may be in violation of any of these policies, please contact the Chairs. In the meantime, proceed to review the paper, assuming no violation has taken place.

Be Mindful

Each paper that is accepted should be technically sound and make a contribution to the field. Look for what is good or stimulating in the paper, and what advance in knowledge it has made. We recommend that you embrace novel, brave concepts, even if they have not been tested on many datasets. For example, the fact that a proposed method does not exceed state-of-the-art accuracy on an existing benchmark dataset is not grounds for rejection by itself. Rather, it is important to weigh both the novelty and the potential impact of the work alongside the reported performance. Minor flaws that can be easily corrected should not be a reason to reject a paper.

Be Detailed

Take the time to write good reviews. Ideally, you should read a paper and then think about it over the course of several days before you write your review. While length alone does not make a review good, short reviews tend to be cryptic and therefore unhelpful to authors, other reviewers, and Area Chairs. If you have agreed to review a paper, you should take enough time to write a thoughtful and detailed review. Your main critique of the paper should be written as a list of strengths and weaknesses. You can use bullet points here, but also explain your arguments; bullet lists with one short phrase per bullet are NOT a detailed review. Your detailed review, more than your score, will help the authors, fellow reviewers, and Area Chairs understand the basis for your recommendation, so please be thorough.

Be Specific

Be specific about novelty.
Claims in a review that the submitted work "has been done before" MUST be backed up with specific references and an explanation of how closely they are related. At the same time, for a positive review, be sure to summarize in the Strengths section which novel aspects are most interesting.

Be specific when you suggest that the writing needs to be improved. If a particular section is unclear, point it out and give suggestions for how it can be clarified. In the discussion of related work and references, simply saying "this is well known" or "this has been common practice in the industry for years" is not sufficient: cite specific publications, including books or public disclosures of techniques.

Do not reject papers solely because they are missing citations of, or comparisons to, prior work that has only been published without review (e.g., arXiv or technical reports). Refer to the FAQ below for more details on handling arXiv prior art.

Give Feedback to Improve Submissions

Please include specific feedback on ways the authors can improve their papers. Be generous about giving the authors new ideas for how they can improve their work. You might suggest a new technical tool that could help, a dataset that could be tried, an application area that might benefit from their work, or a way to generalize their idea to increase its impact.

If you think the paper is out of scope for CVPR's subject areas, clearly explain why in the review, then suggest other publication venues (journals, conferences, workshops) that would be a better match for the paper. However, unless the area mismatch is extreme, you should keep an open mind, because we want a diverse set of good papers at the conference.

Be Mindful of Your Tone

The tone of your review is important. A harshly written review will be resented by the authors, regardless of whether your criticisms are true.
If you take care, it is always possible to word your review constructively while staying true to your thoughts about the paper. Avoid referring to the authors in the second person ("you"). It is best to avoid the term "the authors" as well, because you are reviewing their work and not the people. Instead, use the third person ("the paper"). Referring to the authors as "you" can be perceived as confrontational, even if you do not mean it that way.

Finally, keep in mind that a thoughtful review benefits not only the authors but also yourself. Your reviews are read by other reviewers and especially the Area Chairs. Being a helpful reviewer will generate goodwill towards you in the research community, and may even help you win an Outstanding Reviewer award.

What Reviewers Should Look Out For in Papers

Check for Reproducibility

To improve reproducibility in AI research, we highly encourage authors to voluntarily submit their code as part of the supplementary material, especially if they plan to release it upon acceptance. Reviewers may optionally check this code to ensure the paper's results are reproducible and trustworthy, but are not required to. All code/data should be reviewed confidentially, kept private, and deleted after the review process is complete. We expect (but do not require) that the accompanying code will be submitted with accepted papers.

Check for Data Contribution

Datasets are a significant part of computer vision research. If a paper claims a dataset release as one of its scientific contributions, it is expected that the dataset will be made publicly available no later than the camera-ready deadline, should the paper be accepted.

Check for Attribution of Data Assets

Authors are advised that they need to cite data assets used (e.g., datasets or code) much like papers. As a reviewer, please carefully check whether a paper has adequately cited the data assets it uses, and comment in the corresponding field of the review form.
Check for Use of Personal Data and Human Subjects

If a paper uses personal data or data from human subjects, the authors must have ethics clearance from an institutional review board (IRB, or equivalent) or clearly describe that ethical principles have been followed. If there is no description of how ethical principles were ensured, or if there are GLARING violations of ethics (regardless of whether they are discussed), please inform the Area Chairs and the Program Chairs, who will follow up on each specific case. Reviewers should not attempt to deal with such issues directly by themselves.

IRB reviews (for the US) or the appropriate local ethics approvals are typically required for new datasets in most countries. It is the dataset creators' responsibility to obtain them. If the authors use an existing, published dataset, we encourage, but do not require, them to check how the data was collected and whether consent was obtained. Our goal is to raise awareness of possible issues that might be ingrained in our community; thus, we would like to encourage dataset creators to provide this information to the public. In this regard, if a paper uses an existing public dataset released by other researchers or research organizations, we encourage, but do not require, the authors to include a discussion of IRB-related issues in the paper. Reviewers, hence, should not penalize a paper if such a discussion is NOT included.

Check for Discussion of Negative Societal Impact

The CVPR community has so far not put as much emphasis on awareness of possible negative societal impact as other AI communities have, but this is an important issue. We aim to raise awareness without (yet) introducing a formal policy. As a result, authors are encouraged to include a discussion on potential negative societal impact.