Workshop Date: June 3-4, 2026
Workshop Location: Colorado Convention Center, Denver, CO (Room TBD)
Since our inaugural workshop at CVPR 2024, generative foundation models have advanced at a striking pace: photorealistic image synthesis with FLUX and Imagen 3, long-form video generation via Sora and Runway Gen-3, multimodal reasoning in GPT-4o and Gemini, and production-ready 3D content creation. Meanwhile, reinforcement-learning-based post-training (RLHF, GRPO, DPO) has become standard for aligning these models, introducing new evaluation challenges around reward hacking, distributional shift, and alignment verification.
Yet evaluation has struggled to keep pace. Automatic metrics often miss nuanced human preferences, benchmarks saturate faster than they can be replaced, and instruction-following, long-context reasoning, and cross-modal generation demand fundamentally new assessment paradigms.
The 2nd Workshop on Evaluation for Generative Foundation Models at CVPR 2026 brings together researchers from academia and industry to share best practices, discuss ongoing efforts, and work toward more reliable, scalable evaluation protocols. Spanning metrics, benchmarks, reward models, and human-in-the-loop methods, the workshop aims to ensure the safe and aligned deployment of next-generation foundation models.
Submit your paper here
Detailed Call for Papers
Assistant Professor, University of Washington
Meta FAIR
Meta FAIR
Meta GenAI
Samsung Labs
Meta FAIR
Assistant Professor, National University of Singapore
Amazon (Primary Contact)
University of Washington
Amazon AGI
Amazon
Amazon
Meta
Meta
© 2026 EVGENFM2026
We thank Jalpc for the Jekyll template.
The Microsoft CMT service was used to manage the peer-review process for this workshop. This service was provided free of charge by Microsoft, which bore all expenses, including costs for Azure cloud services as well as for software development and support.