Caroline Bishop
Apr 08, 2026 17:45
OpenAI announces a new fellowship program for external researchers focused on AI safety and alignment, running September 2026 through February 2027.
OpenAI is opening its doors to outside researchers with a new Safety Fellowship program aimed at advancing independent work on AI alignment and safety challenges. Applications are now open, with a May 3 deadline.
The five-month program runs from September 14, 2026 through February 5, 2027, targeting researchers, engineers, and practitioners who want to tackle safety questions affecting both current and future AI systems. OpenAI has partnered with Constellation to provide workspace in Berkeley, though remote participation is an option.
What OpenAI Wants
The company outlined priority research areas including safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains. It is specifically seeking work that is "empirically grounded, technically strong, and relevant to the broader research community."
Fellows will not get internal system access, a notable limitation, but will receive API credits, compute support, a monthly stipend, and mentorship from OpenAI staff. The expectation is clear: produce something tangible by the program's end, whether that is a research paper, benchmark, or dataset.
Who Should Apply
OpenAI is casting a wide net on backgrounds. Computer science is the obvious fit, but the company is also welcoming candidates from social science, cybersecurity, privacy, and human-computer interaction fields. The company explicitly stated that it will "prioritize research ability, technical judgment, and execution over specific credentials."
Letters of reference are required. Successful candidates will be notified by July 25.
The Bigger Picture
This fellowship arrives as AI safety concerns have moved from academic debate into mainstream regulatory discussion. OpenAI has faced criticism over the years for allegedly deprioritizing safety research in favor of capability development, a tension that led to high-profile departures from its safety team.
The program represents an attempt to cultivate external safety research talent while potentially deflecting some of that criticism. Whether it signals a genuine shift in priorities or serves primarily as an optics play remains to be seen.
For researchers interested in AI safety work with access to OpenAI resources and mentorship, applications close May 3 on the program's official page. Questions can be directed to openaifellows@constellation.org.
Image source: Shutterstock
