WikiBit 2026-04-09 05:52
The AI news out of OpenAI this week has a sharp edge: the company launched a paid Safety Fellowship offering $3,850 weekly stipends to external researchers studying what could go wrong with advanced AI — announced within hours of a New Yorker investigation reporting that OpenAI had dissolved its internal safety teams and quietly removed the word “safely” from its IRS mission statement.
OpenAI announced the fellowship on April 6 as “a pilot program to support independent safety and alignment research and develop the next generation of talent.” The program pays $3,850 per week, over $200,000 annualized, plus roughly $15,000 in monthly compute and mentorship from OpenAI researchers. Fellows work from Constellation’s Berkeley workspace or remotely, and applications close May 3. The fellowship is not limited to AI specialists — OpenAI is recruiting from cybersecurity, social science, and human-computer interaction alongside computer science.
The timing is the story. Ronan Farrow’s investigation in The New Yorker, published the same day, documented that OpenAI had dissolved three consecutive internal safety organizations over 22 months. The superalignment team was shut down in May 2024 after co-leads Ilya Sutskever and Jan Leike departed. Leike wrote on his way out that “safety culture and processes have taken a backseat to shiny products.” The AGI Readiness team followed in October 2024. The Mission Alignment team was disbanded in February 2026 after just 16 months. The New Yorker also reported that when a journalist asked to speak with OpenAI’s existential safety researchers, a company representative replied: “What do you mean by existential safety? That’s not, like, a thing.”
The fellowship explicitly does not replace internal infrastructure. Fellows receive API credits and compute resources but no system access, positioning the program as arms-length research funding rather than a rebuild of the dissolved teams.
What the Fellowship Requires Fellows to Produce
The research agenda spans seven priority areas: safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains. By the program’s end in February 2027, each fellow must produce a substantive output — a paper, benchmark, or dataset. Specific academic credentials are not required; OpenAI stated it prioritizes research ability, technical judgment, and execution capacity.
Why This Matters Beyond the AI Industry
As crypto.news has reported, confidence in frontier AI companies’ stated safety commitments is a market signal that affects capital allocation across AI infrastructure, AI tokens, and the DePIN and AI agent protocols sitting at the intersection of crypto and artificial intelligence. OpenAI’s spending trajectory and the credibility of its operational priorities are tracked closely by investors evaluating the AI infrastructure sector — a sector with growing overlap with blockchain-based systems. Whether external fellows working without internal access can meaningfully influence model development is a question the first cohort’s research will begin to answer in early 2027.
Disclaimer:
The views in this article only represent the author's personal views, and do not constitute investment advice on this platform. This platform does not guarantee the accuracy, completeness and timeliness of the information in the article, and will not be liable for any loss caused by the use of or reliance on the information in the article.