This is an AI-generated image created with Midjourney by Molly-Anna MaQuirl
OpenAI CEO Sam Altman has reshuffled the company's teams, reassigning Aleksander Madry, a top AI safety executive, to a new research role focused on AI reasoning. The move reflects OpenAI's stated commitment to addressing emerging risks in the artificial intelligence era.
Madry previously led Preparedness, the team responsible for tracking, forecasting, evaluating, and helping protect against catastrophic risks associated with cutting-edge AI models. He is now working on a new research project, with executives Joaquin Quinonero Candela and Lilian Weng taking over oversight of the Preparedness team. Madry will continue to contribute to core AI safety work in his new role.
According to MIT's website, Madry also serves as a co-lead faculty member of the MIT AI Policy Forum and director of MIT's Center for Deployable Machine Learning, roles from which he is currently on leave.
The reorganization came just before a group of Democratic senators sent Altman a letter raising concerns and questions about how OpenAI addresses emerging safety challenges and cybersecurity threats. OpenAI did not immediately respond to requests for comment, and the senators have asked for specific information about the company's safety practices by August 13.
The growing capabilities of AI chatbots have heightened safety concerns, prompting Madry's position change. The adjustment aligns with OpenAI's commitment to improving safety and governance in response to these emerging problems.
Earlier this month, Microsoft gave up its observer seat on OpenAI's board, expressing confidence in the company's new board, which places a strong emphasis on AI safety and governance. Even so, a group of current and former employees recently published an open letter raising concerns about insufficient industry oversight to ensure the responsible development of artificial intelligence technologies.