AI Safety and Governance
Author: Grant Fellows
Introduction
Some experts think artificial general intelligence (AGI) could emerge as soon as 2030, driven by rapid progress in compute, algorithms, and training techniques. Such a breakthrough could bring immense gains but also carries potentially existential risks, including misaligned power-seeking AI, catastrophic misuse such as bioweapons, extreme power concentration, and gradual human disempowerment. EA organizations like 80,000 Hours and the Center for AI Safety argue these risks are neglected and urgent, making contributions to the field of AI safety high-impact.
How engineers can help
Physical and hardware engineers can help address AI risks by understanding the hardware AI systems are trained on, and by researching compute scaling trends and supply chains to inform governance and safety planning (a simple worked example of this kind of analysis is sketched below). Beyond hardware expertise, an engineering mindset is valuable in AI safety technical research. Engineering will also be crucial for securing critical infrastructure against AI-enabled threats and for mitigating risks that AI may amplify, such as biological risks.
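As a concrete illustration of compute scaling analysis, researchers often use the common approximation that training a dense transformer takes roughly 6 × N × D floating-point operations, where N is the parameter count and D is the number of training tokens. The sketch below shows how an engineer might use that rule of thumb to compare training runs; the specific model sizes and token counts are hypothetical inputs chosen for illustration, not figures from this page.

```python
# Minimal sketch: estimating training compute from model scale.
# Uses the common approximation FLOPs ~ 6 * N * D for dense transformers
# (N = parameters, D = training tokens). All inputs below are
# hypothetical, chosen only to illustrate how scaling trends are compared.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

# Hypothetical model scales: (parameters, training tokens)
models = {
    "small":  (1e9,  2e10),   # 1B params,  20B tokens
    "medium": (7e10, 1.4e12), # 70B params, 1.4T tokens
    "large":  (1e12, 2e13),   # 1T params,  20T tokens
}

for name, (n, d) in models.items():
    print(f"{name:>6}: ~{training_flops(n, d):.1e} FLOPs")
```

Estimates like these let analysts relate frontier training runs to chip supply and data-center capacity, which is one reason hardware expertise feeds directly into compute governance.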
Resources to check out
Profile on becoming an expert in AI hardware
Engineered for Impact podcast on AI compute governance
aisafety.com, a resource hub for the AI safety community, including a job board and a directory of organizations
These pages are written by volunteers. You can improve them by contributing on GitHub. Check out the README for details.