
Highlights of the Week
Last week, my focus was drawn primarily to regulation, compliance, and alignment in the AI sector, largely triggered by OpenAI’s announcement of its Superalignment task force. That announcement led me to the EU’s AI Act, the UK draft that uses the AI Act as a foundation, and perspectives from those who caution against hasty action before the implications are fully understood. For the more technologically inclined, we conclude with a comprehensive survey of Large Language Models (LLMs).
Alignment
On July 5, OpenAI announced its Superalignment task force in the post Introducing Superalignment, which aims to address the critical issue of AI alignment in anticipation of Artificial General Intelligence (AGI). OpenAI has committed 20% of the compute it has secured to date, over the next four years, to this challenge, with the goal of iteratively aligning superintelligence with human intentions. While the announcement acknowledges the importance of the issue and appears to be a serious effort, it leaves many questions open. I’ve written a post with my interpretation of the announcement, highlighting several areas OpenAI has yet to address.
AI Regulation
Two source documents particularly caught my attention regarding AI regulation:
- The UK Parliament’s draft, Compromise Amendments (DRAFT - pdf), is based on the EU’s AI Act. A closer look at the differences between the EU and UK versions reveals that the UK is less aggressive in governing AI use and ensuring compliance. Since many of the major centers of AI research and product development outside the US reside in the UK, the developments over the coming months will be worth watching.
- The whitepaper Frontier AI Regulation: Managing Emerging Risks to Public Safety is a must-read. Authored by representatives of think tanks, public and private research institutions, legal firms, and major AI technology providers, it outlines three building blocks for regulating frontier AI models: standard-setting processes, registration and reporting requirements, and mechanisms to ensure compliance with safety standards.
Jeremy Howard’s compelling piece, AI Safety and the Age of Dislightenment, is a plea to the authors of the EU AI Act and the Frontier AI Regulation whitepaper. He warns against rushing to regulate AI technology and advocates preserving the Enlightenment ideals of openness and trust.
A Survey of Large Language Models
To reassure you that AI regulation and compliance haven’t entirely eclipsed my focus on AI technology, I’d like to highlight the paper A Survey of Large Language Models. "Comprehensive" is an understatement here; at certain points I had to set it aside and take a breather. It’s an excellent reference work, though its currency may be fleeting given the field’s rapid pace. The section on prompt engineering was particularly insightful.