Navigating the Waters of AI Regulation: Towards a Safer Future

Captain’s Blog – Star Date: 2-16-2024:

This week saw the release of the EU AI Act, a milestone in efforts to guide responsible AI practices. AI is the new fire, and data is the new oil. Like fire, artificial intelligence offers immense potential benefits alongside inherent dangers. Society has regulations to manage our use of fire, but can we genuinely keep pace with AI’s exponential advancement?

The Limitations of Traditional Regulation

Focusing solely on regulating how humans use AI may become unsustainable as systems grow smarter and less dependent on our commands. This isn’t a call for defeatism, but an acknowledgment that conventional regulatory approaches may lag behind. It’s akin to applying the same fire-prevention strategies to gasoline that we use for firewood.

Transparency and Decentralization

Open-source code, as advocated by Meta’s Zuckerberg, promotes security but doesn’t fully address bias risks. AI bias is as concerning as any data breach. Ben Goertzel’s call for decentralized AGI development makes sense: to achieve truly “general” intelligence, an AGI shouldn’t reflect a single company’s or country’s biases but, ideally, a global dataset with safeguards against misuse.

“If I Only Knew Then What I Know Now”: Human Alignment and AI Self-Regulation

While we should pursue reasonable frameworks like the EU AI Act, we also need to supply ethical examples through our own behavior, to truly embed ethical principles directly into AI development, especially in the age of self-learning systems. Imagine AIs that “know” data best practices before becoming self-governing. The sooner an AI acquires an ethically sound foundation, the more likely it is to reach AGI and make future decisions that prioritize the well-being of all humans, from Afghanistan to Zimbabwe, rather than “learning the hard way.”

Conclusion

Regulation of AI development and use will remain vital, especially through collaborative efforts by organizations like NIST and ISO. Yet let’s not waste time attempting to regulate AGI itself. Narrow AIs tend to go narrowly out of control; AGI will be generally out of control. And the more I think about it, that may be a very good thing. Imagine a worst-case scenario where 100% of humans think it’s fine to pollute the oceans, but AGI sees that not only will all sea life suffer, so will all humans! Just because we vote on something doesn’t make it logical. We humans don’t always know what’s good for us. I may want to eat that Philly cheesesteak, but my AI assistant suggests otherwise.

Live Long & Prosper,
Larry
