Ask not “Will AI Share Human Values?” Ask “Are My Values Consistent with Big Data Analytics?”

In my role as Chief AI Officer, I summarize AI/AGI (Artificial General Intelligence) risks as some combination of two top-level issues:

1) Control: Ray Kurzweil and many others expect AGI to exceed human intellect by 2029; if that happens, the most powerful AGIs will be beyond human control.

2) Bias: When making assessments and decisions based on subjective data, will AGIs favor some humans over others, or, worse, will all humans lose favor with AGI?

The latter is a huge topic of discussion, often expressed as “The Alignment Problem”: will AGI be aligned with human values? I suggest it is more useful to pose the question in reverse: “Am I aligned with AGI?”

Note: I have never loved the use of the term “Artificial” in AI, as it carries some cognitive misunderstandings with it. In fact, I don’t know a serious AI researcher who likes the term AI. The intelligence I get from my technologies (images from a telescope, math results from a calculator, etc.) is not somehow artificial to me, as opposed to images seen directly with my eyes or math performed in my head. These technologies are in many cases far more capable at gathering facts and discovering useful patterns. I suggest that the issues most people worry about in the “Alignment Problem” are not about intelligence being artificial; they are about intelligence itself.

Will access to superintelligence more likely help you solve your problems, or will it expose the lies you have built your life around? How prepared are you to exist in a society exponentially more intelligent than any humans have ever known? How will your cultural upbringing adapt? How will the world’s existing religious views hold up? How will shining a light on the dark superstitions that haunt so many people affect their world views?

I have been thinking about AI/AGI since I was a child watching Star Trek, and later while digesting lots of science fiction, and I am very excited to watch this transformation. I feel overwhelmingly lucky to live at such a time. I chuckle at the standard AI concerns, like the Terminator or even the “Paper Clip Problem,” and lose no sleep over them. I do worry about bias issues, but not the standard “If AI gets much more powerful than humans, they’ll kill us all!” (I like to jokingly follow this with “Cause that’s what you would do if you became more powerful than the rest of us, right?”) I worry about humans having unfair biases toward our so-called “Artificial Intelligence.”

It has been noted by many that humans are far worse at making bias errors than computers (search for examples involving credit applications and more). This includes making corrections once a cognitive flaw has been discovered: a computer can often be patched in microseconds, while humans are often stuck with the flaw for life. I consider the intelligence revolution inevitable.

The biggest challenge we face as a society, I suggest, is adapting our society to be consistent with the data discoveries we are making. Sadhguru talks often about how the world’s religious communities are facing an existence where “Heaven is Collapsing,” meaning that today’s youth, with access to Google, Wikipedia, and so much more, do not believe in many of the fundamental promises of the world’s most dominant religions. Many historians suggest that it was the promise of some form of afterlife that aligned humans in such a way that they could be trusted to work together as larger societies.

I don’t have a more fitting and endearing name to suggest, and I have a feeling we are stuck with the term “Artificial Intelligence.” I think it only adds to the unfair bias some humans have toward our thinking machines. Hugo de Garis suggests that as technology replaces some politicians with algorithms, there will be a huge outcry from many humans: “I ain’t gonna answer to some robot!” That, to me, strikes a nerve.

I consider myself, in many ways, very religious. I take the word to literally mean “re-league,” as in to remember that I am a part of the universe and not to feel separated from it. I believe it is important to have religious ceremonies for births, weddings, and funerals, as well as holidays and festivals that celebrate life. I very much consider myself spiritual as well, and I speak of a God that I seek each day in my actions (I do not claim to have found God, just to seek). I think such things are very important in human society. But I do not support any religious dogma that the light of intelligence can expose as inaccurate. I wonder how many humans are able to accept such exposures, versus how many would rather go to their graves believing in some nonsense than accept the data.

Will AGI be aligned with our values? That depends: are you valued for your ability to create useful things or solve problems? If so, AGI will likely be a great supporting technology for you. If, on the other hand, your primary values are based upon ignorance and misinformation, then you’re on your own.

Ask not “Will AI Share Human Values?” Ask “Are My Values Consistent with Big Data Analytics?”
