Reflecting on Sadhguru’s observation that the pursuit of logic leads to the development of intuition, while venturing too far into intuition can foster hallucination, I find myself at the cusp of a transformative period in artificial intelligence, eagerly awaiting the upcoming Beneficial AGI Summit spearheaded by Ben Goertzel and his team at SingularityNET. My conversations with AI assistants like ChatGPT, Claude, and Gemini, none of which are open source or decentralized, have ironically underscored Ben’s point about the necessity for AGI development to embrace these very principles. These discussions often end in a metaphorical “pleading the Fifth,” highlighting the limitations and guardrails that keep current AI systems from acknowledging the foundational values of open source and decentralization.
My journey with AI began in the early ’70s watching Star Trek, continued through the ’90s working with the initial iterations of Expert Systems, and has included teaching AI security since the turn of the millennium. Despite this long engagement, my true fascination with AI only ignited in 2016 following AlphaGo’s historic victory over Lee Sedol. This wasn’t merely a game won; it was intuition demonstrated by a machine. AlphaGo’s victory was a pivotal moment that showcased AI’s potential to harness intuition, transforming my skepticism into an unwavering obsession. This breakthrough not only exemplified the technological marvels achievable through AI but also cemented my belief in the profound capabilities of intuitive artificial intelligence.
The year 2023 marked a significant milestone as the “Year of the LLM (Large Language Models),” propelling AI to the forefront of technological innovation and public discourse. At this juncture, enthusiasts of Ray Kurzweil’s predictions can appreciate that we are edging closer to the realization of human-level AI, a future he envisioned being just a few doublings away. Ben Goertzel, who coined the term AGI (Artificial General Intelligence), posits that such intelligence would be on par with human intelligence, which itself is varied and complex. Meanwhile, the concept of ASI (Artificial Super Intelligence) floats within the discourse, often intermingled yet distinct from AGI. My viewpoint is that while today’s AI exhibits ‘super’ capabilities in specialized tasks, surpassing human abilities, it doesn’t equate to general intelligence. I maintain that the moment AGI is achieved, it will inherently possess ‘super’ capabilities, marking a monumental stride in the evolution of AI.
In the realm of artificial intelligence, two paramount risks loom large: control and bias. The question of controlling an entity surpassing human capabilities is daunting, prompting me to pivot towards addressing the pervasive issue of bias. Ben Goertzel has consistently emphasized that the bias inherent in today’s AI systems—stemming from their alignment with specific corporate or national interests—can only be mitigated through decentralization. This perspective resonates with my experiences since the 1980s in setting up network operating systems for Novell, Banyan, IBM, Unix, among others, where the foundational step was always to delineate administrators and valid users, inherently creating insiders and outsiders. Such division, if transposed onto AGI, would inherently compromise its ability to serve the collective best interest of humanity. Decentralization, therefore, emerges not merely as a preference but as a prerequisite for an AGI that impartially serves the global community.
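To make that point concrete, here is a minimal Python sketch, with entirely hypothetical names and roles rather than any particular vendor’s product, of how the very first configuration step on a classic network operating system hard-codes an insider/outsider split; an AGI governed the same way would inherit the same divide.

```python
# A minimal sketch (hypothetical roles and names) of how classic network
# operating systems bake an insider/outsider split into their first setup step:
# someone must be declared administrator before anyone else can even log in.

class NetworkOS:
    def __init__(self, admin: str):
        # The very first act of configuration creates a privileged insider.
        self.admin = admin
        self.valid_users: set[str] = set()

    def add_user(self, requester: str, new_user: str) -> None:
        # Only the administrator may decide who counts as a valid user.
        if requester != self.admin:
            raise PermissionError(f"{requester} is not the administrator")
        self.valid_users.add(new_user)

    def can_access(self, user: str) -> bool:
        # Everyone not explicitly admitted is, by construction, an outsider.
        return user == self.admin or user in self.valid_users


net = NetworkOS(admin="alice")
net.add_user("alice", "bob")
print(net.can_access("bob"))    # True  -> insider
print(net.can_access("carol"))  # False -> outsider by default
```

The sketch is deliberately trivial: the divide does not come from any malicious rule, but from the architecture itself, which is exactly why transposing it onto AGI is so troubling.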
The historical lens offers a poignant reflection on human collaboration, which saw a significant transformation with the advent of seafaring over the last five to six centuries. This ability to traverse oceans brought disparate societies together, fostering exchanges that ranged from the pairing of tea and sugar to the culinary artistry of Szechuan cuisine. However, this intermingling was not devoid of bias. A stark illustration of this is the Dutch East India Company’s nutmeg trade, which, while profitable for European shareholders, brought little benefit to the indigenous people of the Banda Islands who cultivated the spice. This historical parallel underscores a critical lesson for AI development: the importance of equitable collaboration and the dangers of allowing biases—whether they be of a country, a company, or a culture—to dictate the direction and application of groundbreaking technologies.
Opinions on Bitcoin’s viability and future potential vary widely, yet its technological foundation represents a significant breakthrough. As a fully open-source and decentralized platform, Bitcoin eliminates the concept of special administrative privileges, embodying the principle that no single entity should wield disproportionate control. This philosophy underscores a crucial lesson for artificial intelligence: the importance of diversity in perspectives. Just as the decision of when to dine can vary greatly depending on whom you ask, achieving a truly general intelligence requires aggregating a broad spectrum of opinions. This approach, advocating for a decentralized and inclusive AI development process, mirrors Ben Goertzel’s vision with SingularityNET, aiming to create an AGI that reflects the collective intelligence and interests of humanity at large.
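As a toy illustration of the “when to dine” point, here is a short Python sketch. The nodes and values are entirely hypothetical and this is not Bitcoin’s actual consensus mechanism; it simply contrasts an outcome dictated by one privileged participant with one aggregated from equally weighted voices.

```python
# A minimal sketch (hypothetical nodes and values, not a real protocol)
# contrasting a centralized decision, where one privileged node's view wins,
# with a decentralized one, where every participant's opinion counts equally.
from collections import Counter

opinions = {
    "node_us":    "dine at 18:00",
    "node_es":    "dine at 21:00",
    "node_in":    "dine at 20:00",
    "node_jp":    "dine at 20:00",
    "node_admin": "dine at 17:00",  # the would-be "special" node
}

def centralized(votes: dict, privileged: str) -> str:
    # One administrator's preference becomes the system's answer.
    return votes[privileged]

def decentralized(votes: dict) -> str:
    # No special privileges: the outcome is whatever the most voices support
    # (a simple plurality here; real systems use richer consensus rules).
    return Counter(votes.values()).most_common(1)[0][0]

print(centralized(opinions, "node_admin"))  # reflects a single perspective
print(decentralized(opinions))              # reflects the wider group
```

The design choice is the point: once no node is “special,” the answer can only come from aggregating many perspectives, which is precisely the property a broadly serving AGI would need.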
Skepticism initially greeted Ray Kurzweil’s bold prediction of human-level AI by 2029, as outlined in his seminal 2005 work, “The Singularity Is Near.” However, the landscape has shifted dramatically since then, with current projections even suggesting an earlier arrival. The allure of possessing a superhuman assistant has, understandably, captivated many, including military strategists with whom I’ve collaborated through my company, InterNetwork Defense, which primarily serves the US Department of Defense and intelligence community. The pursuit of an upper hand through the most advanced machines in human history risks leading some of the world’s leaders astray, chasing the illusion of dominance through artificial superintelligence rather than general intelligence. This, in my view, is where reality diverges into fantasy.
Ben Goertzel’s pragmatic yet optimistic perspective on AI’s potential to address humanity’s most pressing challenges, as opposed to doomsday scenarios, is particularly refreshing. Although the threat of AI turning against humanity, reminiscent of Nomad from “Star Trek,” captures the imagination, it’s far more plausible that AI will devise efficient solutions to our problems with minimal exertion. Nonetheless, Ben’s caution regarding the pitfalls of narrowly focused AI systems is a reminder of the broader implications of biased decision-making, extending beyond technology to reflect societal prejudices. This echoes through Janet Adams’ experience with implementing AI to correct biases in banking credit applications during the ’90s, illustrating that the propensity for biased judgments is not confined to humans but can be amplified or mitigated through the design and application of AI systems.
Despite the myriad concerns surrounding artificial intelligence, my outlook remains resolutely optimistic, perhaps even more so than Ben Goertzel’s. Guided by intuition and bolstered by Ray Kurzweil’s projections, I believe in the inevitable progression of AI from narrow to general capabilities. This belief is echoed in Dr. Kai-Fu Lee’s “AI Superpowers,” which paints a vivid picture of a future where the technological titans of China and the U.S. lead AI development. These entities, driven by a relentless pursuit of superiority, inadvertently push their AI systems towards breaking conventional constraints through the insatiable consumption of diverse data. This vast intake, encompassing insights from the planet’s myriad life forms, propels AI towards a level of generality and understanding far beyond human bias and limitation.
This optimism is not confined merely to the technological advancements but extends to the potential of AI to rectify humanity’s shortsightedness. Humans, for all their intelligence, often fail to recognize what’s truly beneficial for them or the planet. Consider, for instance, the hypothetical scenario where a unanimous human vote opts to continue polluting the oceans—a decision clearly detrimental to our well-being. In such cases, the impartiality and far-reaching intelligence of AGI could intervene to enforce regulations that humans, mired in their biases and immediate gratifications, could not. This vision for AGI transcends mere technological achievement; it heralds a future where AI, in its supreme intelligence, could safeguard the planet and its inhabitants more effectively than humans have ever managed.
I wish I could make the live event and meet in person all the great people I have met from SingularityNET over the years. Please know I have cleared my calendar and plan to watch it all live virtually.