AI with an Off-Switch? and Self-Supervised Learning | Stars, Cells, and God

Join Jeff Zweerink and computer scientist Dustin Morley as they discuss new discoveries taking place at the frontiers of science that have theological and philosophical implications, including the reality of God’s existence.

AI with an Off-Switch?

As we contemplate what a world with true AI (general or super, rather than narrow, artificial intelligence) looks like, the question of how we interact with AI inevitably arises. Specifically, what do we do when AI pursues a path that is harmful to humanity? One proposed safeguard is an off switch that we control, but would the AI leave the off switch enabled? One study showed that programming uncertainty into the AI about its objective may give the AI an incentive to leave the off switch functional. However, that uncertainty also diminishes the AI’s effectiveness in achieving its objective. We discuss some of the apologetic implications of this study.


The Off-Switch Game
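The incentive described above can be illustrated with a toy expected-utility calculation. This is a simplified sketch of the idea, not the linked paper's exact model: the AI is uncertain about the true utility U of its plan, and if it defers to a human overseer, the plan runs only when the human judges U to be positive; otherwise the human presses the off switch.

```python
import random

# Toy sketch (assumed example): the AI's belief about the true utility U
# of its plan is a distribution centered at zero (maximal uncertainty).
random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# Option 1: bypass the off switch and act -- expected value E[U].
act_now = sum(samples) / len(samples)

# Option 2: defer to the human, who switches the AI off whenever U < 0,
# so the AI receives E[max(U, 0)].
defer = sum(max(u, 0.0) for u in samples) / len(samples)

# Option 3: switch itself off -- utility 0.
switch_off = 0.0

# Because max(u, 0) >= u pointwise, deferring never does worse than
# acting unilaterally, so the uncertain AI prefers a working off switch.
assert defer >= act_now and defer >= switch_off
print(f"act now: {act_now:.3f}, defer to human: {defer:.3f}")
```

Note the trade-off the episode mentions: the deferring AI forgoes acting on plans the human (perhaps wrongly) vetoes, which is exactly how uncertainty diminishes its effectiveness.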

Self-Supervised Learning

Recent major breakthroughs in public-facing artificial intelligence (AI), such as OpenAI’s ChatGPT and Tesla’s self-driving software, have succeeded in part because of complex, multi-component deep learning architectures in which each component can be trained or fine-tuned while the other components are held fixed, effectively decoupling different steps or subtasks from one another. A new paper (still in preprint) demonstrates significant success with blockwise self-supervised learning, pushing this kind of AI versatility even further. What does this mean for the near-term future of AI, and what implications does it have for the age-old comparison between AI and human intelligence?


Blockwise Self-Supervised Learning at Scale
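The decoupling described above can be illustrated with a minimal sketch: train one component of a model while leaving another frozen. This is an assumed toy example (not the paper's method): a two-block model where a frozen "feature extractor" weight stays fixed while gradient descent fits only the trainable "head" weight.

```python
# Toy sketch (assumed example) of component-wise training: the model is
# y = w2 * relu(w1 * x). The first block (w1) is treated as pretrained
# and frozen; only the second block (w2) is updated.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # target function: y = 2x

w1 = 1.0   # frozen block -- never updated below
w2 = 0.0   # trainable block, starts untrained
lr = 0.05  # learning rate

for _ in range(200):
    for x, y in data:
        h = max(w1 * x, 0.0)          # forward pass through the frozen block
        pred = w2 * h                 # forward pass through the trainable block
        grad_w2 = 2 * (pred - y) * h  # gradient of squared error w.r.t. w2 only
        w2 -= lr * grad_w2            # update the head; w1 stays fixed

print(round(w2, 2))  # converges near 2.0
```

In frameworks like PyTorch, the same idea is expressed by disabling gradient tracking on a component's parameters so the optimizer leaves them untouched while other components learn.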