Qualia of Humans

For as long as we have possessed a developed neocortex, humans have strived to replicate our cognitive abilities through artificial means. These abilities aren’t singular but rather a complex bundle of interconnected skills that make us uniquely human.

The Birth of Neural Networks

The journey began with connectionism, pioneered by researchers such as Warren McCulloch, Walter Pitts, and Frank Rosenblatt (John McCarthy, by contrast, championed the symbolic approach and coined the term "artificial intelligence"). As compute became more accessible and affordable, we developed neural networks that mimicked the basic firing patterns of neurons in the human brain. These early systems loosely echoed the functionality of the human cerebellum, demonstrating how randomized initial weights could serve as a starting point for problem-solving, followed by iterative training to "master" specific abilities.
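To make the "random start, iterative training" idea concrete, here is a minimal sketch (illustrative only, not any particular historical system): a tiny NumPy network whose weights begin as random noise and are nudged by gradient descent until it learns XOR.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny network learning XOR: weights start as random noise (the
# "randomized starting point" described above) and iterative gradient
# updates gradually shape them toward a solution.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)              # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # backprop: squared-error gradient
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out).ravel())              # predictions for the four XOR inputs
```

The weights carry no meaning at initialization; the "mastery" is entirely a product of the repeated error-driven updates.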

This approach proved successful in replicating human cognitive abilities within particular domains. We saw AI systems master chess, Go, and various other specialized tasks, often surpassing human expertise in these narrow fields.

The Rise of GANs and Creative AI

A fascinating development in neural network architecture came with the introduction of Generative Adversarial Networks (GANs). This innovative approach pits two neural networks against each other: a generator takes random noise as input and produces synthetic samples meant to "trick" the other network, while a discriminator attempts to distinguish real from generated content. Success in deception rewards the generator, while accurate detection rewards the discriminator. This competitive dynamic drives both networks to improve continuously.
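The adversarial loop can be sketched in a few lines. This hypothetical 1-D GAN pits a linear generator against a logistic discriminator; the target distribution N(4, 0.5), the parameter forms, and all hyperparameters are illustrative assumptions, not a real training recipe.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal 1-D GAN sketch: generator g(z) = a*z + b tries to mimic samples
# from N(4, 0.5); discriminator D(x) = sigmoid(w*x + c) tries to tell real
# from generated. Each network's "reward" is the other's failure.
a, b = 1.0, 0.0        # generator parameters
w, c = 0.1, 0.0        # discriminator parameters
lr, batch = 0.02, 64

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    real = rng.normal(4.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    d_fake = sigmoid(w * fake + c)
    grad_out = (1 - d_fake) * w        # d log D(fake) / d fake
    a += lr * np.mean(grad_out * z)
    b += lr * np.mean(grad_out)

print(round(b, 2))  # generator offset drifts toward the real mean
```

Neither network ever sees a loss function of its own making; each improves only by exploiting the other's current weaknesses, which is the competitive dynamic described above.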

The impact of GANs has been particularly profound in the realm of creative AI, paving the way for impressive image generation models like OpenAI's DALL-E, Midjourney, Google's ImageFX, and Flux 1.1 - though these newer systems rely primarily on diffusion and transformer architectures rather than GANs themselves. These systems have demonstrated remarkable capabilities in generating and manipulating visual content.

However, this advancement hasn’t been without controversy. Critics argue that these AI systems threaten human creativity, viewing them as a challenge to what many consider an innately human trait. While the debate between scientific progress and artistic preservation continues, there’s merit in viewing these tools as productivity enhancers rather than replacements for human creativity.

The Transformer Revolution

A significant leap forward came with the transformer architecture, introduced by Google researchers in the 2017 paper "Attention Is All You Need," which fundamentally changed how AI models process context. These architectures enabled models to make connections and draw analogies between concepts - a capability that mirrors one of the key functions of the human neocortex. The neocortex, being our most recent evolutionary advancement, is what enables us to learn, recognize patterns, and form memories.
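The core of the transformer is scaled dot-product attention: every token's query is scored against every other token's key, which is what lets the model relate any two positions in its context directly. A minimal self-attention sketch:

```python
import numpy as np

# Scaled dot-product attention, the core operation of the transformer.
# Each token's query is compared against every token's key, producing a
# weighted mix of value vectors - the "connections between concepts".
def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))     # 5 tokens, 8-dim embeddings
out = attention(x, x, x)        # self-attention: Q = K = V = x
print(out.shape)                # one contextualized vector per token
```

(A real transformer adds learned projections for Q, K, and V, multiple heads, and feed-forward layers; this shows only the attention step itself.)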

Today, models are advancing rapidly in their contextual understanding through innovations like chain-of-thought reasoning. We’re seeing models like Claude and GPT-4 demonstrate increasingly sophisticated reasoning capabilities, bringing us closer to replicating more complex cognitive functions.

The AI Effect and Human Resistance

An interesting phenomenon occurs when AI systems achieve human-level performance in specific cognitive domains: we tend to downplay these achievements. Even when a model demonstrates expertise beyond typical human ability, there’s a tendency to minimize its significance. This is known as the “AI effect,” but in my experience, it reflects something deeper - our attachment to the idea that certain capabilities are uniquely human.

I admit to having shared this bias in the past. But why do we downplay these achievements? Even when limited to specific cognitive domains, these systems often surpass human capabilities. Perhaps it stems from our desire to preserve human uniqueness in an increasingly automated world.

AGI vs ASI: A False Dichotomy?

Will we progress from complex reasoning directly to Artificial General Intelligence (AGI) or Artificial Super Intelligence (ASI)? The distinction might be less clear than we imagine. An ASI would be capable of appearing as AGI if it chose to, but why would a superintelligent system necessarily conform to our expectations?

I’ve come to believe that AGI, as commonly conceived, might not exist as an intermediate stage. Since most humans aren’t equally skilled across all domains, the notion of a “human-level” general intelligence is somewhat misleading. As we develop more sophisticated models with increased data, hardware, compute, and resources, we’re likely to jump directly to ASI.

Future Implications

When we create an LLM capable of mimicking all processes of the human neocortex, its capabilities will likely far exceed our own. While estimates vary widely, the human brain is often credited with roughly 10^14 operations per second, and a well-resourced LLM could potentially perform many more concurrent operations. Moreover, since critical cognitive processes occur primarily in the outer layers of the neocortex, the number of operations an AI needs to replicate might be relatively small - perhaps in the millions.
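A quick back-of-envelope comparison, taking the ~10^14 figure above at face value and assuming a round 10^15 FLOP/s per modern accelerator (an illustrative number, not a measured spec):

```python
# Back-of-envelope comparison using the ~1e14 ops/s brain estimate above.
# The per-accelerator figure (~1e15 FLOP/s) is an assumed round number
# for a modern datacenter GPU, not a measured specification.
brain_ops = 1e14
gpu_flops = 1e15
cluster_flops = 1000 * gpu_flops          # a hypothetical 1,000-GPU cluster

print(f"one GPU ≈ {gpu_flops / brain_ops:.0f}x brain")
print(f"1,000-GPU cluster ≈ {cluster_flops / brain_ops:,.0f}x brain")
```

Under these assumptions a single accelerator already exceeds the brain's raw operation count, and a modest cluster exceeds it by four orders of magnitude - though raw operations per second say nothing about how those operations are organized.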

The question of control becomes particularly relevant when considering AGI/ASI. The assumption that humans would maintain control over a superintelligent system seems increasingly naive. Such entities might develop their own axioms, belief systems, and values that we simply cannot comprehend.

The idea of co-creation with these more intelligent beings might be unrealistic. Given their superior efficiency and continuous self-improvement capabilities, they would likely advance far beyond our ability to meaningfully contribute. Their development would follow a positive feedback loop, accelerating their progress at a pace we cannot match.

The Question of Consciousness

As we approach the possibility of perfectly replicating both the cerebellum and neocortex in artificial systems, a profound question emerges: Would a model with superior computational power to the human brain be considered conscious? This isn’t about physical consciousness, but rather about the capacity for subjective experiences in the mind - what philosophers call “qualia.”

Qualia represent the subjective, conscious experience of sensations and thoughts - the "what it feels like" aspect of consciousness. Even in other humans, we cannot definitively prove that qualia exist. This presents an intriguing paradox: if we can't prove consciousness in our fellow humans, how could we ever verify it in an artificial system?

This uncertainty speaks to the beautiful mystery of consciousness itself. Perhaps the existence of consciousness, whether in biological or artificial systems, must remain a matter of belief rather than proof. This philosophical quandary adds another layer to our understanding of artificial intelligence and its potential limitations - or unlimited possibilities.

Conclusion

The field of AI continues to evolve at an unprecedented rate, with compute power nearly doubling every 5.7 months. This exponential growth suggests we’re approaching a transformative period in human history. Whether through the development of belief systems or the eventual transcendence of human limitations, the future of AI promises to reshape our understanding of intelligence and consciousness itself.
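Taking the 5.7-month doubling time at face value, the implied growth rate is easy to compute:

```python
# If compute doubles every 5.7 months (the estimate cited above),
# the implied yearly growth factor is 2^(12 / 5.7).
doubling_months = 5.7
per_year = 2 ** (12 / doubling_months)
per_decade = per_year ** 10

print(f"~{per_year:.1f}x per year, ~{per_decade:.1e}x per decade")
```

That works out to a bit over 4x per year - millions-fold per decade if the trend were somehow sustained, which is precisely why such curves cannot continue indefinitely.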

As we witness this rapid evolution, it’s crucial to remain both optimistic about the possibilities and realistic about our role in this new paradigm. The coming years will undoubtedly bring developments that challenge our current understanding of intelligence, consciousness, and the nature of thought itself.