It’s common in the AI community to emphasise the power of general-purpose methods: approaches that continue to scale with increased computation, even as the available computation grows very large. Two methods that seem to scale arbitrarily in this way are search and learning, and they significantly outperform others as computational power increases. I believe the real question, however, extends beyond merely encoding known domain knowledge into algorithms, like hard-coding the pieces in a chess engine. Human DNA doesn’t contain references to knights or bishops, so maybe training code shouldn’t contain explicit mentions of knights or bishops either. Then again, why limit an AI’s learning to what is hard-coded into human DNA, when evolution itself is an ongoing process of search and optimisation? This pushes us toward considering evolutionary approaches to developing AI, perhaps even the possibility of mirroring the evolutionary paths that led to human intelligence.
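To make the "evolution as search" framing concrete, here’s a minimal sketch of a (1+1) evolutionary loop in Python. Everything in it is an invented toy (the target string, the fitness function, the mutation rate), not a claim about any real system; the point is only that mutation plus selection performs search without the solution ever being hard-coded.

```python
import random

# Toy illustration: evolution as search. We "evolve" a bit string toward
# an arbitrary target using mutation and selection alone. Nothing about
# the answer is hard-coded beyond the fitness signal itself.

TARGET = [1] * 32  # the "environment" rewards matching this string

def fitness(genome):
    """Count positions that match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit independently with probability `rate`."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(generations=1000):
    parent = [random.randint(0, 1) for _ in range(len(TARGET))]
    for gen in range(generations):
        child = mutate(parent)
        # (1+1) selection: keep the child only if it is at least as fit
        if fitness(child) >= fitness(parent):
            parent = child
        if fitness(parent) == len(TARGET):
            return gen, parent
    return generations, parent

if __name__ == "__main__":
    gen, best = evolve()
    print(f"fitness {fitness(best)}/{len(TARGET)} after {gen} generations")
```

Of course, real evolution optimises under a far messier, shifting fitness landscape with vastly more compute, which is exactly why the analogy is tempting and humbling at the same time.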

By constructing a basic, self-improving AI, we could potentially initiate a bootstrap process that leads AI to reach, and eventually exceed, human capabilities, kind of like how evolution did, albeit with massive computational resources that we don’t have. The objective isn’t to create something that surpasses human intelligence per se, but to build highly effective systems within defined domains. For instance, while a self-driving car might benefit from human-like learning capabilities, a train-driving AI doesn’t need the same level of complexity. Here, simplicity in coding and testing is a real advantage, which suggests domain-specific approaches to AI development. Think of it like this: when faced with the challenge of building a TV, one could either emulate the historical development of TV technology or deconstruct and replicate an existing TV. The latter approach, reverse engineering, typically proves more direct and efficient, to my mind at least. It offers a clear pathway based on existing, functional models, unlike the speculative and complex route of historical replication (which is also really hard to pin down).

So while AI research suggests minimising hand-crafted features based on human expertise, it also highlights the potential of replicating certain aspects of human biology that align with current technological capabilities. Today’s technology may not yet fully replicate the brain’s intricate network, but it is on the path to mimicking some of its fundamental components: transformers for cognitive functions loosely akin to the neocortex, or improved recurrent attention models for memory processing reminiscent of the hippocampus. While the analogy is loose at best, neural network architectures and the related mathematics have led me to learn more about neuroscience than I ever thought I would.
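For the curious, here is the computational heart of that neocortex analogy: scaled dot-product self-attention, sketched in NumPy. All shapes and weights below are arbitrary toy values; a real transformer wraps this in multiple heads, residual connections, and normalisation, and nothing here is meant as a claim about how the brain actually works.

```python
import numpy as np

# A minimal scaled dot-product self-attention step, the core operation
# inside a transformer block. Sketch for illustration only.

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q, w_k, w_v: (d_model, d_k) projections."""
    q = x @ w_q                               # queries: what each token seeks
    k = x @ w_k                               # keys: what each token offers
    v = x @ w_v                               # values: the content passed on
    scores = q @ k.T / np.sqrt(k.shape[-1])   # similarity, scaled for stability
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sequence
    return weights @ v                        # each token: weighted mix of values

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 16, 8
x = rng.normal(size=(seq_len, d_model))
out = self_attention(x,
                     rng.normal(size=(d_model, d_k)),
                     rng.normal(size=(d_model, d_k)),
                     rng.normal(size=(d_model, d_k)))
print(out.shape)  # (5, 8): every position attends over the whole sequence
```

The loose brain parallel is simply that each position gathers context from everywhere else in the sequence at once, rather than processing input strictly step by step.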

In short, let’s not strive to recreate human intelligence in its entirety, but rather to develop functional, effective systems that perform specific tasks better than humans. By focusing on what can be efficiently replicated and scaled, we can craft a new generation of AI systems that stand on the shoulders of both human ingenuity and evolutionary chaos.