3 Comments

You lost me with the AlphaGo example. I wasn't aware of those specific moves, and while that AI wasn't capable of anything other than playing Go, it was a specialist that demonstrated the opposite of what you actually wanted to imply: that AI system did NOT merely discover and "parrot" humans, but found novel moves humans never came up with in millions of games over centuries. So while LLMs are trained with next-token prediction, they certainly already have the ability to connect dots, and the line between synthetic intelligence, simulated intelligence, biological intelligence, and biological banality is far more blurred than you hope. Disclaimer: that's just a general statement, not in any way a prediction about AGI, which would be a different discussion.


Thank you for this thoughtful comment! You make an excellent point about AlphaGo, and I should have explained this example better.

You're absolutely right: AlphaGo showed us something remarkable. It discovered entirely new patterns in Go that humans hadn't found after thousands of years of playing. I didn't mean to imply it just copied human moves. In fact, AlphaGo's Move 37 and Move 78 against Lee Sedol perfectly demonstrate AI's ability to find novel patterns in data.

The key distinction I was trying to make: While AlphaGo found groundbreaking patterns in Go data, it could only find patterns in Go. Compare this to Darwin, who made a mental leap to understand how species might evolve. Humans can use metaphorical thinking to connect patterns across totally different fields.

As you point out, the line between different types of intelligence isn't as clean as I suggested. The distinction between synthetic, simulated, and biological intelligence is more complex. As the saying goes, "All models are wrong, but some are useful." I hope this framework helps us think about what capabilities AGI would need: not just finding patterns in one domain, but connecting patterns across different fields the way humans do.

Thanks for helping me think about this more clearly!


In a way, they had built an AGoI 😁 The thing is: that proved it's technically possible for us to build AI systems capable of surpassing human intelligence and ingenuity. The question is how exactly to build systems that can do this in domains more open-ended than a board game. Maybe PhD-level math, as OpenAI is promising with the new o1? Maybe something different, and then something else again... But there is no reason to believe there are mysterious general restrictions on building an AGI. The only questions are which exact steps we have missed so far, how long it will take, and how much it will cost. Big Tech is betting big that they will get there before the end of the decade. We will see; it's not impossible...
