do we need models to be continuous learners?

current models have a bottleneck: they can't learn from their mistakes and retain that knowledge for continuous improvement.

dwarkesh patel's agi argument

learning bottleneck: dwarkesh argues that today's models do not learn the way humans do; they do not get better at complex tasks through experience or feedback.

humans naturally learn from their mistakes: we steer away from errors we've made before and adjust our actions more quickly each time.

as context grows, performance degrades: models may correct their mistakes within a short conversation, but as context grows, performance degrades because they cannot retain those learnings.

nathan lambert's boeing 747 perspective

ai does not need to be a human: nathan wisely argues:

"We're no longer trying to build the bird, we're trying to transition the Wright Brothers' invention into the 737 in the shortest time frame possible."

system design matters: solid systems can be built on current models if they are architected correctly.

human needs focus: optimize for what humans actually want from ai agents, not human replication.

a balanced path forward

short term pragmatism: current models can power useful systems with proper architecture and constraints.

long term necessity: truly autonomous agents will need human-like learning and continuous context handling.

incremental progress: bridge the gap with external memory systems and better training techniques.
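
to make the "external memory" idea concrete, here's a minimal sketch of a mistake log that gets retrieved back into the prompt on the next attempt. the names (`MistakeMemory`, `build_prompt`) and the keyword-overlap retrieval are illustrative assumptions, not any particular framework; a real system would likely use embeddings and a persistent store.

```python
# a minimal sketch of an external memory layer for an agent loop.
# all names here are hypothetical; retrieval is naive keyword overlap.
from dataclasses import dataclass, field


@dataclass
class Lesson:
    task: str      # short description of the task that failed
    mistake: str   # what went wrong
    fix: str       # what to do differently next time


@dataclass
class MistakeMemory:
    lessons: list[Lesson] = field(default_factory=list)

    def record(self, task: str, mistake: str, fix: str) -> None:
        """Persist a lesson after a failed attempt."""
        self.lessons.append(Lesson(task, mistake, fix))

    def relevant(self, task: str, k: int = 3) -> list[Lesson]:
        """Rank lessons by word overlap with the new task (embeddings in practice)."""
        words = set(task.lower().split())
        scored = sorted(
            self.lessons,
            key=lambda lesson: len(words & set(lesson.task.lower().split())),
            reverse=True,
        )
        return scored[:k]


def build_prompt(memory: MistakeMemory, task: str) -> str:
    """Inject retrieved lessons into the context so the model 'remembers'
    past mistakes without any weight updates."""
    notes = "\n".join(
        f"- previously: {lesson.mistake} -> fix: {lesson.fix}"
        for lesson in memory.relevant(task)
    )
    return f"lessons from past attempts:\n{notes}\n\ntask: {task}"


if __name__ == "__main__":
    mem = MistakeMemory()
    mem.record("parse csv export", "assumed utf-8 encoding", "detect encoding first")
    print(build_prompt(mem, "parse the quarterly csv export"))
```

this kind of retrieval-into-context is a workaround, not continuous learning: the model's weights never change, and the memory only helps as long as the relevant lessons fit in the prompt.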

the answer isn't binary. we can build valuable systems today while recognizing that true ai agents will require continuous learning.


the boeing 747 analogy holds: we don't need to replicate human cognition exactly, but we do need models that learn and adapt like humans do.