Meta's chief AI scientist explains why AI models can't be trained like humans
Tech companies like Google, OpenAI, Facebook parent Meta, Anthropic and others are using different types of data to train their AI models. But according to Meta's chief AI scientist Yann LeCun, current AI models are not yet advanced enough to be compared with animals, let alone humans.
“Animals and humans get very smart very quickly with vastly smaller amounts of training data than current AI systems. Current LLMs are trained on text that would take 20,000 years for a human to read,” LeCun said in a post on Threads.
He said that despite being trained on such a vast amount of data, AI models still haven't learned that if A is the same as B, then B is the same as A.