Following the launch of the Llama 3 models, Meta CEO Mark Zuckerberg said that data is not the most valuable asset when it comes to training AI models. In an interview with The Verge, he said, “The thing that I think is going to be more valuable is the feedback loops rather than any kind of upfront corpus.”
Feedback loops are used to train, retrain, and improve AI models: they feed a model's previous outputs, along with signals about where those outputs went wrong, back into further training. Zuckerberg said, “Having a lot of people use it and then seeing how people use it and being able to improve from there is actually going to be a more differentiating thing over time.”
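To illustrate the idea in the simplest possible terms, here is a minimal sketch of a product feedback loop. The `generate()` call and the thumbs-up/down rating are hypothetical stand-ins, not Meta's actual pipeline; the point is only that positively rated interactions become new training examples for the next round of fine-tuning.

```python
# Minimal sketch of a feedback loop: log user interactions with a model,
# keep the ones users rated well, and turn them into fine-tuning examples.
# `generate()` and the thumbs_up signal are hypothetical placeholders.

from dataclasses import dataclass
from typing import List


@dataclass
class Interaction:
    prompt: str
    response: str
    thumbs_up: bool  # explicit user feedback on this response


def generate(prompt: str) -> str:
    """Placeholder for a call to the deployed model."""
    return f"(model answer to: {prompt})"


def collect_feedback(prompts: List[str]) -> List[Interaction]:
    interactions = []
    for prompt in prompts:
        response = generate(prompt)
        # In a real product this rating would come from the user interface.
        thumbs_up = True
        interactions.append(Interaction(prompt, response, thumbs_up))
    return interactions


def build_finetuning_set(interactions: List[Interaction]) -> List[dict]:
    # Keep only interactions users rated positively; these become
    # supervised examples for the next training round.
    return [
        {"prompt": i.prompt, "completion": i.response}
        for i in interactions
        if i.thumbs_up
    ]


if __name__ == "__main__":
    logged = collect_feedback(["How do feedback loops improve a model?"])
    print(build_finetuning_set(logged))
```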
As the race for training data intensifies, synthetic data is something Meta and other players are looking at. Synthetic data is artificially generated to mimic real-world data, and Zuckerberg said, “I think there’s going to be a lot in synthetic data, where you are having the models trying to churn on different problems and see which paths end up working, and then use that to reinforce.”
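The "churn on different problems and keep what works" idea can be sketched as a simple generate-and-verify loop. The `propose_solutions()` sampler and the arithmetic verifier below are hypothetical toy stand-ins for a model and a checker; only the candidate answers that pass verification are kept as synthetic training data.

```python
# Minimal sketch of synthetic data generation by sampling and verification:
# a model proposes several candidate answers per problem, a programmatic
# checker keeps the ones that work, and those become training examples.
# propose_solutions() and the toy arithmetic problems are hypothetical.

import random
from typing import List, Tuple


def propose_solutions(problem: Tuple[int, int], n: int = 5) -> List[int]:
    """Stand-in for a model sampling several candidate answers."""
    a, b = problem
    return [a + b + random.choice([-1, 0, 0, 1]) for _ in range(n)]


def is_correct(problem: Tuple[int, int], answer: int) -> bool:
    """Programmatic verifier: here, exact arithmetic."""
    a, b = problem
    return answer == a + b


def generate_synthetic_data(problems: List[Tuple[int, int]]) -> List[dict]:
    dataset = []
    for problem in problems:
        for answer in propose_solutions(problem):
            if is_correct(problem, answer):
                # Only the "paths that end up working" are reinforced.
                dataset.append({"problem": problem, "answer": answer})
                break
    return dataset


if __name__ == "__main__":
    print(generate_synthetic_data([(2, 3), (10, 7)]))
```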
On the introduction of Meta AI in WhatsApp, Instagram, and Messenger, Zuckerberg said, “I don’t think that today many people really think about Meta AI when they think about the main AI assistants that people use. But I think that this is the moment where we’re really going to start introducing it to a lot of people, and I expect it to be quite a major product.”