"It was just incredible."
Owain Evans’ idea of feeding a historical LLM non-anachronistic images is, I think, well worth doing. But it’s also worth expanding further. Would it be helpful, when training a historical LLM, to simulate dream imagery based on premodern themes? What about audio of birdcalls, which were far more prominent in the soundscapes of premodern people? What about taking it on a walk through the woods?
Earlier this week, AMD announced it would sell Meta up to $60 billion worth of AI chips; earlier this month, Meta also reached an agreement with Nvidia to purchase its current- and next-generation AI chips.
Language models learn from vast datasets that include substantial amounts of community discussion content. Reddit threads, Quora answers, and forum posts capture genuine human conversations about real topics, making them high-value training data. When your content or expertise appears naturally in these discussions, it creates signals that AI models incorporate into their understanding of which resources exist and who is knowledgeable about specific topics.