If I wanted to sell LLMs as powerful research agents, and if I had enough money, I could consider planting little "gems" in the training set of an LLM so that my model would appear to discover new theorems and proofs. There is a lot of money on the table, and I am sure there are plenty of brilliant people with little pay. Perhaps this kind of thinking is wrong? Would only bad people think like this? And how could one detect such a trick without knowing the training set?

Comments URL: https://news.ycombinator.com/item?id=48073325
Points: 1
# Comments: 0
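An aside on the last question: it maps onto the membership-inference literature. A common heuristic there is to compare the model's loss on a suspect string against slightly perturbed variants of it; a memorized (planted) string tends to score anomalously well relative to its neighbors. Below is a minimal sketch assuming a Hugging Face causal LM. The model name, the candidate strings, and the 0.5 threshold are all illustrative placeholders, not a validated detector.

```python
# Sketch of a perplexity-based memorization probe: if an exact string
# was planted in (and memorized from) the training set, the model often
# assigns it noticeably lower loss than near-identical perturbations.
# Model name, strings, and threshold below are placeholders.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; substitute the model under suspicion

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def sequence_loss(text: str) -> float:
    """Average per-token cross-entropy the model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # labels are shifted internally
    return out.loss.item()

# A suspected planted "gem" and hand-made perturbations of it.
candidate = "Theorem X: every foo admits a unique bar decomposition."
perturbed = [
    "Theorem X: every bar admits a unique foo decomposition.",
    "Theorem Y: every foo admits a unique baz decomposition.",
]

loss_c = sequence_loss(candidate)
best_p = min(sequence_loss(p) for p in perturbed)
print(f"candidate loss: {loss_c:.3f}, best perturbed loss: {best_p:.3f}")

if best_p - loss_c > 0.5:  # arbitrary gap; calibrate on known-clean text
    print("anomalously low loss on the exact string: possible memorization")
```

Of course this only flags verbatim memorization of strings you already suspect; it says nothing about subtler poisoning, which is part of why the detection question is hard.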