Feb 20, 2024
Current attempts at AI desperately need context, that's for sure: a kind of Markov-like, at least short-term, memory of the space they explore.
Ensembles look promising for getting better results generally (and more recent thinking emphasises them over ever-larger LLMs), but either way, if I understand correctly, I totally agree: we need proper context and memory, and therefore more informed and relatable reasoning. A rough sketch of what I mean is below.
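To make the idea a bit more concrete, here's a minimal Python sketch of the two pieces together: a small ensemble of models voting on an answer, each fed a rolling short-term memory of recent turns. The names (`call_model`, the model labels) are placeholders I made up, not any real API.

```python
# Minimal sketch: a rolling short-term memory buffer plus a small ensemble
# of models voting on an answer. `call_model` is a stand-in stub, not a
# real API; swap in whatever client you actually use.
from collections import Counter, deque

MEMORY_TURNS = 5  # keep only the last few exchanges (the "Markov-like" part)
memory = deque(maxlen=MEMORY_TURNS)

def call_model(name: str, prompt: str) -> str:
    """Stub standing in for a real model call."""
    return f"answer-from-{name}"

def ask_ensemble(question: str, models=("model-a", "model-b", "model-c")) -> str:
    # Fold the short-term memory into the prompt so each model sees recent context.
    context = "\n".join(memory)
    prompt = f"{context}\n\nQ: {question}" if context else f"Q: {question}"

    # Simple majority vote across the ensemble.
    answers = [call_model(m, prompt) for m in models]
    best, _ = Counter(answers).most_common(1)[0]

    # Remember this exchange so the next turn has context.
    memory.append(f"Q: {question}\nA: {best}")
    return best

print(ask_ensemble("What do current AI systems lack?"))
```

The point of the sketch is just that the memory lives outside any single model, and the ensemble only ever sees the last few turns of it.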
Are any modern philosophers talking specifically about this?