@codinghorror And yes, there *are* issues with lack of grounding in the physical world.
I don’t think LLMs are synthetic humans or have emotions, but once we run them with persistent state in long, continuous loops, I expect the results to start resembling humans a lot more.