> a company that genuinely believes it might be building one of the most transformative and potentially dangerous technologies in human history, yet presses forward anyway. This isn't cognitive dissonance but rather a calculated bet
These are supposed to be our least deranged AI cultists, huh.
Also: I love when there are clear examples of significant chunks of training data encoded in LLMs, because people have spent a lot of time arguing to me that they simply don't do that, and...