I tried out the Gemma AI models from Google, running locally on my AMD APU (Ryzen 7 Pro 7840U with Radeon 780M) and asked it some questions about ZFS send / receive.
gemma-4-26B-A4B-Q4_K_M:
14.29 tok/sec. The information it generated was factually correct and well laid out. Not the fastest, but surprisingly good.

gemma-4-E4B-Q4_K_M:
26 tok/sec. The information was completely wrong, with made-up parameters. The presentation was confident and well laid out. But it generated it quickly.

Bottom line: confidently incorrect at high speeds.
-
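For context, this is the kind of send/receive workflow the questions were about — a minimal sketch with hypothetical pool, dataset, and snapshot names (none of these are from the post):

```shell
# Hypothetical names: tank/data is the source dataset, backup/data the target.
# First, a full replication of an initial snapshot:
zfs snapshot tank/data@sunday
zfs send tank/data@sunday | zfs receive backup/data

# Later, ship only the changes since @sunday with an incremental send (-i):
zfs snapshot tank/data@monday
zfs send -i tank/data@sunday tank/data@monday | zfs receive backup/data
```

The made-up parameters the faster model invented would be flags that simply do not exist in `zfs send` / `zfs receive` — which is exactly why the confident presentation is dangerous.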
You can now run this on your phone too:
I'm currently testing whether I can build a simple Mastodon CLI tool – with Gemma4 for spell checking.
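A minimal sketch of how such a CLI spell-check step could call a locally served Gemma model — assuming a llama.cpp-style server with an OpenAI-compatible chat endpoint on localhost; the URL, port, and prompt are assumptions, not details from the post:

```shell
#!/bin/sh
# Hypothetical: spellcheck.sh TEXT
# Builds a chat-completion request asking the local model to fix spelling only.
TEXT="$1"
API_URL="http://127.0.0.1:8080/v1/chat/completions"   # assumed local server

# Assemble the JSON payload with jq so the user text is safely escaped.
PAYLOAD=$(jq -n --arg text "$TEXT" '{
  messages: [
    {role: "system",
     content: "Fix spelling mistakes only. Return the corrected text, nothing else."},
    {role: "user", content: $text}
  ],
  temperature: 0
}')

# POST the request and print the model's corrected text.
curl -s -H "Content-Type: application/json" -d "$PAYLOAD" "$API_URL" |
  jq -r '.choices[0].message.content'
```

Pinning `temperature` to 0 is a deliberate choice for a checking task: you want the most likely correction, not creative rewording.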
