To those who say I overdid it with my MacBook Pro on the specs, how do you know?
-
To those who say I overdid it with my MacBook Pro on the specs, how do you know? With vibe coding, who knows what I will do with it?
@technocounselor I've never understood why people feel so free to express negative opinions about another person's technology choices. It really is not their business. Most people don't choose to spend as little as I do on technology, but that doesn't give me the right to impose my choices on others. I really hope you enjoy your new Mac.

-
@technocounselor @Bruce I think it is interesting that you are going for that end of the spectrum, and I am seriously considering a MacBook Neo. It will be especially interesting if we both end up being pretty happy with what we got.
@JamiePauls @Bruce Yeah we should podcast about it.
-
@technocounselor I've never understood why people feel so free to express negative opinions about another person's technology choices. It really is not their business. Most people don't choose to spend as little as I do on technology, but that doesn't give me the right to impose my choices on others. I really hope you enjoy your new Mac.

@Lynn Thank you, and I’m so glad that your phone is serving you well.

-
@JamiePauls @Bruce Yeah we should podcast about it.
@technocounselor @Bruce I seriously love that idea actually. We could make that a dang good podcast.
-
@Lynn Thank you, and I’m so glad that your phone is serving you well.

@technocounselor You're welcome! Yes, I'm glad too.

-
@Lynn Thank you, and I’m so glad that your phone is serving you well.

@Lynn Hi Lynn. How are you doing this evening? It's 10:39 PM Atlantic here in Nova Scotia.
-
@Lynn Hi Lynn. How are you doing this evening? It's 10:39 PM Atlantic here in Nova Scotia.
@carrottop1023 Hi, Tim! I'm doing well, thanks. It has been a nice day. Hope you and Stephanie are well.
-
@carrottop1023 Hi, Tim! I'm doing well, thanks. It has been a nice day. Hope you and Stephanie are well.
@Lynn It wasn't too bad today.
-
@Lynn It wasn't too bad today.
-
@Rooktallon The person you want to ask about that would be @mikedoise
@technocounselor @Rooktallon It would depend on the hardware. If you are on a PC, it will depend on whether you have a GPU or not. You would need at least 8 GB of video RAM to do agentic work on a PC.
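For context, a quick way to see how much video RAM a card actually reports is a short PyTorch check; this sketch assumes an NVIDIA GPU with CUDA drivers and PyTorch installed, and is only illustrative of the 8 GB threshold mentioned above.

```python
# Minimal sketch: report the GPU's video RAM (assumes PyTorch with CUDA support).
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}")
    print(f"Video RAM: {props.total_memory / (1024 ** 3):.1f} GB")
else:
    print("No CUDA-capable GPU detected; models would have to run on the CPU.")
```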
-
@technocounselor @Rooktallon It would depend on the hardware. If you are on a PC, it will depend on whether you have a GPU or not. You would need at least 8 GB of video RAM to do agentic work on a PC.
@mikedoise @technocounselor I have a 3060 with 32 gigs of RAM. It's a 12th-gen Intel i7. Should that be decent enough?
-
@technocounselor @Rooktallon It would depend on the hardware. If you are on a PC, it will depend on whether you have a GPU or not. You would need at least 8 GB of video RAM to do agentic work on a PC.
@mikedoise @technocounselor I mean, I can already run LLMs for roleplay/fictional story creation and all that stuff. Is agentic work really that different?
-
@technocounselor @Bruce I seriously love that idea actually. We could make that a dang good podcast.
@JamiePauls @Bruce Yeah we could, and ROBERT would be the mac in the middle.

-
@mikedoise @technocounselor I mean, I can already run LLMs for roleplay/fictional story creation and all that stuff. Is agentic work really that different?
@Rooktallon @technocounselor You can try it, but it will depend on your VRAM. LLMs can be creative, but agentic work requires the LLM to reason, call tools, and figure out what to do with the data they return. The agent also needs a large enough context window to reason out what it should do, which means a lot more video memory is required. LLMs like graphics chips because the information can be processed in parallel instead of on just a few cores.
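As a rough illustration of what "call tools" means here, below is a minimal sketch of an agent-style loop against a local Ollama server. Everything in it is an assumption for the example: the model tag, the toy get_time tool, and the small num_ctx setting (a real agent would usually want far more context).

```python
# Minimal sketch of a tool-calling loop against a local Ollama server.
# Assumptions: Ollama is running on its default port, a tool-capable model is pulled,
# and get_time is a toy tool invented for this example.
from datetime import datetime

import requests

MODEL = "qwen2.5"  # placeholder: any tool-capable model tag you have pulled

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_time",
        "description": "Return the current local time",
        "parameters": {"type": "object", "properties": {}},
    },
}]

def get_time() -> str:
    return datetime.now().isoformat()

messages = [{"role": "user", "content": "What time is it right now?"}]

# First pass: the model decides whether to answer directly or request a tool call.
resp = requests.post("http://localhost:11434/api/chat", json={
    "model": MODEL,
    "messages": messages,
    "tools": TOOLS,
    "stream": False,
    "options": {"num_ctx": 8192},  # the context window; agents usually need much more
}).json()

msg = resp["message"]
messages.append(msg)

# If the model requested the tool, run it and feed the result back.
for call in msg.get("tool_calls", []):
    if call["function"]["name"] == "get_time":
        messages.append({"role": "tool", "content": get_time()})

# Second pass: the model turns the tool output into a final answer.
final = requests.post("http://localhost:11434/api/chat", json={
    "model": MODEL,
    "messages": messages,
    "stream": False,
}).json()
print(final["message"]["content"])
```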
-
@Rooktallon @technocounselor You can try it, but it will depend on your VRAM. LLMs can be creative, but agentic work requires the LLM to reason, call tools, and figure out what to do with the data they return. The agent also needs a large enough context window to reason out what it should do, which means a lot more video memory is required. LLMs like graphics chips because the information can be processed in parallel instead of on just a few cores.
@mikedoise @technocounselor Hmmm, I see. So, if I wanted to see if it'd actually work, which one should I try?
-
@mikedoise @technocounselor Hmmm, I see. So, if I wanted to see if it'd actually work, which one should I try?
@Rooktallon @technocounselor I’d try Ollama with Hermes Agent. It is the easiest to set up. Do you know how much video RAM you have?
-
@Rooktallon @technocounselor I’d try Ollama with Hermes Agent. It is the easiest to set up. Do you know how much video RAM you have?
@mikedoise @technocounselor Apparently 12 gigs, I think?
-
@mikedoise @technocounselor Apparently 12 gigs, I think?
@Rooktallon @technocounselor Try Gemma 4 E4B. You might be able to get away with a context window of 64K or 96K tokens. That is a guess, but you’d have to try it.
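If it helps, here is a minimal sketch of asking Ollama for a roughly 64K-token context window from Python; the model tag is a placeholder, since the exact Ollama tag for the model named above may differ from what you have pulled.

```python
# Minimal sketch: request a ~64K-token context window from a local Ollama model.
# The model tag is a placeholder; substitute whatever tag you actually pulled.
import ollama  # pip install ollama

response = ollama.chat(
    model="your-model-tag",
    messages=[{"role": "user", "content": "Summarize the plan for today."}],
    options={"num_ctx": 65536},  # try 98304 for ~96K, or drop lower if you run out of VRAM
)
print(response["message"]["content"])
```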
-
@Rooktallon @technocounselor Try Gemma 4 E4B. You might be able to get away with a context window of 64K or 96K tokens. That is a guess, but you’d have to try it.
@mikedoise @technocounselor So, you have to program the thing yourself, it would seem: enter the code and run it through the CLI, is that right?
-
@mikedoise @technocounselor So, you have to program the thing yourself, it would seem: enter the code and run it through the CLI, is that right?
@Rooktallon @technocounselor Hermes Agent has a one-line paste that you use in the terminal. Not sure if it will work in CMD or PowerShell, but it will for sure work with Windows Subsystem for Linux. Paste that in, and make sure Ollama is installed. You then have to set up Hermes Agent from the command line with the arrow keys. It is not the easiest process.
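The Hermes Agent one-liner itself comes from that project's own instructions, so it is not reproduced here, but a small sketch like this can confirm the Ollama side is installed, running, and has a model pulled before you start the agent setup.

```python
# Minimal sketch: check that a local Ollama server is reachable and list pulled models.
import requests

try:
    tags = requests.get("http://localhost:11434/api/tags", timeout=5).json()
except requests.ConnectionError:
    print("Ollama does not appear to be running on localhost:11434.")
else:
    models = [m["name"] for m in tags.get("models", [])]
    print("Ollama is up. Local models:", models or "none pulled yet")
```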