Because the TensorFlow default distribution is built without CPU extensions such as SSE4.1, SSE4.2, AVX, AVX2, FMA, etc. The default builds (the ones from pip install tensorflow) are intended to be compatible with as many CPUs as possible.
-
Yeah, there's this magic thing called runtime detection. I'm surprised such a supposedly great framework can't manage that when I can.
> Another argument is that even with these extensions a CPU is a lot slower than a GPU, and medium- and large-scale machine-learning training is expected to be performed on a GPU.
yeah, fuck the poors, amirite?
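(For the curious: the runtime detection in question really is only a handful of lines. A minimal sketch in C, assuming GCC or Clang on x86-64; the sum_scalar/sum_avx2 names are made up for illustration.)

```c
/* Minimal sketch of runtime CPU-feature dispatch: one binary, two code
 * paths, chosen when the program runs rather than when it's compiled. */
#include <stdio.h>

/* Baseline implementation: built for the lowest common denominator,
 * runs on any x86-64 CPU. */
static float sum_scalar(const float *x, int n) {
    float acc = 0.0f;
    for (int i = 0; i < n; i++) acc += x[i];
    return acc;
}

/* AVX2 implementation: the target attribute lets the compiler use AVX2
 * instructions for this one function only, so the rest of the binary
 * stays portable. */
__attribute__((target("avx2")))
static float sum_avx2(const float *x, int n) {
    float acc = 0.0f;
    for (int i = 0; i < n; i++) acc += x[i];
    return acc;
}

int main(void) {
    float data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    __builtin_cpu_init();                 /* populate CPU-feature info */
    float s;
    if (__builtin_cpu_supports("avx2"))   /* decided at run time */
        s = sum_avx2(data, 8);
    else
        s = sum_scalar(data, 8);
    printf("%f\n", s);
    return 0;
}
```

Build with gcc -O2 example.c: the AVX2 path is entered only when the running CPU reports support, so one binary covers old and new machines alike. GCC can even generate this kind of dispatch automatically via the target_clones function attribute.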
-
@dysfun you're overlooking that this could add megabytes to an otherwise multi-gig package
-
@milas yes, clearly runtime detection would be the problem...
-
@dysfun shit, late-90s gamedev could manage it