Because tensorflow default distribution is built without CPU extensions, such as SSE4.1, SSE4.2, AVX, AVX2, FMA, etc. The default builds (ones from pip install tensorflow) are intended to be compatible with as many CPUs as possible.

Uncategorized · 4 Posts · 3 Posters
#1 · dysfun@social.treehouse.systems

> Because tensorflow default distribution is built without CPU extensions, such as SSE4.1, SSE4.2, AVX, AVX2, FMA, etc. The default builds (ones from pip install tensorflow) are intended to be compatible with as many CPUs as possible.

Yeah, there's this magic thing called runtime detection. I'm surprised such a supposedly great framework can't manage that when I can.

> Another argument is that even with these extensions CPU is a lot slower than a GPU, and it's expected for medium- and large-scale machine-learning training to be performed on a GPU.

yeah, fuck the poors, amirite?
#2 · milas@social.notaphish.fyi (in reply to dysfun)

@dysfun you're overlooking that this could add megabytes to an otherwise multi-gig package
#3 · dysfun@social.treehouse.systems (in reply to milas)

@milas yes, clearly runtime detection would be the problem...
#4 · flippac@types.pl (in reply to dysfun)

@dysfun shit, late 90s gamedev could