AI is not inevitable.

Uncategorized
38 Posts 7 Posters 47 Views
  • ulrikehahn@fediscience.org

    @apostolis @olivia no disagreement with that!

    lednabm@stranger.social
    #27

    @UlrikeHahn @apostolis @olivia

    Wow... interesting discussion, folks. Thank you. I'm a long way from university-level experience, having been an engineer in the electronic design industry for over 40 years. We've gone from one computer shared among engineers through to AI assistance on each of our individual computers. IMHO, we need to separate what AI can do from what it does. Humans almost instinctively anthropomorphise everything. FFS... people still worship an imaginary AI in the sky and... 1/2

    • lednabm@stranger.social
      #28

      @UlrikeHahn @apostolis @olivia 2/2 ... call it their god(s). I think the best resistance is to cooperate. After all, no matter how human these things can seem, they will never be more than tools. As humans, we "feel" a lot. We need to not let our feelings blind us to what these new tools can do. I'm no teacher; I've found that my way of communicating doesn't do well at explaining to others how to think rather than what to think. I just know the tools we use evolve all the time...

      • ulrikehahn@fediscience.org
        #29

        @abucci @apostolis @olivia I’m going to point you toward the scare quotes around the word “organic” in my post, which are there for precisely those reasons.

    I am also going to push back against the notion that I am “placing the responsibility at the feet of students”: I am simply describing the (widely documented) problem in higher education that students are using AI tools in significant volumes *even where their use is explicitly sanctioned and forbidden*.

        That is the concrete problem of AI now undermining higher education. Asking what “resisting AI” is supposed to mean for me in that context seems legitimate to me, and if it’s not, Olivia (who I’ve known for a long time as an academic colleague) is more than capable of telling me that herself.

        • lednabm@stranger.social
          #30

          @abucci @UlrikeHahn @apostolis @olivia

          Wouldn't an approach where the AIs have to pass the class as students be better? After all, regurgitating data is not the way to learn how to think. As for the political/economic side of the whole mess, well, that's on us to some extent. It's a problem educated people deal with all the time, even among each other. IMHO, humanity is still growing up. We've not abandoned our superstitions for the hard, real wonder of actual nature. Is AI part of our nature?

          • ulrikehahn@fediscience.org
            #31

            @lednaBM @abucci @apostolis @olivia If I understand you correctly, you are suggesting we, in a sense, embrace AI and treat it in a way that makes it better (i.e. accept it as students)? If yes, I don’t personally want to make AI systems ‘better’ - they are causing huge damage and disruption at current levels of performance. I’d personally rather put a brake on that.

            • lednabm@stranger.social
              #32

              @UlrikeHahn @abucci @apostolis @olivia

              I understand what you're saying, and maybe language is not serving us well. You seem to have juxtaposed helping it get better against creating a disruption. And again, maybe I don't fully understand the dilemma. When I need very technical information that I cannot recall or need help with, I would go to a book or a specification. Now I can ask AI, check its results, and decide whether I can rely upon what's being presented. It's a tool... 1/2

              • lednabm@stranger.social
                #33

                @UlrikeHahn @abucci @apostolis @olivia 2/2 Tools generally need calibration. Is it possible to use the disruption you speak of as a teaching moment? I don't know. Am I being foolish about the political/economic consequences of those benefitting from the disruption? Maybe. I agree with the original poster: we should have free education, health care, and representation in the way we govern ourselves. The problem there, IMHO, is the white elephant that is religion working against secular human values.

                • ulrikehahn@fediscience.org
                  #34

                  @lednaBM @abucci @apostolis @olivia I think one of the problems, particularly in the context of education, lies in the idea that “now I can use AI to give me an answer and check the results”. It is precisely the “ability to check the results” in a particular scientific or academic discipline that higher education degrees are trying to provide. Students leaning on AI to “find” answers undermines the learning of the very skills that underpin “the ability to check”.

                  • lednabm@stranger.social
                    #35

                    @UlrikeHahn @abucci @apostolis @olivia
                    That's a great point. Teaching youth to rely only upon AI sounds like a mistake. I guess I have trouble with the notion that AI is anything more than a tool. Its applications threaten a lot, probably a lot beyond its scope, but not beyond its profit scam. Hopefully, some applications are identified as misapplications. I'm reminded of Huxley's Brave New World. Will AI be the soma drug to placate the masses, even though they were designed to be placated?

                    • aoanla@hachyderm.io
                      #36

                      @lednaBM @UlrikeHahn @abucci @apostolis @olivia At the risk of butting into this conversation, I think the problem here is that you think that "just a tool" is a neutral concept.

                      Tools, by their very nature, change the way we interact with the world. Cars are "just a tool", but dependence on cars for transport has both positive and negative effects, because of how their use changes how we behave (and what other things we want to change about the world now "we" want to use cars all the time). Is "car-using humanity" healthier than "pre-car humanity"?

                      In this sense, even if "AI is just a tool", the existence of cognitive tools *clearly* implies that use of them will change the way people behave - *regardless* of any concept of "applications being identified as misapplications". Dependence on a tool for *thinking* feels inherently more problematic than dependence on a tool for travelling distances...

                      • ulrikehahn@fediscience.org
                        #37

                        @aoanla @lednaBM @abucci @apostolis @olivia that’s very well put!

                        • abucci@buc.ci
                          #38
                          @lednaBM@stranger.social @UlrikeHahn@fediscience.org @apostolis@social.coop @olivia@scholar.social The fact that you can selectively ignore the strings of a marionette does not mean it is alive, part of our nature, or able to attend and pass a course. I suspect this is even obvious to AI!