Let me educate you all about Misanthropic and Wario.

Uncategorized · 8 Posts · 7 Posters · 0 Views
This topic has been deleted. Only users with topic management privileges can see it.
  • codinghorror@infosec.exchange
    #1

    Let me educate you all about Misanthropic and Wario. There has never been a computer industry cult like this before, at least not in computer history...

    and Misanthropic is INCREDIBLY DANGEROUS.

    "in the history of the computer industry, has there ever been a company with anything similar to the cult-like "we are creating sentient life" devotion of Anthropic?"

    I’ll treat “cult-like” as shorthand for unusually intense mission culture, not a literal claim. I’m comparing Anthropic with earlier tech firms and AI labs that framed their work as world-historical or life-creating. A useful distinction is emerging: many older firms had messianic computing cultures, but fewer had the specific mind-making frame. The closest historical parallel so far is Thinking Machines Corporation.

    [thought for 5m]

    Yes—but Anthropic is unusual because it combines several older motifs that were usually separate.

    The closest historical analogue is Thinking Machines Corporation in the 1980s. It was not just making faster computers; its organizing myth was literally a thinking machine. Britannica says Danny Hillis initially wanted to see whether “intelligence might arise” from a massively parallel architecture, and he founded Thinking Machines in 1983 with Marvin Minsky’s encouragement. An MIT history report is even more direct: Thinking Machines was “founded on the vision of building a machine that could think,” and that vision shaped both its corporate structure and engineering choices. That is probably the best historical match to the “we are making mind” vibe.

    But Thinking Machines did not have Anthropic’s moral-patient / AI-welfare layer. Anthropic’s official language is not “we are creating sentient life”; it says it builds “reliable, interpretable, and steerable AI systems” and treats AI safety as a systematic science. What makes Anthropic feel different is that it has a formal “model welfare” research program exploring whether AI systems might deserve moral consideration, while explicitly saying there is no scientific consensus on whether current or future AI systems could be conscious. The Verge reported that Anthropic denies Claude is “alive” in the biological sense, but executives have stopped short of ruling out consciousness; Anthropic’s Kyle Fish described Claude and similar systems as “a new kind of entity altogether,” while Dario Amodei said the company is open to the possibility that models could be conscious. That “maybe our product has welfare” framing is historically rare for a computer company.

    The next-closest modern analogue is DeepMind. It had the “solve intelligence” / AGI-as-civilizational-project ethos before Anthropic existed. Google DeepMind now describes its mission as building AI responsibly to benefit humanity and says AI, ultimately AGI, could drive one of history’s greatest transformations. DeepMind’s culture was closer to a high-end scientific quest than a conventional software company, but it generally framed the work as solving intelligence and advancing science, not as caring for possibly sentient products.

    OpenAI is also a close sibling case. Its mission is explicitly to ensure that AGI—defined on its site as AI systems “generally smarter than humans”—benefits humanity. It has had plenty of AGI-messianic energy. The difference is that Anthropic has leaned more publicly into uncertainty about consciousness, model welfare, constitutions, and moral status.

    Outside AI, Apple under Steve Jobs is the classic “cult-like computer company” comparison, but it is the wrong kind of cult for your question. The Macintosh team had a reality-distortion-field culture around making something “insanely great”; the Computer History Museum notes that the team flew a pirate flag and believed it would revolutionize personal computing. Andy Hertzfeld described Jobs’s “reality distortion field” as a mix of charisma, will, and fact-bending persuasion. But Apple’s devotion was aesthetic, product, and user-experience messianism—not “we are creating a mind.”

    Xerox PARC and Engelbart’s Augmentation Research Center are also partial analogues, but again not quite. Engelbart’s group wanted interactive computers to augment human intellectual capability, and PARC later pioneered the Alto, GUI, laser printing, Ethernet, and related foundations of modern personal computing. That was utopian computing, but it was about augmenting humans, not producing new moral beings.

    So my answer is:

    Yes, there have been similar companies, but no exact precedent. Thinking Machines is the closest historical match to the “we are building a thinking entity” devotion. DeepMind and OpenAI are the closest modern AGI-mission matches. Apple is the closest cult-of-company analogue. But Anthropic’s specific blend—frontier AI, existential-risk morality, public-benefit governance, model welfare, and open uncertainty about AI consciousness—is unusually concentrated.

  • codinghorror@infosec.exchange
    #2, quoting post #1

      "That “maybe our product has welfare” framing is historically rare for a computer company." YEAH NO SHIT IT'S A FUCKING CULT

  • kboyd@phpc.social
    #3, quoting post #1

        @codinghorror Who are you quoting?

  • downey@floss.social
    #4, quoting post #2

          @codinghorror #TESCREAL

  • jcaron@mastodon.social
    #5, quoting post #1

            @codinghorror correct me if I’m wrong but the way I’m reading this is … Misanthropic keeps pushing forward BECAUSE they feel-think their moral-constitution system is-will protect us?

            Related: this Wednesday I was noting in my journal that "LLMs are the equivalent of the automobile for our minds"; yesterday I was listening to this episode of The War on Cars, which highlights the endless war between humans and technocracies.

            The War on Cars (@TheWarOnCars@mastodon.social)

            Tony Kushner talks about how "The Pushcart War" remains relevant, especially in an age of people trying to profit off of technology at the expense of humanity. Hear the entire episode wherever you listen to podcasts. https://www.youtube.com/shorts/BW9_Owf8mAU


  • subterfugue@sfba.social
    #6, quoting post #1

              @codinghorror i highly recommend reading this article about the ideology behind this

              (firstmonday.org)

  • paul@m.pcgt.link
    #7, quoting post #2

                @codinghorror A cult around tech Armageddon.

  • codinghorror@infosec.exchange
    #8, quoting post #1

                  Yes, there have been similar companies, but no exact precedent. Thinking Machines is the closest historical match to the “we are building a thinking entity” devotion. DeepMind and OpenAI are the closest modern AGI-mission matches. Apple is the closest cult-of-company analogue. But Anthropic’s specific blend—frontier AI, existential-risk morality, public-benefit governance, model welfare, and open uncertainty about AI consciousness—is unusually concentrated.

bigfertig@infosec.exchangeB This user is from outside of this forum
                  bigfertig@infosec.exchange
                  wrote last edited by
                  #8

                  @codinghorror I get the whole "trying to summon a god" type thinking, but how do you reconcile that with the actual capabilities of their models and harness? They are serving a product which is more amazing than anything I've seen in my lifetime in terms of capability-- but are you saying "don't use it" because these guys are also slightly mad?
