
I generally prefer the MIT license for my personal projects.

Uncategorized
44 Posts 14 Posters 113 Views
asie@mk.asie.pl:

    @gloriouscow@oldbytes.space I think a key reason LLMs do better with programming than other fields is that code is much more hopelessly repetitive than we like to admit to ourselves. To borrow your example, how many Mandelbrot renderers were written on GitHub? And that's a niche example - think of things people write for a living, CRUD services, REST APIs, login pages, parsing libraries, wrappers...

    I agree, and have said for a while now, that it is a disservice to frame the opposition to the LLM boom in terms of anything other than (a) opposition to Big Tech's view of the world and (b) a kind of labor dispute. Copyright laws can be changed; power efficiency can improve; slop can be made less sloppy by making the number of weight-monkeys approach infinity - under the condition that the music doesn't stop first - which I think is what companies like OpenAI and Anthropic are banking on.

    Personally, my key issue is the idea of what I call "digital sovereignty". I do not want to be beholden to a cloud subscription to do the most basic elements of my job or my passion, because I have seen where that road takes us: enshittification, rising prices, customer-hostile changes, even geopolitical problems. Notably, this doesn't apply to so-called "open weight" models, but the "good ones" are both still behind SOTA and unviable for all but the largest polycules, not to mention the RAM/SSD pricing upheaval.

    I am also concerned about the copyright angle, deskilling, AI psychosis, cultural impact, et cetera - but for more practical reasons. I also still believe LLMs are an evolutionary dead end for artificial intelligence, even if they have gotten considerably further than I anticipated.

    In addition, I've seen many groups concede that while they are not interested in AI generated art or music (Adam Neely's video on Suno AI raises a lot of good points about that), they don't mind, say, AI generated code. This personally makes me a little sad, but I understand that for most people art is an end, but code is merely a means to an end.

    But I don't believe the technology itself, as in the mathematical equations or the idea of generating tokens using LLMs in response to inputs, is inherently evil. I really like viznut's essay on that matter:
    http://viznut.fi/texts-en/machine_learning_rant.html - but I've also seen LLM efforts which try to avoid, say, the mass copyright infringement problem, and while their results certainly look more impressive than I anticipated, they also aren't really commercially viable, so to speak.

    Final note - a lot of people trying LLM-based technology compare it to a slot machine, in that the quality of the result you get is highly unpredictable. I think, outside of niche tech circles, some don't realize that so many things have already become akin to gambling. Sports, mobile games, software bugs, cloud services, apparently the news, etc. - in that lens, ChatGPT becomes just another unreliable tool, not something uniquely unreliable.

    asie@mk.asie.pl (#32)

    @gloriouscow@oldbytes.space

    (And I continue to question how good these tools have become in a general sense. I've seen a community member try, I believe, Gemini 2.5 Flash to summarize its own scraped Discord posts (in particular, overseas travel advice). It, uh, it didn't go well. Though we did laugh a lot, between the conversations about consent it provoked.)

gloriouscow@oldbytes.space:

      To be fair I've seen the opposite happen as well, where people will take code released into the public domain and write Rust bindings for it and release those as GPL or some other more restrictive license, and I think that sucks too.

      How hard is it to just - keep the same license? Just preserve the author's intent. They had a vision in mind and made a choice when they put their creative energies out into the world. Pass that forward.

      xanathar@hachyderm.io (#33)

      @gloriouscow bindings are a different thing, though; they are little more than a header file on steroids. For example, I totally see (and there are plenty of examples) the case for MIT bindings to LGPL libraries. They don't alter or remove the original licensing terms, so it makes sense for them to carry the least legally binding license, imho.


        gloriouscow@oldbytes.space (#34)

        @asie

        The first thing I did, of course, was try to find out if it had copied something. There were not a lot of examples of ASM Mandelbrots to go through on GitHub: many were 16-bit, and most that were 32-bit used the FPU, or instructions not available on the 386, or had some other disqualifier from direct plagiarism.

        After coming up empty on GitHub I spent a fair bit of time pulling down Mandelbrot demos from Pouët, as they sometimes include source code.

        There were clear and apparent differences in every example I looked at. I learned a rather interesting trick for getting a pointer to the VGA framebuffer going through those!

        In any case, it was clear that the demo-coders were more skilled, keeping everything in registers in the main iteration loop, whereas the GPT example was using several temporary variables in RAM.
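        For reference, the inner loop everyone here is implementing is the standard escape-time iteration, which is tiny; a minimal Rust sketch of it (my own illustration, not the GPT output or any demo's code):

```rust
// Escape-time iteration for one point c = (cx, cy) of the Mandelbrot set.
// Keeping zx and zy in locals lets the compiler hold them in registers,
// the same trick the demo-coders applied by hand in assembly.
fn mandelbrot_iters(cx: f64, cy: f64, max_iters: u32) -> u32 {
    let (mut zx, mut zy) = (0.0_f64, 0.0_f64);
    let mut i = 0;
    // Iterate z = z^2 + c until |z| > 2 (escape) or the budget runs out.
    while i < max_iters && zx * zx + zy * zy <= 4.0 {
        let tmp = zx * zx - zy * zy + cx;
        zy = 2.0 * zx * zy + cy;
        zx = tmp;
        i += 1;
    }
    i
}
```

        The iteration count is then mapped to a palette index per pixel; points that never escape (e.g. c = 0) hit max_iters and are drawn as the set's interior.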

        But I was just impressed that it worked at all.

        The entire point was a request it would have failed at miserably a year prior, and something that leaned toward having as little training data available as possible. But when these companies have scraped every single corner of the internet by now, it might be difficult to pinpoint any particular task that doesn't have some sort of preceding example they can leverage.

        It's difficult for me to measure improvement in quantifiable terms other than by giving it these sorts of challenges. You can see the various scores on things like ARC-AGI trending upwards with every new model, but that sort of thing is a rather abstract measure: what does it relate to in practical terms?

        I feel like the AI companies must thank their lucky stars that coding ended up being AI's "killer app". OpenAI would never succeed with something as vapid as Sora as their flagship product.

        The greater acceptance of generative AI by programmers is a very interesting phenomenon. There's probably quite a few psychology thesis papers to mine out of that topic. I'm not really ready to be completely cynical regarding the motivations of programmers vs visual artists or musicians. There may be something more fundamental at play.


          asie@mk.asie.pl (#35)

          @gloriouscow@oldbytes.space

          I don't think observing a difference in values is cynical. If you value productivity more than digital sovereignty or ecology, or if you don't hold a positive view of copyright, or if you hold a positive view of modern-day corporate capitalism, why wouldn't you use these tools?

          The most cynical thing I think I believe about generative AI users is that LLMs often enable a kind of narcissistic-leaning tendency to treat the feedback loop as a first resort over other humans. It was particularly apparent to me in the case of the music generation tool Suno AI, where people were hard-pressed to name other AI-generating users who inspire them, or even other AI-generated music they listen to! I don't think that's a good change.

          And, of course, I am worried about the backlash against AI-generated works pivoting against humans who aren't skilled enough to avoid being accused of being LLM tool users. I mean, this has already been happening.


            gloriouscow@oldbytes.space (#36)

            @xanathar the way Rust FFI bindings work, though, is that they typically end up wrapped in a crate together with the original source, and that's problematic, because the LGPL code is inside. I noted several FFI binding crates on crates.io that were marked MIT, the implication being that you can just happily cargo add them to your MIT-licensed project and go about your day, but you're actually now linking with LGPL code.

            The bindings themselves are useless without the code they are binding to, so I see no compelling reason to use a different license.


              gloriouscow@oldbytes.space (#37)

              @asie Those are all various categories of basic moral failings, but what I struggle with is knowing many people personally, people who I would call friends, who are happily just vibing away with Claude all day long.

              The impartial observer might just suggest that this is the point where I realize they are all Bad People or such. It takes a lot more than that for me to write someone off. I've always viewed people as morally complicated, and I am not exactly a saint myself.

              There is even a lot of frank hypocrisy in the anti-AI crowd: a lot of people with dozens of terabytes of pirated movies and books on their NAS are suddenly outraged about companies not respecting copyright.

              I tend to think that people can hold at best a handful of ethical positions that are actually important to them, and then the brain just exhausts any ability to give a shit beyond that. The brain literally does not have the time to sit and be outraged about everything worth being outraged over. The more abstract you make it, and the more steps removed you are from direct responsibility, the more likely you can just shrug it off.

              I really don't think the average person gives any sort of shit that Anthropic illegally scanned Harry Potter. I really don't.


                asie@mk.asie.pl (#38)

                @gloriouscow@oldbytes.space

                > The impartial observer might just suggest that this is the point where I realize they are all Bad People or such.

                I don't think this observer would be impartial. I think it takes a very specific, if not exactly unpopular, mindset to decide LLMs are Bad People technology but almost everything that came before them is not. I have spent my time being wary of social media, for example, instituting a personal boycott of Meta in particular, though I acknowledge that too is somewhat hypocritical of me.

                > There is even a lot of frank hypocrisy in the anti-AI crowd: a lot of people with dozens of terabytes of pirated movies and books on their NAS are suddenly outraged about companies not respecting copyright.

                I don't think that's hypocrisy, however, but a difference in values. There exist "information wants to be free" pirates, and there exist "fuck the corporations" pirates. The former are going to be enthusiastic about LLM research; the latter are going to be apprehensive.


                  xanathar@hachyderm.io (#39)

                  @gloriouscow the compelling reason is how you solve the problem you're highlighting. The LGPL crate must be linked, not imported (any crate doing otherwise is in violation of the licensing terms, imho). The bindings, otoh, should be non-LGPL because they are source-imported (so not subject to the linking exception) and would spread the LGPL everywhere else if they were. The overall result would still be subject to the LGPL requirements for the linked library, because that part is linked, while allowing free use via linking (i.e. respecting what I'd interpret to be the will of the original licensing).
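                  The "linked, not imported" arrangement is easy to picture as a -sys crate whose build script only emits link directives; a sketch, with the crate and library names entirely hypothetical:

```rust
// build.rs sketch for a hypothetical `foo-sys` binding crate: the LGPL
// library is linked at build time, never vendored into the crate source.
fn link_directives(lib: &str) -> Vec<String> {
    vec![
        // Ask rustc to link the shared library; the LGPL code stays in
        // libfoo.so, outside the crate, so the thin bindings can ship MIT.
        format!("cargo:rustc-link-lib=dylib={lib}"),
        // Where to find it (the path is illustrative).
        "cargo:rustc-link-search=native=/usr/lib".to_string(),
    ]
}
```

                  A real build.rs would println! each directive for Cargo to pick up; the point is simply that nothing LGPL-licensed ever enters the MIT-licensed crate itself.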


                    xanathar@hachyderm.io (#40)

                    @gloriouscow about cargo add-ing dependencies while trusting the license determination of the first crate: that is a mistake no matter what. It's still the responsibility of whoever runs cargo add to verify all the linked dependencies. GPL is only one case; n-clause BSD, for example, also needs special treatment and is likely to get buried in the deps tree. cargo-license can help, but not with external linkage.

                    Anyway, all of this of course assumes good will, which is not the case for the "despicable guy using LLMs to circumvent terms".

gloriouscow@oldbytes.space:

                      I see a lot of derisive dismissal of AI on grounds other than ethical ones, and I somehow feel it is a mistaken approach, almost like a vegan trying to convince you that all steak tastes bad.

                      I feel it is a dangerous underestimation of the immense resources in both talent and money being brought to bear on the problem.

                      Too many people focus on where AI currently is, forgetting where it was just scant years ago, and ignoring its current velocity.

                      I feel like anyone actually paying attention and testing each model that comes out knows that laughing it off as "slop" is not going to remain particularly amusing for long.

                      Only a year ago ChatGPT couldn't write Hello World in x86 assembly, and now it will emit a complete, working, 32-bit MS-DOS Mandelbrot generator in a single prompt.

                      The slop is starting to not look so very sloppy.

                      The only argument that I predict will not age extremely poorly is the ethical one.

                      After all, it is not like if ChatGPT stopped hallucinating and glazing and regurgitating its inputs tomorrow, you'd suddenly be okay with it - so why use any other argument other than that it is a leviathan in the hands of the oligarchy?

                      Slop or Shakespeare, that doesn't change.

                      janeishly@beige.party (#41)

                      @gloriouscow What about the environmental one? I feel that's actually the most important, even more than "you stole this material" or "if you make millions of people unemployed, where's the money coming from to fill the hole of their taxes?"


                        gloriouscow@oldbytes.space (#42)

                        @janeishly I would certainly include that under the ethical umbrella.

                        We've had a hard time motivating people to give a shit about the environment even before AI, though, so good luck making them care now.

                        My growing cynicism signals it is time for bed.


                          gloriouscow@oldbytes.space (#43)

                          @xanathar the LGPL has a carve-out exception for header files, or you wouldn't be able to even dynamically link any LGPL library in the first place, because including the header would have already poisoned your binary.

                          Now, if my bindgen.rs doesn't qualify as a "header", I will happily add verbiage at the top of it stating that particular file is public domain; my assumption was that it serves exactly the purpose intended, allowing use of "numerical parameters, data structure layouts and accessors."

                          If you've got a source that states otherwise, I am happy to do some reading to better educate myself and make sure my crate isn't actually unusable.
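                          For context, a bindgen-generated file is typically nothing but declarations mirroring the C header; a hand-written sketch of the kind of material involved, with all names made up for illustration:

```rust
// The sort of content a bindgen-style bindings file holds: constants,
// struct layouts, and function prototypes mirroring a C header.
// `foo_config` and `foo_render` are hypothetical names.
#[allow(non_camel_case_types)]
#[repr(C)]
pub struct foo_config {
    pub width: u32,
    pub height: u32,
}

pub const FOO_MAX_DEPTH: u32 = 256;

extern "C" {
    // Prototype only; the implementation lives in the separately
    // linked LGPL library, not in this file.
    pub fn foo_render(cfg: *const foo_config) -> i32;
}
```

                          This is plausibly the "numerical parameters, data structure layouts and accessors" material the carve-out describes, though as the thread notes, IANAL.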


                            xanathar@hachyderm.io (#44)

                            @gloriouscow reading your toot, I think we are talking about two slightly different use cases, which is possibly why we are ending up at different conclusions. It's interesting, because the difference shouldn't be something that influences the decision (which goes to show how flaky these arguments tend to be).

                            If I understood correctly, you are discussing a binding.rs that is from the same "package" as the original LGPL-licensed library, i.e. a primary-source binding.rs file, which can probably be interpreted as a header file for the library being linked; thus I believe what you say is right.

                            I was discussing a case more like gtk-rs-sys, which is licensed MIT (libgtk is LGPL) and is made by different people (sort of), i.e. it's a binding crate. In that case the binding crate is a different project/different artifact, and at that point my feeling is that it cannot inherit the header exception of the original LGPL crate, so I believe it should have its separate licensing, but IANAL.

                            Things get even more complicated if you expand the picture to include gtk-rs, which adds oxidized wrappers on top, thus no longer being "trivial".

                            Seems hairy re-reading it all.

                            1 Reply Last reply
                            0