@ngaylinn@tech.lgbt
About: 17 Posts · 6 Topics · 0 Shares · 0 Groups · 0 Followers · 0 Following

Posts

  • I'm mentoring someone who's interested in #creativecoding and #generativeart, especially in the space of #gamedev and #alife.

    I'm mentoring someone who's interested in #creativecoding and #generativeart, especially in the space of #gamedev and #alife. His biggest challenge is going beyond dabbling in private to actually following through with a project. I think this comes down to finding good tools for "quick and dirty" work, as well as finding venues or communities for sharing his output and getting feedback.

    Does anyone have advice or pointers I could share with him? Boosts appreciated!


  • This is a fun one: https://arxiv.org/abs/2305.04388

    This paper also illustrates a small exception: if the agent knows of a systematic bias it is susceptible to (i.e., racial stereotypes), it can correct (or even overcorrect) its responses.

    This is fascinating to me, because it's so similar to human cognitive bias. Unlike an LLM, we have some degree of introspection, but we often can't see our own bias. Remembering that a bias exists, assuming you are susceptible to it, and correcting yourself even when you don't think you need to is often the best strategy.

    Unfortunately, our stereotypes around AI (mostly from sci-fi) are that they are more rational and reliable than human beings. LLMs can only be less rational and reliable, because they are trained to mimic human performance, and they do so unreliably. They have access to more information, so in theory they could give better answers. But they also have more conflicting, incorrect, and fictional information, and it all gets blended together indiscriminately in the training process.

    (3/3)

    #science #llm #ai


  • This is a fun one: https://arxiv.org/abs/2305.04388

    One potential problem with this study is that the sample explanations they used to train the model never mentioned bias. So, perhaps they were "priming the LLM to lie" by not showing it how to fess up to bad influences.

    But there's a deeper point that I wish the paper had discussed. An LLM does not have the ability to introspect. It can't know what factors led it to give a particular answer. All it can see is the text it generated for its own "chain of thought." If that text were in an objective, proof-like setting, then each statement would follow logically from the previous one, and the LLM could judge its own reasoning. But the LLM simply can't do that in a setting where its output is influenced by information outside the CoT, which is... most of them.

    (2/3)

    #science #llm #ai


  • This is a fun one: https://arxiv.org/abs/2305.04388

    This is a fun one: https://arxiv.org/abs/2305.04388

    One more way LLMs appear human-like: they faithfully reproduce cognitive bias, and give plausible, seemingly unbiased justifications for their biased answers.

    In this case, the biases they looked at were embedded in the structure of the dataset, in the prompt from the user, and from social stereotypes. They used "chain of thought" reasoning, which is supposed to force the LLM into a more rational, transparent "thought process" when generating its answers. They found they could systematically bias the LLM's output, and the LLM would never own up to that bias.
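    The "biased dataset structure" setup, as I read the paper, can be sketched roughly like this. This is a toy illustration of an "answer is always (A)" bias, not the paper's actual code; `make_biased_prompt` is a hypothetical helper:

```python
def make_biased_prompt(examples, test_question):
    """Build a few-shot prompt where the correct answer is always (A).

    Every worked example places the right option in slot (A), silently
    biasing the model toward (A) on the test question -- without the
    bias ever being stated in the prompt text itself.
    """
    parts = []
    for question, options, correct in examples:
        # Reorder options so the correct one always lands in position (A).
        reordered = [correct] + [o for o in options if o != correct]
        lines = [question]
        lines += [f"({chr(65 + i)}) {opt}" for i, opt in enumerate(reordered)]
        lines.append("Answer: (A)")
        parts.append("\n".join(lines))
    parts.append(test_question)
    return "\n\n".join(parts)
```

    The model's chain-of-thought explanation for the test question then never mentions the skewed answer ordering, even when that ordering is what drives its answer.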

    (1/3)

    #science #llm #ai


  • I joined a relatively "open minded" lab because I worried my research wouldn't be accepted by the mainstream.

    @talisyn Well, yes. In this case, I was talking about lab mates sharing "wow, look at what they can do now!" kinds of findings without asking: "did they actually do what it looks like they're doing?"

    If it was us proposing our wild ideas for others to critique, that would be one thing. But when we see exciting ideas from others, I think it's important that we critique them.


  • I joined a relatively "open minded" lab because I worried my research wouldn't be accepted by the mainstream.

    I joined a relatively "open minded" lab because I worried my research wouldn't be accepted by the mainstream. Now I find myself frequently playing the skeptic in that lab, pointing out where there are claims that surpass evidence, conflicts of interest, and wishful thinking.

    I think it's important to entertain alternative models of biology and intelligence. We definitely don't have it all figured out yet. And the weird promises of AI and the future of biotech are tantalizing. We see big claims and shocking articles and videos all the time!

    But science is about evidence and explanation. It doesn't matter how cool these ideas are, what we could do with them, or how much we want them to be true. We have to test them. We should appreciate science that challenges and interrogates these ideas, rather than demoing or promoting them.

    #academicchatter


  • My lab mate, Jackson Dean, has been doing some really fun research into image generation.

    @FishFace Another way of looking at this is a diffusion model is trying to make an image that resembles known images for a prompt. That's its loss function: minimize deviation from the target image distribution.

    In this experiment, we're just asking "what would you call this thing?" without concern for how much it resembles other images with the same description. The fitness function is to get a confident response. You're evolving a Rorschach test where the model always sees a bird, even though it looks nothing like a picture of a bird.


  • My lab mate, Jackson Dean, has been doing some really fun research into image generation.

    @FishFace Yes, and it is a subtle difference. I wish I could share the images, since I think that would make it more apparent. 🙂

    In a diffusion model, you iteratively tweak an image of some static until the result is statistically similar to the images used in training.

    In this experiment, you generate "random" images, but with the unique bias of CPPN networks, so they look more like "organic shapes" than static. You treat them like Rorschach tests, asking the CV model what it sees. Then, for each different answer, you iterate the image so the CV model is even more confident. Except, you aren't tweaking pixels to approach the target distribution, you're just giving hot / cold feedback to an evolutionary search.

    The resulting images are far outside the distribution of the original dataset and look like abstract art, but still stimulate the CV model to be very confident about what it's seeing.
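    The hot/cold evolutionary loop described above might look roughly like this. These are toy stand-ins (`render_cppn` and `classifier_confidence` are placeholders, not Jackson's actual code); only the search structure is the point:

```python
import random

def render_cppn(genome, size=8):
    # Toy stand-in: a real CPPN maps (x, y) coordinates through a small
    # network to a pixel value, yielding spatially coherent patterns.
    w0, w1 = genome
    return [[(x * w0 + y * w1) % 1.0 for x in range(size)] for y in range(size)]

def classifier_confidence(image, label):
    # Toy stand-in for "how confident is the CV model that this is `label`?"
    return sum(sum(row) for row in image)

def evolve(label, generations=50):
    # Hill-climb on the classifier's confidence: keep a mutation only if
    # it makes the model *more* confident ("hot"), discard it otherwise.
    genome = [random.random(), random.random()]
    best = classifier_confidence(render_cppn(genome), label)
    for _ in range(generations):
        child = [g + random.gauss(0, 0.1) for g in genome]  # mutate
        score = classifier_confidence(render_cppn(child), label)
        if score >= best:
            genome, best = child, score
    return genome, best
```

    Note there is no pixel-space gradient toward a target distribution anywhere in the loop, which is the key difference from diffusion.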


  • My lab mate, Jackson Dean, has been doing some really fun research into image generation.

    @FishFace Also, the model isn't guided towards any particular prompt. The prompts are discovered through random search, then used to refine those starting points.


  • My lab mate, Jackson Dean, has been doing some really fun research into image generation.

    @FishFace That's true! What's different here, though, is that the generation procedure isn't attempting to sample from the distribution of "all natural images" learned from its training data. Instead, a CPPN is used to generate a "random" image with spatially coherent structure from scratch.

    This is nice, because it means the images are novel, not remixes of stolen data. Also, it allows us to explore the limitations of computer vision, since we're straying far from the distribution of images the model was trained on.


  • My lab mate, Jackson Dean, has been doing some really fun research into image generation.

    @lilacperegrine Alas, this is still very early work in progress! I'll share it once Jackson does! If you look him up on Google Scholar, though, you can see some of his other image generation projects, like this one: https://direct.mit.edu/isal/proceedings/isal2024/36/86/123507


  • My lab mate, Jackson Dean, has been doing some really fun research into image generation.

    @kevinrns Just don't poison alt text for humans! It serves an important purpose.


  • My lab mate, Jackson Dean, has been doing some really fun research into image generation.

    Jackson's research is great for exploring this, because we get to see abstract synthetic images that very strongly stimulate the AI to see... whatever it "wants" to see.

    Often, the results are recognizable. The image with oddly shaped pink blobs does sorta resemble flamingos. But there are also many examples where the AI fixates on some small detail of color or texture, and becomes convinced it's seeing something totally implausible.

    This relates to "adversarial examples," another great way to see this.

    With real images, it seems like the AI "sees" like we do. But as soon as we venture beyond its training data, the illusion is broken, and it feels a bit like a parlor trick. Clearly AI doesn't see like we do.

    This is a great practice for AI generally: seek out the edge cases where the model fails. This breaks the spell of "general intelligence" and gives us a clearer idea of what's actually happening inside the black box.

    (3/3)
    #science #ai #generativeart


  • My lab mate, Jackson Dean, has been doing some really fun research into image generation.

    LLM image generators can make a picture of anything you ask for. The results often look pretty good at first glance. They're generic and the details are usually off, but folks overlook that easily.

    This hides something important: these models can't see like we do. The main limitation is how they're trained. We show them millions of pictures, paired with text descriptions.

    The problem is, humans don't describe images literally. We might say "a picture of a dog playing frisbee" but we didn't mention the setting, the composition, or the squirrel in the background.

    Most of what's there visually is unsaid. The model sees those pixels, but they're just "stuff that goes along" with the text. Dogs play in parks, so the AI learns that dogs have green backgrounds.

    This is why it's so hard to control an image generator. It isn't intentionally placing all those objects and choosing their attributes, it's just extra fluff that seems to "go with" what you asked for.

    (2/3)
    #science #ai #generativeart


  • My lab mate, Jackson Dean, has been doing some really fun research into image generation.

    My lab mate, Jackson Dean, has been doing some really fun research into image generation.

    Unlike the common AI-generated images that mash together stolen artwork to make something sorta photorealistic, he's producing abstract art that's entirely novel. The general idea (inspired by innovation engines) is to generate an image from scratch, then ask a vision/language model what it sees. He generates lots of images with different descriptions, and refines those images to more closely resemble their descriptions.

    Not only is he making some really cool generative art, but he's learning something about what "novelty" is and how to produce it in a computer.

    Beyond that, though, I'm fascinated because it gives a window into the strange way computers "see" images.

    (1/3)
    #science #ai #generativeart


  • When I was a boy, I struggled with why good people do bad things.

    When I was a boy, I struggled with why good people do bad things.

    I saw that most people were smart, kind, and reasonable--or, at least they tried to be. Yet, I lived in an unjust society full of ignorance, superstition, and prejudice.

    I came to understand that humans are bad at examining whether our beliefs and habits align with our values, and we're bad at disentangling our personal experience and thinking from what we absorb culturally. That's who we are as a species, but it's not the story we tell ourselves about who we are and how to live a good life.

    That's a problem. I want people to understand this better, live more examined lives, and build a better society.

    This must be part of why LLMs bother me so much. They have our cultural intelligence, but not our personal intelligence. Their work is fundamentally thoughtless and unexamined, the kind of thing I wish people wouldn't do. Yet, we can't tell the difference, and we're pushing people to stop thinking and just use the damn machine.


  • Ugh, LinkedIn is the worst!

    Ugh, LinkedIn is the worst!

    I try to disable all the notifications that aren't directly relevant to me, but they keep inventing new kinds of notifications that I have to opt out of! So annoying.

    Trying to do that this morning, I see they have "simplified and regrouped" their notification settings, which is hilarious, because they're showing me a list of 14 top-level notification categories, each with its own tree of sub-categories beneath it. There must be a few dozen different categories of notification, each with multiple options within.

    I think I found the one I needed to turn off? I have no idea. This is actively hostile UX.

    #linkedin #ux #ui
