• nil@piefed.ca · +1 · 34 minutes ago

    She’s a bit redundant but powerful. You need a lot of money to maintain the relationship, and she keeps asking for more accessories.

  • sp3ctr4l@lemmy.dbzer0.com · +6/−2 · 7 hours ago

    I mean, you can run an LLM locally; it’s not that hard (quick sketch at the end of this comment).

    And you can run such a local machine off of solar power, if you have an energy-efficient setup.

    It is possible to use this tech in a way that is not horrendously evil, and instead merely somewhat questionable, lol.

    Hell, I guess you could arguably literally warm a room of your home with your conversations.
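
    For reference, here’s roughly what “run an LLM locally” can look like, assuming llama-cpp-python is installed and you’ve downloaded a quantized GGUF model; the model path is just a placeholder, not a recommendation:

    ```python
    # Minimal local inference with llama-cpp-python: no network calls once
    # the model file is on disk. The file name below is a placeholder.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/some-7b-model.Q4_K_M.gguf", n_ctx=2048)

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Explain what a solar charge controller does."}],
        max_tokens=128,
    )
    print(out["choices"][0]["message"]["content"])
    ```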

    • nandeEbisu@lemmy.world · +2 · 2 hours ago

      As far as energy goes, it’s a matter of degree. LLMs are mainly bad emissions-wise because of the sheer volume of calls being made. If you’re running one on your own GPU, the emissions are comparable to playing a game or doing something similarly demanding on the same hardware.

      The bigger issue is image-generation models, which are roughly 1,000 times worse: https://www.technologyreview.com/2023/12/01/1084189/making-an-image-with-generative-ai-uses-as-much-energy-as-charging-your-phone/

      Original Paper: https://arxiv.org/pdf/2311.16863

      A moderately sized text-to-text model that you would run locally comes out to about 10 g of CO2 per 1,000 inferences, which is equivalent to driving a car roughly 1/40th of a mile. Even assuming your model runs in some kind of agentic loop, at maybe 5 inferences per actual response that reaches you (though it could be dozens depending on the architecture), that’s 10 g of CO2 per 200 messages, which I’d guess is at least 2–3 sessions on the heavy end. Use it like that every day for a year and it’s equivalent to driving about 3 miles.
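
      As a rough back-of-envelope sketch of that arithmetic (the per-inference figure, the 5-inferences-per-response assumption, and the driving equivalence are the rough estimates above, not measurements):

      ```python
      # Back-of-envelope CO2 estimate for a locally run text model.
      # All constants are the rough assumptions from the paragraph above.
      G_CO2_PER_1000_INFERENCES = 10.0   # ~10 g CO2 per 1,000 inferences
      MILES_PER_10_G_CO2 = 1 / 40        # ~10 g CO2 ≈ driving 1/40th of a mile
      INFERENCES_PER_RESPONSE = 5        # agentic-loop overhead (could be dozens)
      RESPONSES_PER_SESSION = 80         # "200 messages ≈ 2-3 heavy sessions"

      def yearly_equivalent_miles(sessions_per_day: float = 1.0) -> float:
          """Convert a daily chat habit into an equivalent in miles driven per year."""
          inferences_per_day = sessions_per_day * RESPONSES_PER_SESSION * INFERENCES_PER_RESPONSE
          g_co2_per_day = inferences_per_day / 1000 * G_CO2_PER_1000_INFERENCES
          g_co2_per_year = g_co2_per_day * 365
          return g_co2_per_year / 10.0 * MILES_PER_10_G_CO2

      # Roughly 3-4 miles per year for one heavy session a day, matching the estimate above.
      print(f"{yearly_equivalent_miles():.1f} miles/year")
      ```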

      Image generation, however, is 1,000–1,500x that, so just chatting with your GF isn’t that bad. Generating images is where it really adds up.

      I wouldn’t trust these numbers exactly; they’re more ballpark. There are optimizations they don’t include, and a million other variables that could make it more expensive. Even so, I doubt it would be more than the equivalent of 10–20 miles of driving per year for really heavy usage.

    • Wildmimic@anarchist.nexus · +2 · 5 hours ago

      I run my LLM locally and still have to turn the heating on, because it doesn’t put out enough power. A high-end card is normally rated at about 300 W, and it only runs in short bursts to answer questions. So even if you’re really pushing it, the average draw over time will probably be around 150 W (150 Wh per hour), which is nowhere near enough to heat a room. You would for sure use more power playing a game on Unreal Engine 5.
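
      Rough numbers behind that, as a sketch; the 50% duty cycle and the 1,500 W space-heater figure are typical values I’m assuming for comparison, not measurements:

      ```python
      # Rough heat-output comparison: a bursty local LLM rig vs. a space heater.
      GPU_PEAK_W = 300     # high-end card's rated power
      DUTY_CYCLE = 0.5     # assumed fraction of time actually generating (heavy use)
      HEATER_W = 1500      # typical small space heater (assumed for comparison)

      avg_gpu_w = GPU_PEAK_W * DUTY_CYCLE     # ~150 W average draw
      gpu_wh_per_hour = avg_gpu_w * 1.0       # ~150 Wh each hour, all ending up as heat
      heater_wh_per_hour = HEATER_W * 1.0

      print(f"LLM rig: ~{gpu_wh_per_hour:.0f} Wh/h of heat "
            f"({gpu_wh_per_hour / heater_wh_per_hour:.0%} of a space heater)")
      # -> LLM rig: ~150 Wh/h of heat (10% of a space heater)
      ```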

      Power consumption of LLMs is a lot lower than people think. And running one in a data center will surely be more energy-efficient than my aging AM4 platform.

      • sp3ctr4l@lemmy.dbzer0.com · +1 · 4 hours ago

        I run mine on a Steam Deck.

        Fairly low power draw on that lol.

        Though I’m using it as a coding assistant… not a digital girlfriend.

        … though I have modded my Deck a bit, so… I guess I already know what ‘she’ looks like on the inside, hahaha!

  • Reygle@lemmy.world · +5/−2 · 7 hours ago

    I get such a kick out of “run a local model!” comments.
    I recommend you do not run any models, since they’re all built exclusively using stolen data no matter what hardware they’re running on.