• toothbrush@lemmy.blahaj.zone
    3 days ago

    One of those rare lucid moments for the stock market? Is this the market correction everyone knew was coming, or will some famous techbro technobabble some more about AI overlords until stocks return to their fantasy values?

    • themoonisacheese@sh.itjust.works
      3 days ago

      It’s quite lucid. The new thing uses a fraction of the compute of the old thing for the same results, so Nvidia cards, for example, are going to be in far less demand. That said, Nvidia stock had been riding the AI hype way too high for the last two years or so, and despite the plunge it’s not even back to normal.

      • jacksilver@lemmy.world
        3 days ago

        My understanding is it’s just an LLM (not multimodal), and the training time/cost looks about the same for most of these.

        I feel like the world’s gone crazy, but OpenAI (and others) are pursuing more complex, multimodal model designs. Those are going to be more expensive due to image/video/audio processing. Unless I’m missing something, that would probably account for the cost difference between current and previous iterations.

        • will_a113@lemmy.ml
          3 days ago

          The thing is that R1 is being compared to GPT-4, or in some cases GPT-4o. That model cost OpenAI something like $80M to train, so roughly equivalent performance for an order of magnitude less cost is not nothing. DeepSeek also says the model is much cheaper to run for inference, though I can’t find any figures on that.
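
A quick sanity check on that ratio, using the ~$80M estimate cited above and DeepSeek's own widely reported ~$5.6M training figure for the V3 base model (both are claims, not audited numbers):

```python
# Back-of-envelope training-cost ratio. Both figures are claims,
# not audited: ~$80M is the in-thread estimate for GPT-4's training
# run; ~$5.6M is DeepSeek's reported cost for the V3 base model.
gpt4_cost_usd = 80_000_000
r1_base_cost_usd = 5_600_000

ratio = gpt4_cost_usd / r1_base_cost_usd
print(f"GPT-4 / R1 base training cost: about {ratio:.0f}x")
```

About 14x, which squares with the "order of magnitude less" framing.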

          • jacksilver@lemmy.world
            3 days ago

            My main point is that GPT-4o and the other models it’s being compared to are multimodal, while R1 is only an LLM from what I can find.

            Something trained on audio/pictures/videos/text is probably going to cost more than just text.

            But maybe I’m missing something.

            • will_a113@lemmy.ml
              3 days ago

              The original gpt4 is just an LLM though, not multimodal, and the training cost for that is still estimated to be over 10x R1’s if you believe the numbers. I think where R 1 is compared to 4o is in so-called reasoning, where you can see the chain of though or internal prompt paths that the model uses to (expensively) produce an output.
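
That visible chain of thought is literally part of R1's output: the model emits its reasoning between `<think>` tags before the final answer. A minimal sketch of splitting the two (the sample string is invented for illustration):

```python
# R1-style output wraps the reasoning trace in <think> tags ahead of
# the final answer. The sample string below is invented for illustration.
raw = "<think>User asks 2+2; basic arithmetic; answer is 4.</think>2 + 2 = 4."

reasoning, _, answer = raw.partition("</think>")
reasoning = reasoning.removeprefix("<think>")

print("reasoning:", reasoning)  # the (expensive) visible thought trace
print("answer:", answer)        # what gets shown as the reply
```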

              • jacksilver@lemmy.world
                3 days ago

                I’m not sure how good a source it is, but Wikipedia says it was multimodal and came out about two years ago: https://en.m.wikipedia.org/wiki/GPT-4.

                That said, the comparisons pit the LLM benchmarks against GPT-4o, so that may be a valid argument for the LLM capabilities.

                However, I think a lot of the more recent models are pursuing architectures that can act on their own, like Claude’s computer use (https://docs.anthropic.com/en/docs/build-with-claude/computer-use), which DeepSeek R1 is not attempting.

                Edit: and I think the real money will be in the more complex models focused on workflow automation.

              • veroxii@aussie.zone
                2 days ago

                Holy smoke balls. I wonder what else they have ready to release over the next few weeks. They might have a whole suite of things just waiting to be strategically deployed.

          • Zaktor@sopuli.xyz
            3 days ago

            And the training data is not available. Knowing the weights of a model doesn’t really tell us much about its training costs.

      • davel [he/him]@lemmy.ml
        3 days ago

        If AI is cheaper, then we may use even more of it, and that would soak up at least some of the slack, though I have no idea how much.