• redcalcium@lemmy.institute · 1 year ago

    So, you’ll have to use the same LLM to decompress the data? For example, if your friend sends you an archive compressed with this LLM, you won’t be able to decompress it without downloading the same LLM?

    • snargledorf@lemm.ee · 1 year ago

      This is not dissimilar to regular compression algorithms. If I compress a folder using the 7zip format (.7z), the end user needs a tool that understands that format to decompress it, since it uses a specific algorithm. (I know Windows 11 is getting 7zip support.)
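      To make the analogy concrete, here's a toy sketch of why model-based compression needs the *same* model on both ends. The "model" here is just a fixed, deterministic ranking of symbols (a hypothetical stand-in, nothing like a real LLM); the archive stores only each character's rank under the shared prediction, so a receiver with a different model decodes garbage:

      ```python
      # Toy sketch: both sides must share the exact same predictive model.
      # A real LLM-based compressor works on the same principle, just with
      # vastly better predictions driving an entropy coder.

      def make_ranking(alphabet):
          # Deterministic "model": a fixed preference order over symbols.
          return {ch: i for i, ch in enumerate(alphabet)}

      def compress(text, ranking):
          # Store only each symbol's rank under the model's prediction.
          return [ranking[ch] for ch in text]

      def decompress(ranks, ranking):
          inverse = {i: ch for ch, i in ranking.items()}
          return "".join(inverse[r] for r in ranks)

      model = make_ranking("etaoin shrdlu")   # shared by sender and receiver
      packed = compress("shoe", model)
      assert decompress(packed, model) == "shoe"

      # A *different* model maps the same ranks to different symbols,
      # so decompression silently produces the wrong text.
      other = make_ranking("uldrhs niaote")
      assert decompress(packed, other) != "shoe"
      ```

      With 7zip the "shared model" is baked into the format spec; with an LLM compressor it's gigabytes of weights, which is where the analogy starts to strain.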

      • redcalcium@lemmy.institute · 1 year ago

        Except LLMs tend to be very big compared to standard decompression programs, and often require a GPU with adequate VRAM to run reasonably fast. This is a very big usability issue IMO. If decompression could be done with a smaller and faster program (maybe also generated by the LLM?), it could be very useful and see pretty wide adoption (e.g. for future game devs who want to reduce their game size from 150GB to 130GB).

        • andruid · 1 year ago

          Training tends to be more compute-intensive, while inference is more likely to be able to run on a smaller hardware footprint.

          The neater idea would be a standard model, or set of models, so that one ~30GB download could cover ~80% of target cases; games and video seem like good candidates for this.
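          The "standard model" idea above could look like an archive header that names a well-known model ID instead of bundling weights, and the decompressor resolves it against models already installed locally. A minimal sketch (all model names, paths, and header fields here are hypothetical, not any real format):

          ```python
          import json

          # Hypothetical registry of standard models, installed once and
          # shared by every archive that references them.
          STANDARD_MODELS = {
              "text-base-v1": "/opt/models/text-base-v1.bin",
              "game-assets-v1": "/opt/models/game-assets-v1.bin",
          }

          def resolve_model(archive_header: str) -> str:
              # The archive header names the model instead of shipping it.
              meta = json.loads(archive_header)
              model_id = meta["model"]
              if model_id not in STANDARD_MODELS:
                  raise ValueError(f"need model {model_id!r} to decompress")
              return STANDARD_MODELS[model_id]

          path = resolve_model('{"model": "game-assets-v1", "payload_len": 123}')
          print(path)  # the locally installed weights to decode against
          ```

          The trade-off is versioning: every archive pins an exact model, so the registry can never change a model's weights without breaking old archives.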

        • falkerie71@sh.itjust.works · 1 year ago

          I don’t know how this would apply to decompression models in actuality, but in general, deep learning is VRAM-intensive only during the training process. That’s because training runs multiple batches of data at once for generalization, and all those batches need to be held in memory.
          But once the model is trained, the end user only feeds in inputs one at a time, so VRAM usually isn’t an issue. There are also lightweight models designed to run on lower-end hardware.
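          A back-of-envelope sketch of the batch-size point, using made-up but plausible transformer dimensions (all numbers here are hypothetical): activation memory scales linearly with batch size, so training at batch 32 needs roughly 32× the activation VRAM of single-input inference on the same model.

          ```python
          # Rough activation-memory estimate: one fp16 activation tensor per
          # layer, shape (batch, seq_len, hidden). Real training also stores
          # optimizer state and gradients, which makes the gap even larger.

          def activation_bytes(batch, seq_len, hidden, layers, bytes_per=2):
              return batch * seq_len * hidden * layers * bytes_per

          train = activation_bytes(batch=32, seq_len=2048, hidden=4096, layers=32)
          infer = activation_bytes(batch=1, seq_len=2048, hidden=4096, layers=32)

          print(f"training ~{train / 2**30:.0f} GiB of activations")   # ~16 GiB
          print(f"inference ~{infer / 2**30:.1f} GiB of activations")  # ~0.5 GiB
          ```

          This is why the same weights that need a datacenter GPU to train can often run inference on a consumer card.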