• breadsmasher@lemmy.world · 36 points · 8 hours ago

    The source didn’t have this detail: Google training Gemini in the “cloud” vs. on “own hardware”. Does Google Cloud not count as “own hardware” for Google?

    • General_Effort@lemmy.world · 4 points · 2 hours ago

      From the source:

      Our primary approach calculates training costs based on hardware depreciation and energy consumption over the duration of model training. Hardware costs include AI accelerator chips (GPUs or TPUs), servers, and interconnection hardware. We use either disclosures from the developer or credible third-party reporting to identify or estimate the hardware type and quantity and training run duration for a given model. We also estimate the energy consumption of the hardware during the final training run of each model.

      As an alternative approach, we also calculate the cost to train these models in the cloud using rented hardware. This method is very simple to calculate because cloud providers charge a flat rate per chip-hour, and energy and interconnection costs are factored into the prices. However, it overestimates the cost of many frontier models, which are often trained on hardware owned by the developer rather than on rented cloud hardware.

      https://epochai.org/blog/how-much-does-it-cost-to-train-frontier-ai-models
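      The two approaches quoted above can be sketched roughly as follows. This is an illustrative back-of-the-envelope sketch with made-up numbers (chip price, lifetime, power draw, rental rate, overhead factor), not Epoch AI's actual figures or code:

      ```python
      def owned_hardware_cost(chip_price_usd, n_chips, training_hours,
                              chip_lifetime_hours, power_kw_per_chip,
                              energy_usd_per_kwh, overhead_factor=1.5):
          """Primary approach: depreciate purchased hardware over the share of
          its lifetime the training run consumes, then add energy costs.
          overhead_factor stands in for servers and interconnect on top of the
          accelerators themselves (an assumed value, not from the source)."""
          depreciation = (chip_price_usd * overhead_factor * n_chips
                          * training_hours / chip_lifetime_hours)
          energy = n_chips * power_kw_per_chip * training_hours * energy_usd_per_kwh
          return depreciation + energy

      def cloud_rental_cost(n_chips, training_hours, usd_per_chip_hour):
          """Alternative approach: a flat rental rate per chip-hour; energy and
          interconnect are already baked into the cloud price."""
          return n_chips * training_hours * usd_per_chip_hour

      # Example: 10,000 accelerators for a 90-day run (illustrative values).
      hours = 90 * 24
      owned = owned_hardware_cost(chip_price_usd=30_000, n_chips=10_000,
                                  training_hours=hours,
                                  chip_lifetime_hours=35_000,
                                  power_kw_per_chip=0.7,
                                  energy_usd_per_kwh=0.10)
      rented = cloud_rental_cost(n_chips=10_000, training_hours=hours,
                                 usd_per_chip_hour=4.00)
      print(f"owned:  ${owned / 1e6:.1f}M")   # depreciation + energy
      print(f"rented: ${rented / 1e6:.1f}M")  # flat chip-hour rate
      ```

      With these toy numbers the rented figure comes out several times higher than the owned one, which matches the source's point that the cloud method overestimates costs for developers training on their own hardware.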

    • bjorney@lemmy.ca · 18 points · 7 hours ago

      Does Google Cloud not count as “own hardware” for google?

      That’s why the bars are so different: the “cloud” price is MSRP, while the “own hardware” estimate reflects depreciation and energy costs.