hey folks, here’s another meta-post. this one isn’t specifically in response to the massive surge of users, but the surge is fortuitously timed, because i’ve been meaning to give you a clear picture of our financial stability. as a reminder, we’re 100% user-funded: everything you donate goes specifically to the website, or to any outside labor we pay to do something for us.

thanks to your generous support, we’re pretty confident we have passed our break-even point for this month: the point past which we won’t need to pay out of our own pockets to keep the site running. we estimate that point at about $26 a month, or $312 a year. (please ignore OC’s estimated yearly budget, we don’t determine it lol)

our expenses are currently:

  • $18/mo toward our host, Digital Ocean. (yesterday we upgraded from DO’s $12 tier to its $18 tier to mitigate traffic issues and lag, and it’s really worked out!)
  • $2/mo for weekly backups
  • $4/mo for daily snapshots of the website, which would let us restore to a point in between the weekly backups if need be.

for a total of at least $26/mo in expenses. this may vary from month to month though, so we’re baking a bit of uncertainty into our estimate.

we currently have, for the month of June:

  • $70/mo in recurring donations (at least for June)
  • $200 this month in one-time donations

for a total of $270 this month. our total balance now stands at $331.31.

that balance means we currently have about a year of reserves (roughly $331 / $26 ≈ 12.7 months) if we received no other donations and had no unexpected expenses.[1] the recurring donations put us well into the green at this point.

this is good! to be clear, everything past our break-even point each month is money we can save and put toward scaling up our infrastructure. there is no downside to donating after we’ve already met our “goal” of basic financial stability. doing so has pretty straightforward practical implications for you: fewer 500s and 503s, better image support (images take a lot of space!), and the website generally running on more than potato hardware.[2] if you’d like to donate in light of this information, our OpenCollective page is this post’s link. thanks folks!


  1. we will have at least one upcoming expense but its size is TBD, and so is how we’ll pay for it ↩︎

  2. especially during times like now, where we’ve likely been getting thousands or tens of thousands of hits an hour ↩︎

  • TheTrueLinuxDev@beehaw.org · 15 points · 1 year ago

    Scaling up can become quickly cost-prohibitive with large-scale servers. I’ve noted that the most affordable option with Digital Ocean, at $12/month, offers only a basic droplet with 1 vCPU, 2GB of RAM, and 50GB of SSD storage. When you consider a higher-end configuration with 16GB of RAM, 8 vCPUs, and 320GB of SSD storage at $96/month, it may not seem economical, especially as storage and backup needs increase with server scaling.

    As an alternative approach to minimize costs while scaling, consider purchasing used servers from platforms like eBay and setting up a small-scale hosting operation in your garage. While this route does introduce overheads like business internet services and electricity costs, along with regular maintenance such as HDD replacements, it could be more cost-effective in the long run.

    For instance, you could acquire a server on eBay for about $300, offering 20 CPU cores, 64GB of RAM, and 8TB of SAS HDD storage. Comparatively, a similar setup on Digital Ocean would cost around $544/month or $6528/year, making the used server a strong competitor against cloud services.

    Just some food for thought if you’re contemplating scaling in the future.

    • nutomicA · 11 points · 1 year ago

      DigitalOcean is pretty expensive. For example, on hetzner.com you can get 8 vCPUs for 15 euros per month. OVH should be similar. So that’s really affordable if you have hundreds or thousands of users and a couple of donors.

    • Gaywallet (they/it)@beehaw.orgM · 10 points · 1 year ago

      While I’d be more than happy to purchase and use actual server space for this kind of thing, I’m not nearly tech-savvy enough to actually run it. I would also worry about bandwidth considerations and other issues. Perhaps there are people willing to contribute to the cause to find affordable ways to run a website like this, but having the server accessible by multiple people for things like power cycling, not having to worry about our personal IPs being attacked, and having access to support are, I think, worth the extra cost.

      There is a point at which something akin to this needs to be done for financial reasons, but I think we’re pretty far from that being a reality, and while it would rely on slightly more donations from our lovely userbase, even $96/mo is not that much money to collect once we have the thousands of users that would require that kind of hardware.

      • TheTrueLinuxDev@beehaw.org · 7 points · 1 year ago (edited)

        I appreciate your thoughtful response to my comment. There are indeed several strategies that can be employed to decrease bandwidth and storage costs. Leveraging a Content Delivery Network, such as the free service provided by Cloudflare, can help mitigate these costs by caching your webpages and images. As for the cost of internet service, it varies greatly based on your location. If you’re located closer to the internet backbone, the likelihood of finding a reasonably priced business internet plan increases.
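
        On the CDN point, it may help to make concrete that an edge cache like Cloudflare can only absorb traffic for responses it is allowed to cache, and standard HTTP caching headers are what grant that permission. Here is a minimal sketch in Python; the paths and port are hypothetical, not anything this site actually runs:

        ```python
        # toy origin server that marks image responses as cacheable, so an
        # edge cache (e.g. Cloudflare) can serve repeats without hitting us
        from http.server import BaseHTTPRequestHandler, HTTPServer

        IMAGE_ROOT = "/srv/images"  # hypothetical path

        class ImageHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                try:
                    # no path sanitization here; sketch only
                    with open(IMAGE_ROOT + self.path, "rb") as f:
                        body = f.read()
                except OSError:
                    self.send_error(404)
                    return
                self.send_response(200)
                self.send_header("Content-Type", "application/octet-stream")
                # "public, max-age=86400" lets intermediary caches keep and
                # reuse this response for a day, cutting origin bandwidth
                self.send_header("Cache-Control", "public, max-age=86400")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        HTTPServer(("", 8080), ImageHandler).serve_forever()
        ```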

        While it may seem premature at this stage, I firmly believe in the success of this website, even in the face of numerous failures in this space.

        Edited to add:

        You’ve correctly highlighted the potential threat of attacks on your server. Cloudflare is known for its prowess in mitigating substantial distributed denial-of-service (DDoS) attacks and could be an excellent security asset in this context.

        Regarding the value of professional support, I acknowledge that the cost can often be justified. My suggestions are merely alternatives, providing you with additional options should you require them.

        • Gaywallet (they/it)@beehaw.orgM · 5 points · 1 year ago

          Very much appreciate the knowledge and responses! We’re definitely in need of systems administrators who have the time and energy to help us reach something sustainable, both financially and technically speaking.

          • PenguinCoder@beehaw.orgM · 7 points · 1 year ago

            Echoing thetruelinuxdev, I’d be willing to help however I can. I also have a personal server sitting mostly idle that I could try to use for this, though that of course depends on you all, as the admins of the instance, trusting some random on the internet over a VPS provider like DO. I do have Linux sysadmin and other technology experience. Giving money is easier, but to me it might not be as impactful as helping with infrastructure.

              • PenguinCoder@beehaw.orgM · 7 points · 1 year ago (edited)

                Just PM for resume 🤣 but seriously, while I know I can’t donate as much time as some of you already do, for what I can do, I’d rather support a community effort like this with my labor than that other site.

              • spaghetti_carbanana@beehaw.org · 5 points · 1 year ago

                I’ll throw my hat in here too. I have capacity on my in-home infrastructure and will also be colocating a couple of servers in a proper datacentre over the coming months. I back up everything to disks with a minimum two-week on-disk retention, then duplicate it onto tapes (don’t laugh, they’re rated for 30 years of cold storage). I keep yearly backups indefinitely, monthlies for 1-2 years, and weeklies for 6 months iirc. Based in Australia if that helps. 10+ years of experience as a sysadmin/cybersec professional. Feel free to DM if I can help the cause, as I’m happy to donate capacity.

          • TheTrueLinuxDev@beehaw.org · 4 points · 1 year ago

            While I may not be able to provide financial assistance, I’m more than willing to lend my expertise in system administration and development, should you find it beneficial.

        • Parsnip8904@beehaw.org · 5 points · 1 year ago

          Using Cloudflare as a proxy would essentially mean letting them MITM all the traffic though, right? All things considered they’re one of the more trustworthy companies, but is there some alternative that you can basically self-host?

          • TheTrueLinuxDev@beehaw.org · 5 points · 1 year ago

            Cloudflare can’t accurately be labeled a man-in-the-middle (MitM), given its integral role in the service stack; by the same logic, platforms like Linode, AWS, and Azure would be MitMs too. Moreover, self-hosting is entirely feasible. The main challenge comes from Internet Service Providers, which often restrict upload speeds unjustifiably. I highlight this to explain why it typically becomes more economical to locate closer to the internet backbone, where plans tend to be more reasonably priced.

            • Parsnip8904@beehaw.org · 4 points · 1 year ago

              Hey person :) I didn’t accuse anyone of anything. Just pointing out that if you use Cloudflare as a proxy specifically, they are technically decrypting your traffic? AWS/Azure/Linode are primarily hosts for webapps and VPSs, not proxy providers, as far as I’m aware.

              • TheTrueLinuxDev@beehaw.org · 0 points · 1 year ago

                It’s more apt to say that when you weigh what Cloudflare offers against what it needs, you’re making a deliberate trade: you forgo some security by handing them your TLS certificate in exchange for a CDN that alleviates server load and helps prevent DDoS attacks. And Linode did allow proxies to be run on their infrastructure when I last contacted them, so they are somewhat a proxy provider, although not directly.

                And I didn’t mean to say you were accusing those providers, only that when you voluntarily give a provider your configuration/certificate, there’s no malice involved that would make it a man-in-the-middle attack; consent was given.

                • Parsnip8904@beehaw.org · 4 points · 1 year ago

                  I agree that you could host your own proxy on any provider if you wanted to, which is nice :)

                  My problem with Cloudflare is that they aren’t that transparent about what they’re doing.

                  What I’ve usually seen is this: people switch to Cloudflare DNS because, frankly, it’s one of the best services available. They see the little cloud next to their A records that says proxying makes your website load faster and think this is great. At no point is there a warning saying that by clicking this you’re essentially letting Cloudflare manage TLS on your website.

                  I do use the Cloudflare proxy because it is pretty neat, but definitely not on all the content I host.

                  I also have to say my concern is not that Cloudflare is going to read my passwords or the info in my databases, but that a) I wouldn’t like to put all my eggs in one basket and b) dedicated state actors like the NSA might have access into Cloudflare.

    • CobolSailor@beehaw.org · 6 points · 1 year ago

      I did that in my garage a few years back to tinker around with Linux servers. Just don’t make the same mistake I did and buy the cheapest SSD on eBay. My friend group still hasn’t forgiven me for letting our Minecraft server get corrupted lol.

      • TheTrueLinuxDev@beehaw.org · 6 points · 1 year ago

        Absolutely, and at a minimum I’d suggest implementing the 3-2-1 backup strategy, widely regarded as the “gold standard” for maintaining data integrity: keep three copies of your data, two stored locally but on different devices, and one off-site (ideally somewhere secure such as a bank safe deposit box, with the hard drives rotated every two weeks).
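
        For illustration, here is a toy sketch of what the copy step could look like; every path and host here is hypothetical:

        ```python
        # 3-2-1 sketch: one archive becomes two local copies on different
        # physical devices plus one off-site copy; paths are placeholders
        import shutil
        import subprocess

        archive = "/backups/world-2023-06-05.tar.gz"

        shutil.copy2(archive, "/mnt/disk-a/")  # local copy, device A
        shutil.copy2(archive, "/mnt/disk-b/")  # local copy, device B

        # off-site copy over SSH; rotate the remote media on your schedule
        subprocess.run(["rsync", "-a", archive, "offsite-box:/backups/"],
                       check=True)
        ```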

        • CobolSailor@beehaw.org · 5 points · 1 year ago

          That’s definitely where I went wrong. The sad part is I did make an effort to back up the MC server weekly. I had a cron job that would bring the Minecraft server down, export the SQL DBs for plugins, copy the world and other config files into a tar.gz, and copy that onto a hard-drive RAID also running on the same server. I knew RAID wasn’t a backup in itself, and running it on the same server wasn’t doing me any favors either. The problem was the archive files were incomplete, but I didn’t realize that until the SSD died and I went to restore from them. I’m still not really sure why they weren’t complete archives. I still have them, I just can’t open them. It’s weird though, when I first set it up they totally worked.
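
          For anyone else bitten by this: a post-write check that reads the whole archive back would have caught the truncation immediately. A minimal sketch in Python; the archive path is made up:

          ```python
          # detect a truncated or corrupt .tar.gz right after creating it,
          # while the live data still exists
          import tarfile

          def archive_is_readable(path: str) -> bool:
              """Read every member end to end; truncation raises midway."""
              try:
                  with tarfile.open(path, "r:gz") as tf:
                      for member in tf.getmembers():
                          f = tf.extractfile(member)
                          if f is None:  # directories, symlinks, etc.
                              continue
                          while f.read(1 << 20):  # drain in 1 MiB chunks
                              pass
                  return True
              except (tarfile.TarError, EOFError, OSError):
                  return False

          if not archive_is_readable("/backups/world.tar.gz"):
              raise SystemExit("archive failed verification; keep the old one!")
          ```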

          • TheTrueLinuxDev@beehaw.org · 6 points · 1 year ago

            Indeed, one challenging aspect of backups is the need to confirm that they are actually restorable, which adds another layer of complexity to system administration. However, a well-crafted verification script, for example one based on hash comparison, can give you real confidence that a restore will work.

            Backups can otherwise give you a deceptive feeling of safety: if you neglect regular verification, you might find that your backup fails just when you need it most.

            • CobolSailor@beehaw.org · 5 points · 1 year ago

              Hashing! Why didn’t I think of that when I created the original backup script haha. Low key might go write a new backup script and incorporate that. I’m running another private mc server for some friends, this time in the cloud cause I didn’t want to deal with another crash. I would like to bring it back to running in my garage sometime.

              • nutomicA · 5 points · 1 year ago

                I really recommend you use Borg Backup instead. It does incremental, encrypted backups, which is really useful, and it generally does its job well.

                • PenguinCoder@beehaw.orgM · 5 points · 1 year ago

                  For what it’s worth, I’ve had issues with data corruption when restoring with Borg, which defeats the purpose of a backup. I’ve used restic happily, and it restores very well.

                    • TheTrueLinuxDev@beehaw.org · 4 points · 1 year ago

                      This underscores why I often advocate for simplicity. It’s preferable to use trusted tools, like LUKS for block-level encryption, and straightforward choices like the ext4 filesystem. This aligns with the mentality of “if it’s not broken, don’t fix it”, which is particularly relevant in a backup context: the fewer moving parts in your backup stack, the greater your confidence that it won’t let you down in a critical moment.

                  • leetnewb@beehaw.org · 2 points · 1 year ago

                    Fwiw, I’ve used restic as well, but there was a discussion a while back about very large backups corrupting. I don’t know what the right answer is for large backups where duplication is prohibitively costly, but for personal use I do a restic/borg-type chunking repo plus a vanilla rsync to a copy-on-write filesystem.

              • TheTrueLinuxDev@beehaw.org · 5 points · 1 year ago

                In your situation, I’d recommend creating a script that performs the following steps sequentially (rough sketch below):

                • Accumulate the hashes (ideally SHA-256 or SHA-512) of all files you plan to back up.
                • Create a text file that lists the file paths and their corresponding hashes.
                • Initiate the backup process.
                • Upon completion of the backup, restore the data to an auxiliary drive.
                • Compare the hashes from the text file against those of the restored data.
                • Report whether the verification succeeded.
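
                Here is a rough Python rendering of those steps; the source/restore paths and the two backup commands are placeholders for whatever tooling you actually use:

                ```python
                # sketch: hash everything, back up, restore elsewhere, re-hash,
                # compare; SOURCE, RESTORE, and both commands are hypothetical
                import hashlib
                import os
                import subprocess

                SOURCE = "/srv/data"      # data being backed up
                RESTORE = "/mnt/verify"   # auxiliary drive to restore onto

                def hash_tree(root: str) -> dict:
                    """Map each file path (relative to root) to its SHA-256."""
                    digests = {}
                    for dirpath, _, filenames in os.walk(root):
                        for name in filenames:
                            path = os.path.join(dirpath, name)
                            h = hashlib.sha256()
                            with open(path, "rb") as f:
                                for chunk in iter(lambda: f.read(1 << 20), b""):
                                    h.update(chunk)
                            digests[os.path.relpath(path, root)] = h.hexdigest()
                    return digests

                expected = hash_tree(SOURCE)
                with open("manifest.txt", "w") as m:   # step 2: paths + hashes
                    for path in sorted(expected):
                        m.write(f"{expected[path]}  {path}\n")

                subprocess.run(["do-backup"], check=True)            # step 3
                subprocess.run(["do-restore", RESTORE], check=True)  # step 4

                actual = hash_tree(RESTORE)  # step 5: re-hash restored copy
                if actual == expected:
                    print("verified: all file hashes match")
                else:
                    bad = [p for p in expected if actual.get(p) != expected[p]]
                    raise SystemExit(f"verification FAILED for {len(bad)} file(s)")
                ```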

                Regarding Borg backup, while it offers robust verification and security mechanisms, I’ve personally found its performance lacking, though your experience might differ. I would recommend thoroughly reviewing its documentation and conducting some trial runs prior to integrating it into a production environment.

                • CobolSailor@beehaw.org · 4 points · 1 year ago

                  Ahh okay. I do enjoy writing programs myself where I can, so I’ll probably still do it myself, but it’s good to have options. Since it’s just the MC server I’m truly worried about, I don’t care too much about performance within reason. I like the idea of a text file with corresponding hashes. Keep it simple! Or as my old shop teacher used to say: KISS, keep it simple, stupid.

      • TheTrueLinuxDev@beehaw.org · 8 points · 1 year ago

        Auto-scaling comes with trade-offs. It can lull users into a false sense of security, leading them to believe that every configuration will seamlessly scale up. However, even with managed databases there are inherent limitations, and architectural changes are still needed to support larger-scale infrastructure. While auto-scaling provides a robust system for managing overhead and reducing the effort involved in scaling, there are potential risks and hidden costs.

        One significant risk is misjudging traffic, or inadvertent scale-up caused by a software bug; this could escalate demand for more compute servers indefinitely, limited only by your budget. In contrast, self-hosting, despite its higher administrative complexity, offers more opportunities to cut the costs associated with cloud usage.

        As with most things, each approach has its pros and cons. I am simply offering alternative ideas for those interested in exploring other options.