I like the Ansible approach, mainly because I have a wide variety of self-hosted services. I’ve used Docker, but the problem is Docker is a pig when it comes to disk space. I end up having a 20-30 GB partition just to manage the packages and dependencies and spend half my time clearing it out (via script, of course) just to rebuild packages. That’s pretty much what pushed me toward Ansible, which I run from my laptop but keep up in Git.
Oh, I want three of these so badly now (my child’s laptop is dead and they kept stealing my computer to play Steam games).
One of my favorite books and inspiration for my own writing. I love the entire series.
Also, it is interesting to see Le Guin’s comments about Ged not being white and how all the movies insisted on making him white. Overall, I got the impression she disliked almost every visual adaptation of the novels (the anime and the miniseries).
“All projects start simple.”
Actually, for trusted teams, I would just have everyone push to master. If you don’t need gatekeeping, then go direct to master. Branches in a Dockerfile repo? It probably doesn’t need them.
Now, autosync. I have wasted so much time in my life on triggers, things that work automatically in the background. On paper, they seem like a great idea, and they usually are, as long as they work perfectly. But the automatic nature also makes it hard when something fails partway through. This includes the fixture setups of NUnit, database triggers, and everything else.
It also depends on how people work. I use task branches because I frequently commit broken code on my local branch because I’m at a stopping point, I need to look at something else, I need to switch branches, or I simply feel that it is time to commit and push. As I said earlier, that happens about every five minutes or so when I’m in the groove.
Having autosync in the middle of that causes problems for me because I have an environment set up correctly and really didn’t need Bob’s database change in the middle of my work, or Bob has trouble because I won’t stop committing changes when I’m in the “knock out all the small items” mode. In those cases, I found it more disruptive when things that “just magically work” happen to magically break code before and after lunch. It gets really bad when someone changes an NPM or NuGet dependency, since those aren’t always picked up automatically when you hit Run.
I like that merge to be explicit because I can say “I’m ready now” and then work on those changes instead of interrupting my train of thought and going through a context switch.
That all said, James White had a wonderful short story about how everyone works differently. It’s Larry Wall’s “there is more than one way to do it.” Not everyone has those limitations or encounters them. I have, so I avoid situations where I know it will be a problem. I also don’t handle context switching that well :P and like to know one set of tools well. Since I live in Git for so much of my life (writing, household, accounting, work), that is my tool of choice. Would I contribute to a project that doesn’t use Git? Well… probably not. Same as I don’t contribute to Go, Ruby, and Python projects. It isn’t for me.
But for others, it may be. I think that’s great. Work with the tools you want and the ones that work the way you want to work.
I’ve just run into so many cases where those major selling points of Fossil caused me problems. I know how I work and how I think, so I’m pretty sure I’d encounter those same things and be just as frustrated. As it happens, Git works like I think, so it fits me better.
I’ll start with the PR review process: it isn’t needed. If you have multiple people with permission to push to master, then you don’t need a PR; they just push. We have two projects on my current team doing exactly that, plus our DBA and a developer frequently work off the same branch and just merge/push code together.
The only problem comes when someone pushes up code that is newer than what you have. Well, Git has protections to say you need to fetch or pull, resolve any conflicts, and then push up. The pull process covers most of the conflicts for you; it’s only when you have both modified the same file that it insists you manually resolve the issue. Of the tools I’ve used, the auto-merging Git does on a pull is pretty solid and gets you 90% of the way. As it happens, we use Beyond Compare, and that gets our team 99% of the way.
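To make that protection concrete, here is a throwaway sketch (a temp-directory bare repo with two hypothetical clones, “alice” and “bob”; none of the names come from the comment above) showing Git rejecting a non-fast-forward push until a pull merges in the other person’s work:

```shell
#!/bin/sh
# Hypothetical demo: Git refuses a push behind the remote until you pull.
set -e
tmp=$(mktemp -d)
G="git -c user.name=demo -c user.email=demo@example.com -c init.defaultBranch=master"

# A shared bare repo, seeded with a base commit from the first clone.
$G init -q --bare "$tmp/origin.git"
$G clone -q "$tmp/origin.git" "$tmp/alice" 2>/dev/null
(cd "$tmp/alice" && $G commit -q --allow-empty -m "base" && $G push -q origin master)

# Bob clones after the base commit exists.
$G clone -q "$tmp/origin.git" "$tmp/bob"

# Alice pushes another change first.
(cd "$tmp/alice" && $G commit -q --allow-empty -m "alice: change" && $G push -q origin master)

# Bob commits without fetching; Git rejects his push as non-fast-forward.
cd "$tmp/bob"
$G commit -q --allow-empty -m "bob: change"
$G push -q origin master 2>/dev/null || echo "push rejected: fetch or pull first"

# A pull (fetch plus auto-merge) brings in Alice's commit; then the push goes through.
$G pull -q --no-rebase --no-edit origin master
$G push -q origin master && echo "push accepted after pull"
```

The pull here auto-merges cleanly; it is only when both sides touched the same lines of the same file that Git stops and asks you to resolve the conflict by hand.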
A more formal PR is good when you don’t have developers at the same level and you want a gatekeeper. This is where small teams of highly-skilled people have more trouble, in my experience, because they grow and improve by themselves but that also makes it much harder to bring someone new in. The new person has to be almost as good to mesh. Using the PR process allows more of a gradual breaking in of developers (“good job but… don’t reformat everything”). But, as I said, you don’t have to have it and it can be introduced later (we did manual reviews over Webex for years before we decided to implement a PR process).
Their example of “when did this task come in” is a matter of philosophy. Git allows merge commits and squashes. A merge basically grafts the entire development process into the main branch but still lets you see the history until the end of time. A squash takes all of that task work, squishes it into a single commit, and inserts just that into the main tree. I’ve seen teams do both. Our team does squash commits because I do a local commit about every five minutes; that’s the way I work. Other team members write a feature entirely before doing a single commit.
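The difference between the two is easy to see in a scratch repo. This is a hypothetical sketch (temp directory, made-up branch and file names), not anything from our actual projects:

```shell
#!/bin/sh
# Hypothetical demo: a merge keeps every task commit; a squash collapses them into one.
set -e
tmp=$(mktemp -d)
G="git -c user.name=demo -c user.email=demo@example.com -c init.defaultBranch=master"
$G init -q "$tmp/repo"
cd "$tmp/repo"
echo start > work.txt && $G add work.txt && $G commit -q -m "initial"

# Three small commits on a task branch (the every-five-minutes style).
$G checkout -q -b task-3
for n in 1 2 3; do
  echo "step $n" >> work.txt && $G add work.txt && $G commit -q -m "task #3: step $n"
done

# Option A: a merge commit grafts the whole task history into the tree.
$G checkout -q -b via-merge master
$G merge -q --no-ff --no-edit task-3
echo "via-merge: $($G rev-list --count via-merge) commits"   # initial + 3 steps + merge

# Option B: a squash collapses the same work into a single commit on master.
$G checkout -q master
$G merge -q --squash task-3
$G commit -q -m "#3 - Fix this issue"
echo "master: $($G rev-list --count master) commits"         # initial + 1 squashed commit
```

Same work either way; the only question is whether the five-minute commits survive into the main branch’s history or get folded into one.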
You can also track task branches by commenting on the item. We use the “#3 - Fix this issue” a lot which automatically links up to the work item/issue/task. It can be done in Gitlab, Github, Jira, and Azure without a problem. It also gives you that linkage of a specific item to a commit in history.
As for smaller OSS projects, I think the PR process is actually more important. It lets someone contribute, have the more knowledgeable people comment or make suggestions, and walk them through getting their changes into the project, passing along the knowledge the highly skilled people have from history but that no new person would be able to guess.
I’ll use my dad for an example. He was on a small team of four scientists working on a control system for various experiments. All four of them were very skilled and capable. However, every time they tried to bring on a fifth, it didn’t work because the new person didn’t have that institutional knowledge and kept breaking things that the other four instinctively knew from the ten years they had worked on it. The fact they didn’t use unit tests didn’t help, but there is a difference when your control system causes a 4 GW capacitor to blow at the wrong time so… they just decided not to include others because it was easier. In this case, having a PR process would have been better because then the other four could have done the “what about…?” or the “oh, you probably didn’t know, but putting this across the line causes the magnet control system on the accelerator ring to over-correct and then it loses the photon.” (using a real example here.)
Going direct to main (no PR) also requires a high level of trust. We already have examples of people who have put Bitcoin miners or malicious code into critical packages. In theory, a PR process gives one more barrier to prevent the “you say this is to fix tabs, but you seem to have an entire Ethereum library included too…” In small teams, you can have that trust, but it adds another barrier to overcome when bringing a new person on.
I think these scale down to even smaller packages. I’ve had PRs that didn’t fit my naming conventions or style guide (which increases maintenance cost if not managed), weren’t aware of side conditions, or would prevent a future plan. If there hadn’t been a PR process, it would have made more work for me, as a maintainer, to either clean it up or work around it.
Finally (sorry, essay answer), I work on four different machines and one is rarely connected to the Internet. Having the push/pull infrastructure has significantly helped me not lose work. And I use Git… a lot, given that I have a few hundred repositories these days (every novel and short story is a Git repo, my accounting, my OSS projects, etc.).
A lot of the selling points of Fossil were the reasons I didn’t go with Mercurial.
I do agree that seeing every branch at once in a single repository would be nice online. That is something that GitExtensions or GitAhead does much better than Github or Gitlab.
As far as the small-team/cathedral development goes, I consider those non-desirable features since I’m fond of faster releases and the potential of bringing someone else online. But Git doesn’t really prevent small teams; that’s what access control is for, and many projects do cathedral-style development using Git.
I don’t really see myself switching, but it’s always cool to see what others are doing.
Lately, that’s been my favorite game and I haven’t even gotten to the Update 4 content yet (mainly because I don’t know if my preferred mods work on Linux).
The game won’t have naughty bits, but the fan art will. Rule 34 is always there.
Oh, I’d love it if GOG (and Itch) shipped AppImages in general. I’ve gotten really appreciative of them in recent months and have a lot of fun with Satisfactory Mod Manager, which ships them.
GOG’s installer story on Linux is a bit poor in my experience. Or at least it is on my laptop. I’ve been playing with Lutris a bit, but mostly I just want a single system to install and run games. I have a general philosophy of not using the market leader (shop local, shop small), so I always buy on Itch or GOG when possible.
Well, that seems less than optimal.
There are a couple of fantasy books that always pull me in and seem to resist the erosion of time. Recently I read Villains by Necessity by Eve Forward after not touching it for about fifteen years and the story still sucked me in, mainly because it’s about balance in the world and how even so-called evil characters are fully capable of being heroes, just for different reasons. Simon Green’s Guards of Haven (and at least the first book related to that series, Blue Moon Rising) has also held up pretty well over the years.
A lot of what I think makes a book survive time is that my favorite books deal with general beats of life (invasion, fear, the striving for perfection at the exclusion of all else) as opposed to gimmicks, twists, and reveals. You can only be surprised one time when Senator Palpatine ends up being a Sith, but it is easier to keep an empathic feel for the fight against a sense of self or for saving one’s loved one.
Books can have both. In the above example, Villains by Necessity has a twist. I remember the twist, even after not reading it for fifteen years. Even as I went through the book, in the back of my head, I was trying to anticipate it, and that sense of wonder will never come back. But the book was grand because the other foundations were solid enough that even though I knew what was happening, I came back for the struggle.
The other thing is perfection. I dislike when a novel is basically “we’re awesome, now we’re going over there to be more awesome, and then we’ll be awesome again.” I feel that about the protagonist in Pyromancer by Don Callander. (And the Exalted RPG, but that was the point.) I want to see failures, idiocy, and decisions that look brain-dead in the reader’s eye but make sense in the character’s. Those stories pull me in a lot more and keep me over time.