Referring more to smaller places like my own: a few hundred employees, with a ~20-person IT team (~10 developers).

I’ve read enough about testing that it seems to be industry standard. But whenever I talk to coworkers and my EM, it’s generally, “That would be nice, but it’s not practical for our size, and the business wouldn’t allow us to slow down for that.” We have ~5 manual testers, so things aren’t considered “untested,” but issues still frequently slip through. It’s insurance software, so at least bugs aren’t killing people, but our quality still freaks me out a bit.

I try to write automated tests for my own code, since it seems valuable, but I avoid it whenever it’s not straightforward. I’ve read books on testing, but they generally feel like either toy examples or far more effort than my company would be willing to spend. Over time I’ve started to wonder if I’m just overly idealistic, and automated testing is more of a FAANG / bigger-company thing.

  • cbarrick@lemmy.world · 40 points · 6 months ago

    Wow 😲

    It’s not that hard to set up GitHub or GitLab to make sure all the unit tests run for each PR.

    If you use something else for version control, check if they offer a similar CI feature. If not, set up Jenkins.
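
    For example, a single workflow file is usually enough to run the tests on every PR. A minimal GitHub Actions sketch (this assumes a Maven project; swap in your build tool):

    ```yaml
    # .github/workflows/tests.yml
    name: tests
    on: [pull_request]
    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-java@v4
            with:
              distribution: temurin
              java-version: '17'
          - run: mvn --batch-mode test
    ```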

    I’m an SRE at a big tech company, so part of my job is to make sure CI infrastructure is readily available to our Dev partners. But I’ve worked at smaller companies before (10 or fewer SWEs) and even they had a Jenkins instance.

    This is a bright red flag to me. If I worked for a company that didn’t have CI, the first thing I would do is set it up. If I wasn’t allowed to take the time required to do that, I would quit…

    • yournameplease@programming.dev (OP) · 10 points · 6 months ago

      We do have CI (Azure DevOps), we aren’t that insane. Though to be fair, it’s relatively recent. The legacy app has a build pipeline but no tests. We got automated deployments to lower environments set up about a year back.

      My main project has build pipelines as well: Spring Boot “microservices” (probably a red flag given our size and infrastructure) with code coverage around 40-60%, mostly unit tests. But I’m the only dev that really writes tests these days. No deployment pipelines there though, as the SysAdmin is against it (and only reluctantly allowed it for the legacy app).

      • cbarrick@lemmy.world · 15 points · edited · 6 months ago

        Ok. So if you have the infra already, it’s really just a matter of actually writing the tests. That can be done incrementally.

        40%-60% unit test coverage is honestly not too bad. But if the company’s bottom line rests on this code, you probably want to get that up. Going all the way to 100% isn’t really worth it for application code, though it definitely is for library code.

        One policy where I work is that all commits must be reviewed before being merged. A great way to improve coverage is to be that guy when people send you PRs.

        • yournameplease@programming.dev (OP) · 6 points · 6 months ago

          Ehh to be fair, none of the code with coverage is in use by anyone. It’s a constantly delayed project that I kind of doubt will last more than a few months in production if it ever gets there. The primary app has no tests, and the structure probably would require dedicated effort to make testable. Most logic goes through this sort of “god object” that couples huge models very tightly with the database. It’s probably something that can be worked around in a week or so, but I never spend much time on that project.

          I’m not sure if I want to be that guy though, slowing everyone down when the scrum master and managers are already constantly complaining about everything going over estimates. (Even if poor testing is part of the problem…) I could maybe get a couple devs to buy in on requiring tests on new code; definitely not QA or my EM though. Last time I tried to grandstand over testing, I got “XYZ needs this ready now, I’ll create a story for next sprint to write tests.” That was 4+ sprints ago, and it’s still sitting there. I just don’t really know how to advocate for this without looking like an annoying asshole, after trying for so long.

          • ericjmorey@programming.dev · 5 points · 6 months ago

            scrum master and managers are already constantly complaining about everything going over estimates

            This is a bigger problem than tests.

            I just don’t really know how to advocate for this without looking like an annoying asshole, after trying for so long.

            You’re presenting a solution for a problem that the team either does not see as important or doesn’t think exists at all.

            You need to demonstrate the value the solution can bring to them on their terms.

            • yournameplease@programming.dev (OP) · 1 point · 6 months ago

              This is a bigger problem than tests.

              You mean things going over estimates or SM/EM complaining about it?

              You’re presenting a solution for a problem that the team either does not see as important or doesn’t think exists at all.

              Definitely it’s a known issue, and I think people think it’s semi-important. Feels like every other standup has a spiel from the EM about “we need to test things, stop breaking things, etc.”.

              Whenever I argue on their terms though, I quickly “lose”, because business terms seem to be, “agree to everything from the business, look busy, and we will have time for IT concerns (i.e., testing) when we are done with business projects for the year (i.e., never).”

              If I want any meaningful change, I think it will need to be something I work around management on.

              • ericjmorey@programming.dev · 2 points · 6 months ago

                You mean things going over estimates or SM/EM complaining about it?

                The combination is bad.

                Whenever I argue … , I quickly “lose”,

                If you see it as an argument, you’re not going to make headway. I would also question your assumption that you are correct about what their terms are. By this, I mean are you sure you understand what they value and prioritize? People often say that something is important, but show that something else is even more important.

                If I want any meaningful change, I think it will need to be be something I work around management on.

                It may need to start that way, but getting the team to buy in will take building trust. Which might be eroded due to the consistent failure to meet estimates.

                • yournameplease@programming.dev (OP) · 1 point · 6 months ago

                  The combination is bad.

                  I’m not really sure what there is to do about that, then. My own project is already about to hit 3 years on something that was intended to be <1 year total, due to constant scope creep. Nothing bad ever seems to come of the delays though, so I tend to ignore most of the complaints.

                  If you see it as an argument

                  I don’t really see it as that. “Discussion” is more what I try to do. But you are correct that I don’t think I can argue on their terms.

                  are you sure you understand what they value and prioritize

                  Probably not exactly, but my point is that the priorities technical leadership says we value (quality, scalability, fast iterations) run counter to what we actually prioritize. I often ask why we prioritize Project X over Project Y, and the answer is almost always a variation of:

                  • “We can’t let IT be the reason Project X is late.”
                  • “The business thinks we’ve been working on Project X a long time (often not true) so we need to show progress.”
                  • “Project X was promised for Release Z so it needs to get done over anything else.”

                  Which is why I said our priorities are more about appearing busy and important than anything else. (My own project isn’t even wanted by most business users. It was spearheaded by the VP of IT as a huge technical modernization effort despite doing almost nothing to improve or get away from the legacy system it is “replacing”.) So I think the reason I have such trouble getting buy-in is that better testing runs counter to IT’s true priorities, even if it provides business value.

                  [Trust] might be eroded down due to the consistent failure to meet estimates.

                  Perhaps. But trust is already pretty darn low for that very reason.

  • Pumpkin Escobar@lemmy.world · 20 points · 6 months ago

    Automated testing is often more cost effective than manual testing. Not to say 100% automated testing is a reasonable goal. But I’ve never worked anywhere without some automated testing (unit, integration or end-to-end).

  • HubertManne@kbin.social · 15 points · 6 months ago

    Automate everything is the standard practice. You can’t get a pull request in at my company without automated code review, including unit tests and Selenium-style practical tests, plus two human reviewers.

  • BehindTheBarrier@programming.dev · 15 points · edited · 6 months ago

    I’m on a similarly sized team, and we have put more effort into automated testing lately. We got an experienced person in on the team who knows his shit and is engaged in improving our testing. But it’s definitely worth it. Manual testing tests the code now; automated testing checks the code later. That’s very important, because when 5 people test things, they aren’t going to test everything every time, on top of all the new stuff. It’s too boring.

    So yes, you really REALLY should have automated testing. If you have 20 people, I’d guess you’re developing something that is too large for a single person to have in-depth knowledge of all parts.

    Any team should have automated tests. More specifically, you should have/write tests that test “business functionality,” not that your function does exactly what it is supposed to do. Our test expert made a test for something that said “ThisComponentsDisplayValueShouldBeZeroWhenUndefined.” (Here the component is something the users see and always expect to have a value; there are other components that might not show a value.)
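
    Sketched as a unit test (JUnit flavor for illustration; the component type and values are made up), that idea looks like:

    ```java
    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class DisplayComponentTest {
        // Named after the business rule users rely on, not the implementation
        @Test
        void displayValueShouldBeZeroWhenUndefined() {
            DisplayComponent component = new DisplayComponent(null); // no value set
            assertEquals("0", component.displayValue());
        }
    }
    ```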

    Then, when I had to touch the data processing because another “component” did not show zero in an edge case, I fixed the edge case but also broke the test for the first component. It was immediately clear to me that I had broken something that used to work. A manual tester might have noticed, but these were separate components, and they might still have seen 0 on the thing that broke, because its value happened to be 0. Or they simply might not have known that was a requirement!

    We just recently started requiring unit tests to be green before merging features. It brings a lot more comfort, especially since you can put more trust in changes to systems that deal with calculations when you know tests check that the results are unchanged.

    • yournameplease@programming.dev (OP) · 3 points · 6 months ago

      Was there any event that prompted more investment in testing? I feel like something catastrophic would need to happen before anyone here would consider serious testing investment. In the past (before I joined) there were apparently people who tried to get Selenium suites going, but nothing ever stuck.

      I think nobody sees value in improving something that has been more or less “good enough” for so long. In our legacy software, most major development is copy+paste-and-change, which I guess reduces the chance of regressions (at the cost of making big changes much, much slower). I think we have close to 100 4k-line Java files copied from the same original, plus another 20-30 scripts and configs for each…

      We are doing a “microservices rewrite” that interfaces with the legacy app (which feels like a death march project by now), and I think it inherited many of the testing difficulties of the old system, in part due to my inexperience when we started. Less code duplication, but now lots of enormous JSONs being thrown all over the network.

      I agree that manual testing is not enough, but I can’t seem to get much agreement. I think I do get value when I write unit tests, but I feel like I can’t point to concrete value because there’s no obvious metric I’m improving. I like that when I test code, I know that nobody will revert or break that area (unless they remove the tests, I suppose), but our coverage is low enough that I don’t trust it to mean the system actually works.

      • BehindTheBarrier@programming.dev · 6 points · edited · 6 months ago

        Our main motivator was, and is, that manual testing is very time consuming and uninteresting for devs. Spending upwards of a week before a release, because the team has to set up, pick, and perform all the feature tests again on the release candidate, costs both time and money. And we saw things slip through now and then.

        Our application is time critical, with about 30 years of legacy code spread between C# and database code, running in different variations with different requirements. So a single page may display differently depending on where it’s running. Changing one thing can often affect others, so it is sometimes very tiresome for us to verify even the smallest changes, since they may affect different variants. Since there are no automated tests, especially for the GUI (which we also do not unit test much, because that is complicated and prone to breaking), we have to not only test changes, but often check for regressions by comparing by hand to the old version.

        We have a complicated system with a few integrations, so setting up all the test scenarios not only takes time during testing, but also takes the dev time to prepare the instructions for. And I mentioned calculations: going through all the motions to verify that a calculated result is the same between two versions is an awfully boring experience, when that is exactly something automated tests can completely take over for you.

        As our application is projected to grow, so does all of the manual testing required for a single change. So the effort put into manual testing and preparation can instead often be put into making tests that check requirements. And once our coverage is good enough, we can manually test only the interfaces, and leave a lot of the complicated edge cases and calculation tests to automated tests. It’s a bit idealistic to say automated tests can do everything, but they can certainly remove the most boring parts.

        • yournameplease@programming.dev (OP) · 1 point · 6 months ago

          I guess since we have manual QAs, there’s less motivation to get away from manual testing, as it’s literally their job description. Not that we aren’t still wasting time and money: I find that the other devs and I still need to spend a lot of time manually sanity-checking things ourselves.

          That all does sound like my dream end goal, though, thanks for the responses.

  • apotheotic (she/her)@beehaw.org · 14 points · 6 months ago

    My team follows test driven development, so I write a test before writing the feature that the test, well, tests.
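
    As a made-up JUnit sketch, the test exists before the class it describes:

    ```java
    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.math.BigDecimal;
    import org.junit.jupiter.api.Test;

    class QuoteTest {
        // Written first: Quote doesn't exist yet, so this fails (red).
        // Then I implement just enough to make it pass (green), then refactor.
        @Test
        void declinedQuotesHaveZeroPremium() {
            Quote quote = new Quote(Quote.Status.DECLINED);
            assertEquals(BigDecimal.ZERO, quote.premium());
        }
    }
    ```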

    This leads to cleaner code in general, because it tends to be the case that easy-to-test code is also easy to read.

    On top of this fact, the test suite acts as a sort of “contract” for the code behaviour. If I tweak the code and a test no longer works, then my code is doing something fundamentally different. This “contract” ensures that changes to one codebase aren’t going to break downstream applications, and makes us very aware of when we are making breaking changes so we can inform downstream teams.

    Writing tests and having them run at PR time (or before deployment to production, if you’re not using some sort of VCS and CI/CD) should absolutely be a part of your dev cycle. It’s better for everyone involved!

      • OhNoMoreLemmy · 7 points · 6 months ago

        Yeah, debugging tests is an important part of test driven development.

        You also have to be careful. Some tests are for me to debug my code and aren’t part of the ‘contract’.

        But on the other hand, it’s really nice. If I spend a couple of hours debugging actual code and come out of the process with internal tests, then the next time it breaks, the new tests make it much easier to identify what broke. Previously, that would have been almost wasted effort: you fix it and just hope it never breaks again.

      • apotheotic (she/her)@beehaw.org · 5 points · 6 months ago

        Yeah, but it isn’t usually very difficult to write a test correctly, unit tests especially.

        If you can’t write a test to validate the behaviour that you know your application needs to exhibit, then you probably can’t write the code to create that behaviour in the first place. Or, in a less binary sense, if you would write a test which isn’t “right”, you’re probably just as likely to have written code that isn’t “right”.

        At least in the case with the test, you write the test and the code, and when the test fails (or, doesn’t fail when it should) you’re tipped off to something being funky.

        I’m sure you could end up writing a test that’s bad in just the right way to end up doing more harm than good, but I do think that’s the exception (heh).

        • yournameplease@programming.dev (OP) · 3 points · 6 months ago

          We’ve definitely written lots of tests that felt like a net negative, and I think that’s part of what burned some devs out on testing. When I joined, the few tests we had were “read a huge JSON file, run it through everything, and assert seemingly random numbers match.” Not actually random, but the logic was so complex that the only sane way to update the tests when the code changed was to rerun and copy the new output. (I suppose this is pretty similar to approval testing, which I do find useful for code areas that shouldn’t change frequently.)

          Similar issue with integration tests mocking huge network requests. Either you assert that the request body matches an expected one, and need to update that whenever the signature changes (fairly common), or you ignore the body, but that feels like a much less useful test.

          I agree unit tests are hard to mess up, which is why I mostly gravitate to them. And TDD is fun when I actually do it properly.

          • apotheotic (she/her)@beehaw.org · 2 points · 6 months ago

            I hear you. When you’re trying to write one big test that verifies the whole code flow or whatever, it can be HELL, especially if the code has been written in a way that makes it difficult to write a robust test.

            God, big mocks are the WORST. It might not be applicable in your case, but I far prefer doing some setup and teardown so that I’m actually making the network request, against some test endpoint that I set up in the setup stage. That way you know the issues aren’t cropping up due to some mocking nonsense going wrong.
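
            A rough sketch of that shape, using the JDK’s built-in HttpServer so a real request crosses a real socket (all the names here are made up):

            ```java
            import static org.junit.jupiter.api.Assertions.assertEquals;

            import com.sun.net.httpserver.HttpServer;
            import java.net.InetSocketAddress;
            import org.junit.jupiter.api.AfterEach;
            import org.junit.jupiter.api.BeforeEach;
            import org.junit.jupiter.api.Test;

            class QuoteClientTest {
                private HttpServer server;
                private String baseUrl;

                @BeforeEach
                void startTestEndpoint() throws Exception {
                    server = HttpServer.create(new InetSocketAddress(0), 0); // port 0 = any free port
                    server.createContext("/quote", exchange -> {
                        byte[] body = "{\"premium\":100}".getBytes();
                        exchange.sendResponseHeaders(200, body.length);
                        exchange.getResponseBody().write(body);
                        exchange.close();
                    });
                    server.start();
                    baseUrl = "http://localhost:" + server.getAddress().getPort();
                }

                @AfterEach
                void stopTestEndpoint() {
                    server.stop(0);
                }

                @Test
                void parsesPremiumFromRealHttpResponse() {
                    // QuoteClient stands in for whatever code makes the actual request
                    assertEquals(100, new QuoteClient(baseUrl).fetchPremium());
                }
            }
            ```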

            Asserting that some arbitrary numbers match can be quite fragile, as I’m sure you’ve experienced. But if the code itself had been written in such a way that you had an easier assertion to make, well, winner!

            It’s all easier said than done, of course, but your colleagues having given up on testing because they’re bad at it is kinda disheartening, I bet. How are you gonna get good at it if you don’t do it! :D

            • yournameplease@programming.dev (OP) · 1 point · 6 months ago

              especially if the code has been written in a way that makes it difficult to write a robust test.

              I definitely deserve a lot of the blame for designing my primary project in a way that makes it hard to test. So, word to the wise (though it doesn’t take a genius to figure this out): don’t tell two fresh grads and a 1-YoE junior to “break the legacy app into microservices” with minimal oversight. If I did things again, I still think the only sane decision would be to cancel the project as soon as possible. x.x

              I actually was using a mock webserver with the expected request/response, which sounds like what you’re getting at. It still felt fiddly though, and it doesn’t solve the huge-mock-data problem, which is more an architectural design failing.

              I’ve mostly gotten away from testing huge methods with seemingly arbitrary numbers, in favor of testing small methods with slightly less arbitrary numbers, which feels like a pretty big improvement.

              How are you gonna get good at it if you don’t do it! :D

              True. :)

              • apotheotic (she/her)@beehaw.org · 2 points · 6 months ago

                Hahahahaha I feel that re: just kill the project!

                Ah I thought you were just mocking the response, as opposed to having some real webserver so you don’t have to faff with mocking stuff. Sounds like you did what I would have :P

                That does sound like a big improvement! Anything you can do to make your own job easier

        • hollyberries@programming.dev · 3 points · 6 months ago

          I’m sure you could end up writing a test that’s bad in just the right way to end up doing more harm than good, but I do think that’s the exception (heh).

          That’s exactly why I asked. That is where I’ve gone wrong with TDD in the past, especially where any sort of math is involved, due to being absolutely horrible at it (and I do game dev these days!). I can problem-solve and write the code, I just can’t manually prove the math without external help, and I’ve spent countless hours looking for where my issue was while being 100% certain that the formula or algorithm was correct >.<

          Nowadays, anytime numbers are involved, I write the tests after doing manual tests multiple times and getting the expected response, and/or having an LLM check the work and make suggestions. That in itself sometimes introduces more issues, since the LLM can also be wrong. Probably should have paid attention in school all those years ago lol

          • yournameplease@programming.dev (OP) · 3 points · 6 months ago

            Game dev seems like a place where testing is a bit less common due to the need for fast iterations and prototyping, not to say it isn’t valuable.

            I’ve seen a good talk (I think GDC?) on how the Talos Principle devs developed a tool to replay inputs for acceptance testing. I can’t seem to find the talk, but here is a demo of the tool.

            The Factorio devs also have some testing discussions in their blog somewhere.

            • hollyberries@programming.dev · 1 point · 6 months ago

              The Talos Principle video was interesting to watch, thanks for the link! It shed a little bit of light on automated testing.

              There’s also someone on YouTube who has been teaching an AI how to walk and solve puzzles on its own; the channel name escapes me and I’m nowhere near a working computer to look it up at the moment :(

          • apotheotic (she/her)@beehaw.org · 3 points · 6 months ago

            Aw man, I can empathise. I don’t personally have any issues with mathsy stuff, but I can imagine it being a huge brick wall at times, especially in game dev. I wish I had advice for that, but it’s not a problem I’ve had to solve!

      • Ephera · 2 points · edited · 6 months ago

        You should think of an automated test as a specification. If you’ve got the wrong requirements or simply make a mistake while formulating it, then yeah, it can’t protect you from that.
        But you’d likely make a similar or worse mistake, if you implemented the production code without a specification.

        The advantage of automated tests compared to a specification document, is that you get continuous checks that your production code matches its specification. So, at least it can’t be wrong in that sense.

      • RonSijm@programming.dev · 1 point · 6 months ago

        Sure, but testing usually relies purely on whether your assumptions are right or not, whether you test automatically or manually.

        Like if you’re manually testing a login form for example, and you assume that you’ve filled in the correct credentials, but you didn’t and the form still lets you continue, you’ve failed the testing because your assumption is wrong.

        Like even if the specs are wrong and you make a test for it - let’s say, in a calculator, Calculate(2+2).Should().Equal(5) - if this is your assumption based on the specs or something, you can start up the calculator, manually click through the UI of the calculator, code something that returns 5, and deliver it.

        Then once someone corrects you, you have to start the whole thing over: open up the calculator, click through the UI, do the input, and now it’s 4, yay!

        If you had just written a test - even one relying on a spec that was wrong - it’s still very easy to change the test and fix the assumption.

        Also, let’s say next sprint you have to build a deduct function in the calculator, and it breaks the + operation. Now you have to re-test all the operations manually to check you didn’t break anything else. If there were unit tests with like 100 different operations, you’d just run them all, see they’re all still good, and be done.
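
        With a parameterized test (JUnit sketch; the Calculator API is made up), that re-check is free:

        ```java
        import static org.junit.jupiter.api.Assertions.assertEquals;

        import org.junit.jupiter.params.ParameterizedTest;
        import org.junit.jupiter.params.provider.CsvSource;

        class CalculatorTest {
            @ParameterizedTest
            @CsvSource({
                "2+2, 4",
                "7-3, 4",  // the new deduct feature gets re-checked automatically
                "6*7, 42"
            })
            void calculateMatchesExpected(String expression, int expected) {
                assertEquals(expected, new Calculator().calculate(expression));
            }
        }
        ```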

  • FizzyOrange@programming.dev · 14 points · 6 months ago

    Very common. Your coworkers are either idiots, or more likely they’re just being lazy, can’t be bothered to set it up and are coming up with excuses.

    The one exception I will allow is for GUI programs. It’s extremely difficult to write automated tests for them, and in my experience it’s such a pain that manual testing is often less annoying. For example, VSCode has no automated UI tests as far as I know.

    That will probably change once AI-based GUI testing becomes common but it isn’t yet.

    For anything else, you should 100% have automated tests running in CI and if you don’t you are doing it wrong.

    • yournameplease@programming.dev (OP) · 5 points · 6 months ago

      Leadership may be idiots, but the devs are mostly just burnt out; they’ve recognized that quality isn’t a very high priority and know not to take too much pride in the product. I think it’s my own problem that I have a hard time separating my pride from my work.

      Thanks for the response. It’s good to know that my experience here isn’t super common.

    • Ephera · 1 point · 6 months ago

      Our standard practice is to introduce a thin layer in front of any I/O code, so that we can mock/simulate that part in tests.

      So, if your database library has an insert() function, you’d introduce an interface/trait with an insert() function, whose default implementation just calls that database library and nothing else. And then in the test, you stick your assertions behind that trait.
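
      Roughly, in Java-flavored code (all names invented for illustration):

      ```java
      import java.util.ArrayList;
      import java.util.List;

      record Order(String id) {}
      interface DatabaseClient { void insert(Order order); } // the third-party library

      // Our seam: the only thing production code depends on
      interface OrderStore {
          void insert(Order order);
      }

      // Default implementation just delegates to the library, nothing else
      class DbOrderStore implements OrderStore {
          private final DatabaseClient db;
          DbOrderStore(DatabaseClient db) { this.db = db; }
          @Override public void insert(Order order) { db.insert(order); }
      }

      // Test double: records inserts in memory, so tests can assert without a database
      class FakeOrderStore implements OrderStore {
          final List<Order> inserted = new ArrayList<>();
          @Override public void insert(Order order) { inserted.add(order); }
      }
      ```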

      So, we don’t actually test the interaction with outside systems most of the time, because well:

      • that database-library is tested,
      • the compiler ensures we’re calling that library correctly (assuming no use of a scripting language), and
      • it’s often easier to simulate the behavior of the outside system correctly, than to set it up for each test case.

      We do usually aim to get integration tests with all outside systems going, too, to ensure that we’re not completely off the mark with the behavior that we’re simulating, but those are then often reduced to just the happy flow.

  • mozz@mbin.grits.dev · 12 points · 6 months ago

    I’ve never worked (recently) at a shop that didn’t do some level of automated testing. In terms of having a bunch of people working on a big codebase without stuff being randomly broken most of the time, I’d say it’s an absolute requirement to do it to at least some passable level.

    In my experience it’s, if anything, sometimes the opposite: they insist on having tests even when the value of them, the way they’re being implemented, is a little debatable. But yes, I think it’s important enough in terms of keeping things productive and detecting when something is totally broken that you need to do it.

    (Especially now when you can literally just paste a module into GPT and ask it to generate some sorta-stupid-but-maybe-good-enough test cases for it and with minimal tweaking you can get the whole thing in in like 10 minutes.)

    • yournameplease@programming.dev (OP) · 1 point · 6 months ago

      like they insist on having testing even when the value of it the way it’s being implemented is a little debatable

      I started to feel like I was this guy when I asked someone to test their code after multiple sprints of being sent back from QA. Good to hear I’m not the crazy one, I guess.

  • Lodra@programming.dev · 11 points · 6 months ago

    Here’s my random collection of thoughts on the subject.

    I have no idea how common it is in general. Seems like some devs build tests while others don’t. This varies plenty on a team level as well as organization wide. I’ve observed this at small to very large companies, though not FAANG where I generally hope and expect that tests are a stronger standard.

    I will say that tests are consistently and heavily used in every large, open source project that I’ve reviewed. At some point, I think quality test cases become a requirement.

    Here’s the big thing. Building automated tests is almost always a wise investment, regardless of the size of the org. Manual testing is dramatically more expensive and less effective than running unit and integration tests. I’ve never written unit tests and not found issues.

    More importantly, writing unit tests forces you to write code that can be tested. This is important. IMO, code that can be tested is 1) structured differently and 2) almost always better.

    Unit tests protect you from your own mistakes. Frequently. Integration tests protect you from other people, e.g. when your code depends on an API and that API unexpectedly introduces a breaking change.

    Everybody likes having quality tests. Nobody likes writing tests.

    Quality tests are basically a strict requirement for fully automating CI/CD to production. Sure, you can skip tests and automate the deploys anyway, but I certainly don’t recommend it. I would expect people to be fired for doing this.

    Chasing 100% test coverage is a fool’s game. Think about your code, what matters, and what doesn’t. Test the parts that add value and skip the rest. This is highly related to how writing unit tests changes your code.

    Building front end tests is inherently hard. It’s practically impossible to fully test front end code. Not even close.

    Personally, I like the idea of skipping tests when you’re building a POC. Before the POC is done, you may not know if your solution is viable or what needs to be tested. The POC helps you understand. Build tests for the MVP and further iterations.

    Quality CI/CD tests are complemented by quality observability, which is a large and independent topic.

    / ramblings of a tired mind

    • yournameplease@programming.dev (OP) · 3 points · 6 months ago

      This is more or less what I typically hear online, and it all makes sense. What I tend to notice interviewing people from big(ger) companies than mine (mostly banks) is that testing for them sounds mostly like hitting some minimum coverage number in CI/CD. Probably still has big benefits, but it doesn’t seem super thoughtful? Or is testing just so important that even testing on autopilot has decent value?

      I get that same feeling with frontend testing. Unit testing makes sense to me. Integration testing makes sense, but I find it hard to do in the time I have. But frontend testing is very daunting. Now I only test the data models we keep in the frontend, if I test anything frontend at all.

      • Lodra@programming.dev · 5 points · edited · 6 months ago

        Test coverage is useful to measure simply because it’s a metric. You can set standards. You can codify the number into CI/CD. You can observe whether the number goes up or down over time. You can argue about whether these things are valuable, but quantifying test coverage makes it simpler (or possible at all) to discuss testing. As people discuss test coverage and building tests becomes normalized, the topic becomes boring. You’ll only get thoughtful discussions on automated testing when somebody establishes a new method, pattern, etc. After that, most tests are very simple. That’s often the point.

        Even “testing on autopilot” has high value.

        You can build lots of useful front end tests. There are tools for it. But it’s just not possible to test everything, because you can’t codify every requirement. E.g. ensure that this UI element is 5 pixels below some other element, except when the window shrinks, and …

        I haven’t seen great front end tests. But the ones I’ve seen mostly focus on functionality and flow rather than aiming to cover all possible scenarios. Unit tests are different in this regard.

        Integration testing makes sense, but I find it hard to do in the time I have.

        This is a red flag. Building tests should be a planned part of your work, usually described in acceptance criteria. If you need 4 hours to write a code change, then plan for 8 or whatever, so you can build tests. Engineering leaders should encourage this. If they don’t, I would consider that a cultural problem: one that indicates a lack of focus on quality and all of the problems that follow.

        Edit: I want to soften my “red flag” comment. That’s a red flag for me. That job isn’t necessarily bad, but I would personally not be interested. It’s OK to accept things like, “we don’t write tests and sometimes we deal with issues.” Especially if it’s a good job otherwise.

        • yournameplease@programming.dev (OP) · 1 point · 6 months ago

          Nah, red flag is certainly accurate in my case.

          We really don’t have a strong hierarchy of engineering leaders. Devs are all pretty much equal. The EM is extremely hands-off, but also prefers to hire inexperienced developers to “train them up” (which seem like contradictory ideas…). So we have a very free-for-all development process after work is assigned. And of course, very few (zero?) devs really want to start doubling estimates for quality that no one seems to care strongly about.

          (The saving grace here, if you can call it that, is that it’s very easy to go around leadership and do whatever you want with the dev process, so long as you can do it yourself. So perhaps what I should do is add stricter code coverage checks on the services I primarily work on, as a way to protect me from myself, and maybe convince some others to join in.)
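
          For the Maven services, one way to sketch that gate is a jacoco-maven-plugin check rule (the 60% minimum below just mirrors our current coverage):

          ```xml
          <!-- Fails the build when line coverage drops below the threshold -->
          <plugin>
            <groupId>org.jacoco</groupId>
            <artifactId>jacoco-maven-plugin</artifactId>
            <version>0.8.12</version>
            <executions>
              <execution>
                <goals><goal>prepare-agent</goal></goals>
              </execution>
              <execution>
                <id>check</id>
                <goals><goal>check</goal></goals>
                <configuration>
                  <rules>
                    <rule>
                      <element>BUNDLE</element>
                      <limits>
                        <limit>
                          <counter>LINE</counter>
                          <value>COVEREDRATIO</value>
                          <minimum>0.60</minimum>
                        </limit>
                      </limits>
                    </rule>
                  </rules>
                </configuration>
              </execution>
            </executions>
          </plugin>
          ```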

  • Piatro@programming.dev · 10 points · 6 months ago

    I’m in a team of 4 developers, and we demand automated testing. OK, that’s part of a slightly bigger development team, but even our QC team have automated tests that they run for integration testing.

  • anti-idpol action@programming.dev · 8 points (1 downvote) · edited · 6 months ago

    Sometimes you’d use defensive programming (type checking, exception handling, null safeguards, fallback/optional values), which can be argued to be a sort of in-place testing, so testing may matter less to your project’s robustness than the readability of its core business logic. Some languages lean more heavily towards defensive programming (e.g. Go, Scala, or well-written TypeScript), and some rely more on tests but are also designed in a way that makes testing really easy, since they seek to keep things loosely coupled (Elixir or Clojure).
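
    For instance, null safeguards with fallback values move the check into the code itself; a small Java sketch with hypothetical types:

    ```java
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Optional;

    record Customer(String id, int creditLimit) {} // made-up domain type

    class CustomerService {
        private final Map<String, Customer> customers = new HashMap<>();

        // The type system forces every caller to handle the missing case
        Optional<Customer> findCustomer(String id) {
            return Optional.ofNullable(customers.get(id));
        }

        int creditLimit(String id) {
            return findCustomer(id)
                    .map(Customer::creditLimit)
                    .orElse(0); // explicit fallback instead of a possible NPE at runtime
        }
    }
    ```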

    Also, if your language doesn’t have a quality REPL for reliably testing things manually, there is a relatively high chance your debugging process is costing you more time than good test coverage would.

    • FizzyOrange@programming.dev · 7 points · 6 months ago

      I think even in languages that do a lot at compile time (Rust, Haskell, etc.) it’s still standard practice to write tests. Maybe not as many tests as e.g. Python or JavaScript or Ruby. But still some.

      I work in silicon verification and even where things are fully formally verified we still have some tests. (Generally because the formal verification might have mistakes or omissions, and occasionally there are subtle differences between formal and simulation.)

  • henfredemars@infosec.pub · 6 points · 6 months ago

    We use automated testing, not for full coverage, but smoke tests so we can detect problems more quickly and avoid potential embarrassment.

  • qevlarr@lemmy.world · 6 points · 6 months ago

    I’ve worked at 8 different companies as a contractor, so hopefully my sample size is big enough to be meaningful. I’d say it’s 50-50. The companies that don’t usually know that they should, but they need a little help. Companies that don’t do it and think they don’t need it are becoming more and more rare (fortunately).

    Stick with it. If you’re a junior, don’t go evangelizing automated testing because it will fall on deaf ears until you’re a little more experienced. Keep practicing and offer to set things up if they haven’t already.

  • TheHarpyEagle@lemmy.world · 6 points · edited · 6 months ago

    We started focusing in on automated testing when we had 3 manual QAs (not including me), and since then every new project has started with plans for automated testing.

    It’s important to note that we don’t do automated tests instead of manual testing. Manual testing is still important for focused review of new features/bugs, but automated tests make sure code changes aren’t breaking anything elsewhere.

    Also, this is all about end-to-end tests (with Selenium, in our case). If you’re talking about a lack of unit/integration tests within the codebase itself, that’s a huge red flag. Even if quality issues aren’t the end of the world, they will definitely make people reconsider using your product. Who wants to trust their financial information to unstable software? It’s also making your QA team less efficient, since they’re having to chase down issues that would be better caught by the dev who wrote them.
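
    For anyone who hasn’t seen one, an end-to-end Selenium test is roughly this shape (Java flavor; the URL and element IDs are made up):

    ```java
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    class LoginFlowTest {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver(); // assumes a local Chrome install
            try {
                driver.get("https://app.example.test/login");
                driver.findElement(By.id("username")).sendKeys("qa-user");
                driver.findElement(By.id("password")).sendKeys("not-a-real-password");
                driver.findElement(By.id("submit")).click();
                if (!driver.getTitle().contains("Dashboard")) {
                    throw new AssertionError("login flow broke");
                }
            } finally {
                driver.quit();
            }
        }
    }
    ```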

  • Kissaki@programming.dev · 6 points · edited · 6 months ago

    My context: I’m in a small, ~30-person software company. We do various projects for various customers. We’re close to the machine sector, although my team is not. I’m the lead of a small 3-person developer team/continuous project.

    I write unit tests when I want to verify things: when I’m in somewhat low-level, algorithmic code, or in interfacing areas.

    I would write more, and against our interfaces, if those were exposed to someone or something - if they needed that stability and verification.

    Our tests are mainly manual (mostly user-/UI-/use-interface-centric), and we have data restrictions and automated data-consistency validations for reporting. (Our project is very data-centric.)

    it’s not practical for our size and the business wouldn’t allow us to slow down for that

    Tests are an investment. The slowdown from implementing tests buys maintainability and stability down the line, and the payoff can come even before delivery (issues noticed in review, before merge, or before release).

    It may very well be that they wouldn’t even slow you down, because they can lead you to a more thought-out implementation and interface, or to noticing issues before they hit review, test, or production.

    If you have a project that will be maintained, then it’s not a question of slowing down, but of whether you are willing to pay more (effort, complexity, money, instability, consequent dissatisfaction) down the line for possibly earlier deliverables.

    If tests would make sense and you don’t implement them, then you’re incurring technical debt. That’s not sound development or engineering practice, and it should require a conscious decision about that fact, made with awareness of the cost of not adding tests.

    How common automated testing is, I don’t know. I think many developers will take shortcuts when they can. Many are not thorough in that way, and give in to short-sighted time pressure and fallacy.

    • yournameplease@programming.dev (OP) · 2 points · 6 months ago

      Perhaps it’s just part of being somewhere where tech is seen as a cost center? Technical leadership loves to talk big about how we need to invest in our software and make it more scalable for future growth. But when push comes to shove, they simply say yes to nearly every business request, tell us to fix things later, and we end up making things less scalable and harder to test.

      It feels terrible and burns me out, but we never seem to seriously suffer for poor quality, so I thought this could be all in my head. I guess I’ve just been gaslit by my EM into thinking this lack of testing is a common occurrence.

      (A programming lemmy may not be a terribly representative sample, but I don’t see anyone here anywhere close to as wild west as my place.)

      • Ephera · 2 points · 6 months ago

        It feels terrible and burns me out, but we never seem to seriously suffer for poor quality, so I thought this could be all in my head.

        The way you suffer for it, is in a loss of agility.

        When I’m in a project with excellent unit test coverage, I often have no qualms with typing up a hot fix, running it through our automated tests and then rolling it out, in less than an hour.
        Obviously, if it’s a critical target system, you might want to get someone’s review anyways, but you don’t have to wait multiple days for your manual testers to get around to it.

        Another way in which it reduces agility is in terms of moving people between projects.
        If all the intended behavior is specified in automated tests, then the intern or someone, who just got added to the project, can go ham on your codebase without much worry that they’ll break something.
        And if someone needs to be pulled out from your project, then they don’t leave a massive hole, where only they knew the intended behavior for certain parts of the code.

        Your management wants this, they just don’t yet understand why.

        • yournameplease@programming.dev (OP) · 1 point · 6 months ago

          We ~~have~~ used to have a scrum master, so we’re already agile! /s

          They want those things, sure, but I think it would take multiple weeks of dedicated work for me to set up tests on our primary system that would cover much of anything. A big investment that might enable faster future development is a hard thing for me to sell. I am already seen as the “automated testing guy” on my (separate) project, and it doesn’t really look like I’m that much faster than anyone else.

          What I’ve been meaning to do is start underloading my own sprint items by a day or two and try to set up some test infrastructure in my spare Fridays to show some practical use. But boy is that a hard thing to actually hold myself to.

          • Ephera · 2 points · 6 months ago

            If we end up in a project with too little test coverage, our strategy is usually to then formulate unit tests before touching old code.

            So, first you figure out what the hell that old code does, then you formulate a unit test until it’s green, then you make a commit. And then you tweak your unit test to include the new requirements and make the production code match it (i.e. make the unit test green again).
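
            A sketch of that first step, with hypothetical names (the expected value is copied from a real run of the old code, not from any spec):

            ```java
            import static org.junit.jupiter.api.Assertions.assertEquals;

            import java.math.BigDecimal;
            import org.junit.jupiter.api.Test;

            class LegacyRaterTest {
                // Step 1: pin down today's behavior, whatever it is
                @Test
                void standardPolicyRateMatchesCurrentBehavior() {
                    Policy policy = new Policy("STANDARD", /* drivers = */ 2, /* priorClaims = */ false);
                    assertEquals(new BigDecimal("118.40"), LegacyRater.rate(policy));
                }
                // Step 2: when the requirement changes, update the expectation first,
                // then edit the production code until the test is green again
            }
            ```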

            I am already seen as the “automated testing guy” on my (separate) project, and it doesn’t really look like I’m that much faster than anyone else.

            This isn’t about you being faster, as you write a feature. I mean, it often does help, even during first implementation, because you can iterate much quicker than starting up the whole application. But especially for small applications, it will slow you down when you first write a feature.

            Who’s sped up by your automated tests are your team members and you-in-three-months.
            You should definitely push for automated tests, but you need to make it clear that this needs to be a team effort for it to succeed. You’re doing it as a service to everyone else.

            If it’s only you who’s writing automated tests, then that doesn’t diminish the value of your automated tests, but it will make it look like you’re slower at actually completing a feature, and it will make everyone else look faster at tweaking the features you implemented. You want your management to understand that and be on board with it, so that they don’t rate you badly.

            • yournameplease@programming.dev (OP) · 2 points · 6 months ago

              Who’s sped up by your automated tests are your team members and you-in-three-months.

              Definitely true. I am very thankful when I fail a test and know I broke something and need to clean up after myself. Also very nice as insurance against our more “chaotic” developer(s).

              I’ve advocated for tests as a team effort. Problem is just that we don’t really have any technical leadership, just a hands-off EM and hands-off CTO. Best I get from them is “Yes, you should test your code.” …Doesn’t really help when some developers just aren’t interested in testing. I am warming another developer on my team up to testing, so at least I may get another developer or two on the testing kick for a bit.

              And as for management rating me… I don’t really worry too much. As I mentioned, hands off management. Heck, we didn’t even get performance reviews last year.