• gwilikers
    23 days ago

    This is your brain when you OD on OOP.

      • davidagain@lemmy.world
        23 days ago

        You’re right, this is just not OOP AT ALL.

        For the correct OOP solution, you would need to consider whether this can be thought of as a kind of stateless leaf method, and therefore implement it as an abstract (singleton) factory; but on the other hand, if you’re using a context object with the registry pattern, you should probably do this properly with IoC containers. Of course, if your object graph isn’t too complex or entangled, you could always just do constructor injection, but you risk mangling your unit tests, so I would go cautiously if I were looking in that direction.
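        Purely for illustration (every class and name below is invented, and deliberately overwrought), the "correct" solution might be sketched in Python roughly like this:

```python
from abc import ABC, abstractmethod


class BooleanComparisonStrategy(ABC):
    """Abstract strategy: encapsulates the act of comparing two booleans."""

    @abstractmethod
    def compare(self, a: bool, b: bool) -> bool: ...


class DefaultBooleanComparisonStrategy(BooleanComparisonStrategy):
    """Concrete strategy containing the entire business logic."""

    def compare(self, a: bool, b: bool) -> bool:
        return a == b


class BooleanComparisonStrategyFactory:
    """Singleton factory for the stateless leaf strategy."""

    _instance = None

    @classmethod
    def get_instance(cls) -> "BooleanComparisonStrategyFactory":
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def create(self) -> BooleanComparisonStrategy:
        return DefaultBooleanComparisonStrategy()


class ComparisonService:
    """Constructor injection, so the unit tests don't get mangled."""

    def __init__(self, strategy: BooleanComparisonStrategy):
        self._strategy = strategy

    def equals(self, a: bool, b: bool) -> bool:
        return self._strategy.compare(a, b)


# Wire it all together (a real shop would of course use an IoC container).
service = ComparisonService(BooleanComparisonStrategyFactory.get_instance().create())
```

        Five classes, one `==`. Progress.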

        • collapse_already
          23 days ago

          Shouldn’t there be a call to the boolean comparison microservices server in there somewhere? Also, we should consider the possibility that booleans and their operators could be overloaded to do something else entirely. We might need a server farm to handle all of the boolean comparison service requests.
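          A wholly imaginary sketch of one such boolean comparison service round trip — the protocol and field names are made up, and the "server farm" is simulated in-process:

```python
import json


def compare_booleans_remotely(a: bool, b: bool) -> bool:
    """Pretend RPC: serialise the request, 'send' it, parse the response.

    In production this would naturally go over the network to the
    boolean comparison server farm; here the round trip is simulated.
    """
    request = json.dumps({"op": "EQUALS", "lhs": a, "rhs": b})

    # --- imaginary server side ---
    payload = json.loads(request)
    response = json.dumps({"result": payload["lhs"] == payload["rhs"]})
    # --- end server side ---

    return json.loads(response)["result"]
```

          Two JSON serialisations per `==`: the latency alone should keep the server farm billable.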

          • davidagain@lemmy.world
            23 days ago

            SOLVED. On reflection, @collapse_already@lemmy.ml has come up with the perfect solution. Let me explain:

            Parallelism

            YES. We should utilise a microservices architecture so that we can leverage a fundamentally distributed, interconnected parallelism for these boolean comparisons, which is bound to beat naive single-threaded, single-core calculation hands down. Already. But it gets better.

            Load balancing

            Of course a load balancing microservice would be useful because you don’t want one of the boolean comparison microservices accidentally taking too great a share of the computation, making the whole topology more brittle than it needs to be.
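            As a sketch of that load balancing layer (all names invented; a real deployment would obviously need at least three more abstractions), round-robin dispatch over the comparison nodes might look like:

```python
import itertools


class BooleanComparisonWorker:
    """One node of the boolean comparison server farm."""

    def __init__(self, name: str):
        self.name = name
        self.handled = 0  # per-node load, for the analytics microservice

    def compare(self, a: bool, b: bool) -> bool:
        self.handled += 1
        return a == b


class LoadBalancer:
    """Round-robin dispatch, so no node takes too great a share
    of the computation and the topology stays suitably un-brittle."""

    def __init__(self, workers):
        self._cycle = itertools.cycle(workers)

    def compare(self, a: bool, b: bool) -> bool:
        return next(self._cycle).compare(a, b)


workers = [BooleanComparisonWorker(f"node-{i}") for i in range(3)]
balancer = LoadBalancer(workers)
results = [balancer.compare(True, True) for _ in range(6)]
```

            After six requests, each of the three nodes has handled exactly two comparisons. Perfectly balanced, as all boolean comparison topologies should be.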

            Heuristics

            A boolean comparison request-comparing analytics microservice could evaluate different request distribution heuristics for the individual microservice nodes (for example, targeting similar requests resolving to true/true or false/true etc., versus fair-balancing-oriented server targeting, versus pseudo-random distribution, and so on), and do so for randomly selected proportions of the uptime.

            Analysis

            The incoming boolean comparison requests would be tagged and logged for cross-reference and analysis, together with the computation times, the then-current request-distribution heuristic and the selected server, so that each heuristic can be analysed for effectiveness in different circumstances.

            Non-generative AI

            In fact, the simplest way of evaluating the different heuristic pragmas would be to input the aforementioned boolean comparison request logs, together with some general data on time of day/week/year and general performance metrics, into a neural network with a straightforward faster-is-better training programme, and pretty soon you’ll ORGANICALLY find the MOST EFFICIENT way of managing the boolean comparison requests.

            Executive summary:

            Organically evaluated, stochastically selected heuristics leverage AI for a monotonically improving service metric, reducing costs and upscaling customer productivity on an ongoing basis without unnecessary unbillable human-led code improvement costs. Neural networks can be marketed under separate brands both as AI solutions and as LLM-free solutions, leveraging well-understood market segmentation techniques to drive revenues from disparate customer bases. Upgrade routes between the different marketing pathways can of course be monetised, but applying a 3%-above-inflation mid-term customer inertia fee allows for prima facie discounts when customers seek cost-reduction-inspired pathway transfers, whilst ensuring underlying income increases that can be modelled as pervasive and overriding lower bounds for the two SaaS branches, independent of any customer churn. Well-placed marketing strategies can meanwhile reasonably be expected to drive billable customer “upgrades” between pathways, mitigating any prima facie discounts even before the underlying monotonicity price-structuring schemas kick in.

          • davidagain@lemmy.world
            23 days ago

            What about a factory for the factories! There’s nothing more efficient than a tool-making-tool-making tool.
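            Sketched in Python (names invented, naturally), the tool-making-tool-making tool:

```python
class BooleanFactory:
    """A tool: manufactures booleans."""

    def create(self) -> bool:
        return True


class BooleanFactoryFactory:
    """A tool-making tool: manufactures boolean factories."""

    def create(self) -> BooleanFactory:
        return BooleanFactory()


class BooleanFactoryFactoryFactory:
    """A tool-making-tool-making tool. Peak efficiency."""

    def create(self) -> BooleanFactoryFactory:
        return BooleanFactoryFactory()


# Three layers of indirection to obtain one boolean.
value = BooleanFactoryFactoryFactory().create().create().create()
```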

      • gwilikers
        23 days ago

        I know. I didn’t say this was OOP, I said this was your brain when you OD on OOP. While we are not dealing with objects, I’d argue that the kind of approach that would lead one to needlessly overcompartmentalise code like this is the product of having a little too much OOP.