• Ephera
    2 years ago

    Oh, I can see that scenario. Mine was a rhetorical question, as I’ve been working in the data-shipping field for the past few years.

    Thing is, if there are a thousand power stations, there may well be a thousand different implementations + error codes, because for decades there was no need for a common method of error reporting.

    The only common interface was humans. That’s why all of these implementations describe errors in human-readable text. And I would bet a lot of money that they’ve already had to extract those error codes from text logs.
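
    That extraction step can be sketched roughly like this (the log line and the "E-" code format are invented for illustration; real stations would each need their own pattern):

```python
import re

# Hypothetical free-text log line from one station's controller;
# the timestamp layout and "E-4021" code format are made-up examples.
log_line = "2024-03-01 04:17:22 TURBINE-3 FAULT detected, code E-4021: bearing temp high"

# Pull out anything that looks like an error code after the word "code".
match = re.search(r"\bcode\s+(E-\d+)", log_line)
code = match.group(1) if match else None
print(code)  # E-4021
```

    The catch the comment describes is exactly this: every manufacturer's log format needs its own regex, and any wording change silently breaks it.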

    Writing them out in, e.g., a standardized JSON format requires a standardization effort, which no one is going to push for while individually building these power stations.
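
    If such a standard did exist, the record might look something like this (the schema and field names are invented; that no such agreed format exists is precisely the point):

```python
import json

# Invented schema for a machine-readable error event -- purely hypothetical.
error_event = {
    "station_id": "PS-0042",
    "code": "E-4021",
    "severity": "fault",
    "message": "bearing temp high",
    "timestamp": "2024-03-01T04:17:22Z",
}

# A consumer could parse this without any per-station heuristics.
print(json.dumps(error_event))
```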

    That’s how you end up with a huge mess of different errors and differently described+formatted error codes, which only a human or human-imitating AI can attempt to read.

    I mean, there are definitely things they could have done that are less artificially intelligent, like keyword matching or even just counting how many error codes a power station produces. And I’m not sure you necessarily want a black-box AI deciding what gets power and what doesn’t. But realistically, companies around the planet will adopt similar approaches.
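
    Those "less artificially intelligent" options really are simple; a rough sketch of both keyword matching and per-station error counting (stations, log lines, and keywords all invented):

```python
from collections import Counter

# Invented log lines from several hypothetical stations.
logs = [
    ("PS-0042", "FAULT code E-4021: bearing temp high"),
    ("PS-0042", "FAULT code E-4022: bearing temp critical"),
    ("PS-0107", "INFO routine check complete"),
    ("PS-0107", "FAULT code E-0003: voltage sag"),
]

# Keyword matching: flag any line mentioning a fault-related keyword.
KEYWORDS = ("FAULT", "critical", "sag")
flagged = [(station, line) for station, line in logs
           if any(kw in line for kw in KEYWORDS)]

# Counting: how many flagged lines each station produced.
faults_per_station = Counter(station for station, _ in flagged)
print(faults_per_station)  # Counter({'PS-0042': 2, 'PS-0107': 1})
```

    Crude, but transparent: you can always explain why a station was flagged, which you can't with a black-box model.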

    • @bc3114
      2 years ago

      Ah ha, that explains a lot actually. I just realized how ignorant I am about how power plants (and many other factories) were built before I was born and are still running well today. And how costly it could be to upgrade them altogether.

    • ☆ Yσɠƚԋσʂ ☆OP
      2 years ago

      Yeah, that’s a good point. If you have lots of disparate systems that don’t share a standard coding scheme, then the codes wouldn’t be of much use. I can see how standardizing that sort of thing would be a huge effort, so in that context the approach makes sense.

      I also assume that the humans have the final say, but in most cases I imagine having the computer do the initial routing will get better results than doing nothing at all while humans figure out what the overall picture is.

    • @Vlyn@lemmy.zip
      8 months ago

      It would still be cheaper and safer to sit a few people down, go over every error code and map them to the correct issue.

      Yes, you have to do this for every implementation, but what do you think other businesses do? There are software developers (I got offered a job like that once and declined) who do nothing but map action x to machine signals y and z. You only have to do this once per machine, of course, but it’s still shitty work.
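
      The mapping work described here usually boils down to maintaining a lookup table per machine, something along these lines (codes and triage actions are invented; "escalate to a human" for unknown codes is one reasonable design choice, not the only one):

```python
# Invented per-machine mapping of error codes to a triage action.
ERROR_ACTIONS = {
    "E-4021": "reduce_load",
    "E-4022": "shutdown",
    "E-0003": "notify_grid_operator",
}

def route(code: str) -> str:
    """Return the triage action for a known code, or escalate to a human."""
    return ERROR_ACTIONS.get(code, "escalate_to_human")

print(route("E-4022"))  # shutdown
print(route("E-9999"))  # escalate_to_human
```

      Tedious to build, but fully deterministic: a given code always maps to the same action, which is the reliability argument being made here.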

      It would be a thousand times more reliable than letting a text-processing AI make a best guess based on the error message (which is honestly insanity).

      This is probably marketing crap anyway. Either their solution is brittle and breaks on a real error, or they already hardcoded the most important error codes and the rest is fluff.

      • Ephera
        8 months ago

        Problem is, you don’t always get error codes. An error code is only useful if you can output it somewhere, so lots of machine manufacturers or programmers save themselves the trouble.

        And error codes are an incredibly limiting interface. You can’t convey dynamic information with them. Or programmers may choose to lump a new error under an existing error code, because it’s “close enough” for them to not want to update the manual.

        Meanwhile, text logs contain all the detailed information you actually need for debugging, with the downside of being an entirely unstable interface.

        I’m not happy about this state of the industry either. I’m just saying, many companies will gladly take 95% accuracy over having to upgrade their machines or investing time to ingest the various signals.