• computerscientistII@lemm.ee · 11 hours ago

    I haven’t been in development for nearly 20 years now, but I assumed it worked like this:

    You write unit tests for a very specific function of rather limited scope, then you let the AI generate the function. How else could this work?

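    Roughly what I have in mind (just an illustrative sketch; the function name, tests, and "generated" body are invented for the example):

    ```python
    import re

    # Hypothetical example: a human writes (or at least reviews) the tests first...
    def test_slugify_lowercases():
        assert slugify("Hello World") == "hello-world"

    def test_slugify_strips_punctuation():
        assert slugify("Hello, World!") == "hello-world"

    def test_slugify_collapses_spaces():
        assert slugify("a   b") == "a-b"

    # ...and the AI is asked to produce slugify() so that they all pass.
    # A plausible generated implementation:
    def slugify(text: str) -> str:
        return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")
    ```
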
    Bonus points if you let the AI divide your overall problem into smaller sub-problems of manageable size. That wouldn’t involve code generation as such…

    Am I wrong with this approach?

    • ☆ Yσɠƚԋσʂ ☆OP · 9 hours ago

      The complexity here lies in having to craft a comprehensive enough spec. Correctness is one aspect, but another is performance. If the AI craps out code that passes your tests, but does it in a really inefficient way, then it’s still a problem.

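      A contrived sketch of what I mean (fib is just a stand-in example): both versions below satisfy the same tests, but only one of them is something you’d want to ship.

      ```python
      # Both implementations pass a correctness-only test suite...
      def fib_generated(n: int) -> int:
          # exponential-time recursion: the kind of thing that can still pass the tests
          return n if n < 2 else fib_generated(n - 1) + fib_generated(n - 2)

      def fib_sensible(n: int) -> int:
          # linear-time iteration
          a, b = 0, 1
          for _ in range(n):
              a, b = b, a + b
          return a

      # ...because a spec that only checks small inputs can't tell them apart:
      expected = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
      for f in (fib_generated, fib_sensible):
          assert [f(i) for i in range(10)] == expected
      # fib_generated(40) already takes ages; fib_sensible(40) is instant.
      ```
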
      Also worth noting that you don’t actually need AI to do such things. For example, Barliman is a tool that does program synthesis: given a set of tests to pass, it attempts to complete the program for you. Synthesis is performed using logic programming. Not only is it capable of generating code, but it can also reuse code it’s already come up with as a basis for solving bigger problems (there’s a toy sketch of the general idea below).

      https://github.com/webyrd/Barliman

      Here’s a talk about how it works: https://www.youtube.com/watch?v=er_lLvkklsk
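
      To make the "given tests, complete the program" idea concrete, here’s a toy sketch. It has nothing to do with Barliman’s internals (which, as I understand it, use a relational Scheme interpreter written in miniKanren); it just brute-force searches a tiny expression grammar for a function of x that satisfies the examples.

      ```python
      from itertools import product

      LEAVES = ["x", "0", "1", "2"]
      OPS = ["+", "-", "*"]

      def expressions(depth):
          """Enumerate arithmetic expressions over x up to the given depth."""
          if depth == 0:
              yield from LEAVES
              return
          yield from expressions(depth - 1)
          for op, a, b in product(OPS, list(expressions(depth - 1)),
                                  list(expressions(depth - 1))):
              yield f"({a} {op} {b})"

      def synthesize(examples, max_depth=2):
          """Return the first expression e(x) that matches every (x, y) example."""
          for depth in range(max_depth + 1):
              for expr in expressions(depth):
                  if all(eval(expr, {"x": x}) == y for x, y in examples):
                      return expr
          return None

      # "Tests" for the unknown function: f(0)=1, f(1)=3, f(2)=5
      print(synthesize([(0, 1), (1, 3), (2, 5)]))  # finds something like (x + (x + 1))
      ```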

    • Kairos@lemmy.today · 10 hours ago

      At that point you should be able to just write the code yourself.

      The A"I" will either make mistakes even under defined bounds, or it will never make any mistakes ever in which case it’s not an autocomplete, it’s a compiler and we’ve just gone full circle.