Less than a month after New York Attorney General Letitia James said she would be willing to seize former Republican President Donald Trump’s assets if he is unable to pay the $464 million required by last month’s judgment in his civil fraud case, Trump’s lawyers disclosed in court filings Monday that he had failed to secure a bond for the amount.

In the nearly 5,000-page filing, Trump’s lawyers said it has proven a “practical impossibility” for him to secure a bond from any financial institution in the state, as “about 30 surety companies” have refused to accept assets such as real estate as collateral, demanding cash and other liquid assets instead.

To get the institutions to agree to cover the $464 million judgment if Trump loses his appeal and fails to pay the state, he would have to pledge more than $550 million as collateral, “a sum he simply does not have,” The New York Times reported, despite his frequent boasts of wealth and business prowess.

  • MagicShel@programming.dev · 8 months ago

    LLMs are still pretty limited, but I would agree with you that if there were a single task at which they excel, it’s translating and summarizing. They also have much bigger contexts than 500 words. I think ChatGPT has a 32k-token context, which is certainly enough to summarize entire chapters at a time.

    You’d definitely need to review the result by hand, but AI could suggest certain key things to look for.
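    The chapter-at-a-time approach could be sketched roughly like this. `summarize` is a hypothetical stand-in for a real LLM call (here it just truncates), and character counts serve as a crude proxy for tokens:

    ```python
    def summarize(text: str) -> str:
        # Hypothetical stand-in for an LLM summarization call;
        # here it simply truncates to the first 60 characters.
        return text[:60]

    def chunk_text(text: str, max_chars: int = 2000) -> list[str]:
        """Split on paragraph boundaries so no chunk exceeds a rough
        context budget (characters as a crude proxy for tokens)."""
        chunks, current = [], ""
        for para in text.split("\n\n"):
            if current and len(current) + len(para) > max_chars:
                chunks.append(current)
                current = para
            else:
                current = f"{current}\n\n{para}" if current else para
        if current:
            chunks.append(current)
        return chunks

    def summarize_document(text: str) -> str:
        # Summarize each chunk, then summarize the joined partial summaries.
        partials = [summarize(c) for c in chunk_text(text)]
        return summarize("\n\n".join(partials))
    ```

    The merge pass is where the hand review mentioned above matters most, since errors in the partial summaries compound.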

    • TropicalDingdong@lemmy.world · 8 months ago

      LLMs are still pretty limited,

      People were doing this somewhat effectively with garbage Markov chains and it was ‘ok’. There is research going on right now doing precisely what I described. I know because I wrote a demo for the researcher whose team wanted to do this, and we’re not even using fine-tuned LLMs. You can overcome many of the issues around ‘hallucinations’ just by repeating the same query several times and looking at the distribution of answers. There are teams funded in the hundreds of millions to build the engineering around these things. Wrap the calls in enough engineering, get the bumper rails into place, and the current generation of LLMs is completely capable of what I described.
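      The repeat-and-vote idea described above is essentially majority-vote self-consistency sampling. A minimal sketch, with `ask_model` as a hypothetical stand-in for a real LLM call (here a canned stub that simulates a model that is right four times out of five):

      ```python
      from collections import Counter
      from itertools import cycle

      # Canned outputs simulating a noisy model; a real implementation
      # would call an LLM API with nonzero temperature instead.
      _canned = cycle(["42", "42", "42", "41", "42"])

      def ask_model(prompt: str) -> str:
          return next(_canned)

      def self_consistent_answer(prompt: str, n: int = 10) -> tuple[str, float]:
          """Call the model n times, take the majority answer, and report
          how often it appeared -- a rough empirical confidence."""
          answers = [ask_model(prompt) for _ in range(n)]
          best, freq = Counter(answers).most_common(1)[0]
          return best, freq / n
      ```

      Answers that only appear once or twice across the repeated calls are the ones most likely to be hallucinated, which is what the bumper-rail engineering filters out.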

      The current AI revolution is just getting started. We’re in the ‘Deep Blue’ phase, where people are shocked that an AI can do the thing at all, as well as or better than humans. We’ll be at the ‘AlphaGo’ stage in a few years, and we simply won’t recognize the world we live in. In a decade, the AI will be the authority, and people will be questioning whether humans should be allowed to do certain things.

      • MagicShel@programming.dev · 8 months ago

        Read a little further. I might disagree with you about the overall capability/potential of AI, but I agree this is a great task to highlight its strengths.

        • TropicalDingdong@lemmy.world · 8 months ago

          Sure, and yes, I think we largely agree. On the differences: I’ve seen that they can effectively be overcome by making the same call repeatedly and looking at the distribution of results. It’s probably not as good as just having a better underlying model, but even then the same approach might be necessary.