• 0 Posts
  • 190 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • If you are wondering why your cookies come out different every time you bake, it isn’t due to variance in temperature or humidity – IT IS BECAUSE YOU ARE USING WILDLY DIFFERENT AMOUNTS OF FLOUR.

    And yes, you ducking can tell the difference between a batch of cookies where the flour was weighed vs. scooped.

    You can’t accurately measure flour by volume. The amount you get in a scoop varies with how compressed the flour is; you weigh it to remove that variance, which can be far greater than 5% (rough numbers at the end of this comment). Don’t believe me? Put a cup of flour in a measuring cup, then press on it to pack it down – you won’t have anywhere near a cup anymore. Controlling for flour density (i.e., measuring consistently by volume) is nearly impossible.

    Brown sugar is similar but easier to manage (most recipes tell you to use packed measures instead of scooping).

    Things like white sugar, sure – scoop away.
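
    A quick back-of-the-envelope sketch of that variance (the grams-per-cup figures are assumptions based on commonly cited baking references, not anything I measured):

```python
# Rough arithmetic behind the "far greater than 5%" claim.
# Grams-per-cup values below are assumed/typical, not measured.
GRAMS_PER_CUP = {
    "spooned and leveled": 120,
    "scooped from the bag": 145,
    "scooped and packed": 160,
}

baseline = GRAMS_PER_CUP["spooned and leveled"]
for method, grams in GRAMS_PER_CUP.items():
    delta = (grams - baseline) / baseline * 100
    print(f"{method:>22}: {grams} g/cup ({delta:+.0f}% vs. spooned and leveled)")
```

    Even the middle case is a roughly 20% swing per cup, and it compounds across a multi-cup recipe.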

  • Let me tell you something, folks. Scott Walker, he’s like a giant, stinking pile of shit. It’s unbelievable. Nobody’s seen anything like it before. You walk past it, and you know it’s bad right away. Everyone says it. People are talking about it. It’s huge, just sitting there, doing nothing, and it stinks—worse than anyone thought. And guess what? He thinks it’s good! Can you believe it? He’s out there, pretending like everything’s fine, while people can’t even stand to be around him. Total disaster, folks. Total mess. We’re going to clean it up, because that’s what we do. We clean up the mess left by people like Scott Walker, the human pile of shit. Believe me.

  • They’re supposed to be good at transformation tasks: language translation, create x in the style of y, replicate a pattern, etc. LLMs are outstandingly good at language transformation tasks (there’s a quick sketch of the translation case at the end of this comment).

    Using an LLM as a fact-generating chatbot is actually a misuse. But they were trained on such a large dataset and have such a large number of parameters (175 billion!?) that they perform passably in that role… which, at its core, is filling in a call-and-response pattern in a conversation.

    At a fundamental level, an LLM will never ever generate factually correct answers 100% of the time. That it generates correct answers more than 50% of the time is actually quite a marvel.
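
    For what it’s worth, here’s a minimal sketch of the kind of transformation task they shine at – translation via the Hugging Face transformers pipeline (the library, model choice, and example sentence are my assumptions, not something from the comment above):

```python
# Minimal sketch: a language-transformation task (English -> French translation)
# using the Hugging Face `transformers` pipeline.
# Assumes `pip install transformers` plus a backend such as PyTorch;
# "t5-small" is just a small example model.
from transformers import pipeline

translator = pipeline("translation_en_to_fr", model="t5-small")

result = translator("Weigh your flour instead of scooping it.")
print(result[0]["translation_text"])  # a French rendering of the input sentence
```

    The model is completing a pattern (English in, French out), not looking anything up – which is exactly why the same machinery gets shaky when you treat it as a fact source.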