nice! I didn’t know this plant. I’ll try to find some.
it’s impressive! What does your infrastructure look like? Is it 100% on-prem?
I like basil. At some point I got tired of killing all my plants and started learning how to properly grow and care for greens, starting with basil.
It has plenty of uses and it requires the right amount of care, not too simple, not too complex.
I’ve grown it from seeds, cuttings, in pots, outside and in hydroponics.
nice instance!
ahah thank you, we shall all yell together then
This stuff is fascinating to think about.
What if prompt injection is not really solvable? I still see jailbreaks for GPT-4 from time to time.
Let’s say we can’t validate and sanitize user input to the LLM, so the LLM’s output must also be treated as untrusted.
In that case security could only sit in front of the connected APIs the LLM is allowed to orchestrate. Would that even scale? How? It feels like we’d have to reduce the nondeterministic nature of LLM outputs to a deterministic set of allowed inputs to the APIs… which is a castration of the whole AI vision?
I am also curious what the state of the art is in protecting against prompt injection, do you have any pointers?
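To make the “deterministic set of allowed inputs” idea concrete, here’s a minimal sketch of what I mean by a gate in front of the APIs. Everything in it (the tool names, the argument schemas, the `gate_tool_call` function) is made up for illustration, not any particular framework’s API:

```python
import json

# Hypothetical allowlist: every tool the LLM may call, with the exact
# argument names and a validator each value must satisfy. Anything not
# listed here is rejected before it ever reaches a real API.
ALLOWED_TOOLS = {
    "get_weather": {
        "city": lambda v: isinstance(v, str) and len(v) <= 64,
    },
    "create_ticket": {
        "title": lambda v: isinstance(v, str) and len(v) <= 120,
        "priority": lambda v: v in ("low", "medium", "high"),
    },
}

def gate_tool_call(llm_output: str) -> dict:
    """Treat LLM output as untrusted: parse it, then only let it
    through if it exactly matches the deterministic allowlist."""
    call = json.loads(llm_output)  # raises on non-JSON output
    tool = call.get("tool")
    args = call.get("args", {})
    schema = ALLOWED_TOOLS.get(tool)
    if schema is None:
        raise PermissionError(f"tool not allowed: {tool!r}")
    if set(args) != set(schema):
        raise PermissionError(f"unexpected arguments for {tool!r}")
    for name, valid in schema.items():
        if not valid(args[name]):
            raise PermissionError(f"invalid value for {tool!r}.{name}")
    return call  # safe to forward to the real API

# A well-formed call passes; an injected call for an unlisted tool
# or extra arguments raises before touching any backend.
print(gate_tool_call('{"tool": "get_weather", "args": {"city": "Oslo"}}'))
```

The trade-off is exactly the one I’m worried about: the gate is only as safe as it is restrictive, so every new capability means hand-writing more allowlist.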
👋 infra sec blue team lead for a large tech company
You build a derivation yourself… which I never do. I’m on a Mac, so I brew install and orchestrate brew from home-manager. I find it works well as a compromise.