There's been a trend lately: because models may say or output potentially sensitive things, an increasingly aggressive set of guardrails is being put around LLMs so they don't offend anyone by mistake. As a result, I've seen their usefulness drop significantly. Their coding assistance is still reasonably good, but their capabilities otherwise have degraded noticeably.