In October, New York City announced a plan to harness the power of artificial intelligence to improve the business of government. The announcement included a surprising centerpiece: an AI-powered chatbot that would provide New Yorkers with information on starting and operating a business in the city.
The problem, however, is that the city’s chatbot is telling businesses to break the law.
This is some of the most corporate-brained reasoning I’ve ever seen. To recap:
- NYC elects a cop as mayor
- Cop-mayor decrees that NYC will be great again, because of businesses
- Cops and other oinkers get extra cash even though they aren’t businesses
- Commercial real estate is still cratering and cops can’t find anybody to stop/frisk/arrest/blame for it
- Folks over in New Jersey are giggling at the cop-mayor, something must be done
- NYC invites folks to become small-business owners, landlords, realtors, etc.
- Cop-mayor doesn’t understand how to fund it (whaddaya mean, I can’t hire cops to give accounting advice!?)
- Cop-mayor’s CTO (yes, the city has corporate officers) suggests a fancy chatbot instead of hiring people
It’s a fucking pattern, ain’t it?
The number of people who trust these LLMs is too damn high.
Nice, I wonder if this thing can help fill out my restaurant/tissue bank paperwork now.
Ars’ example gave me some hope
Finally, I can start a lawsuit against that escape room for not allowing me and my emotional support crowbar in.
How big of a boulder can you have as a pet rock?
A large boulder the size of a small boulder.
“Spicy autocomplete” is a great way of describing an LLM.