- Loading dishwashers properly requires an official government license and a test.
- Putting a flat plate in front of a bowl means a year of hard labor.
- Loading any of the good kitchen knives is an automatic 10 years.
VW announced a $5B joint venture with Rivian a couple months ago. Wonder how all this will affect that deal?
https://www.theverge.com/2024/6/25/24185946/vw-rivian-joint-venture-investment-software-r2
Where you rotate so far right you end up at the left.
Depends on how cheap they get the tags. On AliExpress, you can get current ones for around $5 USD (https://www.aliexpress.us/w/wholesale-ibeacon.htm). Wiliot makes battery-free ones with a 10 meter range, but you have to use their service (https://www.wiliot.com/product/iot-pixels). By comparison, UWB tags run $20-50 each, and NFC around $0.50.
If the new tags get down to the $5 range, you can attach them to individual items, crates, or pallets. The time saved locating items and getting them to a shipping or manufacturing line can easily be measured to see if it's worth the expense.
Criminalizing the use of click farms. It'll be interesting to see whether the charges stick, and how this will impact the SEO market.
Bluetooth tags are usually a lot cheaper than UWB for accurate tracking, but they do require special ‘routers’ that support angle-of-arrival (AoA) and angle-of-departure (AoD) direction finding. This will supposedly make it cheaper to deploy indoor location applications.
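For the curious, the AoA idea boils down to simple geometry: the locator's antenna array sees the same signal at slightly different phases, and the phase difference maps to a bearing. A minimal Python sketch of that geometry (the frequency, antenna spacing, and sample phase value below are made up for illustration, and real locators work over IQ samples from more than two antennas):

```python
import math

# Minimal angle-of-arrival (AoA) sketch: a locator with two antennas measures
# the phase difference of one incoming BLE signal and converts it to a bearing.
# Frequency, spacing, and the sample phase below are illustrative only.

SPEED_OF_LIGHT = 3.0e8                      # m/s
FREQ_HZ = 2.44e9                            # a 2.4 GHz BLE channel
WAVELENGTH = SPEED_OF_LIGHT / FREQ_HZ       # ~0.123 m
ANTENNA_SPACING = WAVELENGTH / 2            # common half-wavelength spacing

def angle_of_arrival(phase_diff_rad: float) -> float:
    """Phase difference between the two antennas -> bearing in degrees
    (0 degrees = signal arriving broadside to the array)."""
    # Geometry: phase_diff = 2*pi * spacing * sin(theta) / wavelength
    x = phase_diff_rad * WAVELENGTH / (2 * math.pi * ANTENNA_SPACING)
    x = max(-1.0, min(1.0, x))              # clamp against measurement noise
    return math.degrees(math.asin(x))

# A ~1.1 rad phase difference puts the tag roughly 20 degrees off broadside.
print(angle_of_arrival(1.1))
```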
The main use-case isn’t AirTags. It’s warehouse and large retail applications, where a roving scanner can store precise item locations and count inventory. That way, if you need a specific part, it would be easier and faster to find. And if you’re running low on items, the inventory system could instantly order more, instead of catching it at the next weekly/monthly/quarterly inventory count. If the tags cost $ instead of $$$ for UWB, you slap one on every item that comes in the door.
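To make that workflow concrete, here's a hypothetical sketch (all SKUs, shelves, and thresholds are invented): each tag read updates an item's last-seen shelf, and pulling stock below a minimum triggers an immediate reorder instead of waiting for the next count.

```python
from dataclasses import dataclass

# Hypothetical sketch of the roving-scanner workflow: every tag read records
# which shelf an item was last seen on, and pulling stock below a minimum
# triggers an immediate reorder instead of waiting for the next count.

@dataclass
class Item:
    sku: str
    quantity: int
    min_quantity: int
    location: str = "unknown"

inventory = {"bearing-608": Item("bearing-608", quantity=40, min_quantity=25)}

def record_scan(sku: str, shelf: str) -> None:
    """Roving scanner picked up a tag: remember exactly where the item is."""
    inventory[sku].location = shelf

def consume(sku: str, count: int) -> None:
    """Parts pulled for the line; reorder the moment stock dips below minimum."""
    item = inventory[sku]
    item.quantity -= count
    if item.quantity < item.min_quantity:
        print(f"reorder {item.sku}: {item.quantity} left, last seen {item.location}")

record_scan("bearing-608", "aisle 7, shelf C")
consume("bearing-608", 20)    # 40 -> 20, below the minimum of 25, reorder fires
```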
NFC tags are also cheap, but you need to get within centimeters to pick up a signal, or have a powerful, expensive scanner at a portal. Even then, it doesn’t tell you which shelf in the warehouse the item ended up on. The BT ones will.
Gotta hand it to the Bluetooth SIG. They’re rapidly pushing out useful features with each revision.
Back in college, a local doughnut shop would open at 3am to start making the day’s run. If you were stoned or pulling an all-nighter, it was worth the 15-minute walk. They would sell you fresh, warm doughnuts out the back.
Nowadays, a lot of places do the fresh doughnut thing. But it’s not the same, getting it at 3am.
https://www.espressif.com/en/news/ESP32-S3-BOX-3
There’s a model with a more expensive dock, and one without. The one without worked fine, but it had to be the Box 3, not the Box 2. It worked pretty well, and you could create custom images to indicate whether it was listening, thinking, etc.
Instructions here: https://www.home-assistant.io/voice_control/s3_box_voice_assistant/
The box isn’t powerful enough to run an LLM itself. It’s just good enough as an audio conduit. You can either use their cloud integration with ChatGPT, or now, Anthropic Claude. But if you had a powerful Home Assistant server, say an Nvidia Jetson or a PC with a beefy Nvidia GPU, you could run local models like Llama and have better privacy.
This is from earlier this year. I imagine they’ve advanced more since then.
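For anyone curious about the local-model path, this is roughly what a call against a locally hosted model looks like, using Ollama's HTTP API as an example. This isn't Home Assistant's actual integration code; the host and port are Ollama's defaults and the model name is just a placeholder.

```python
import json
import urllib.request

# Rough sketch of the "local model" path: after the voice pipeline transcribes
# the request, it asks a locally hosted model instead of a cloud service.
# Uses Ollama's HTTP API; host/port are its defaults, model name is an example.

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local_model("Write a one-line friendly announcement that the laundry is done."))
```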
Their LLM integration is super cool. I messed with it for a previous job. Way better than Alexa or Google Home.
One Docker env variable and one line of code. Not a heavy lift, really. And next time I shell into the container I don’t need to remind everyone to activate the venv.
Creating a venv in Docker just for the hell of it is like creating a symlink to something that never changes or moves.
NEW, automated children’s bicycle. Guaranteed to teach the little tyke how to ride! *
I can think of only two reasons to have a venv inside a container:
If you’re running third-party services inside a container, pinned to different Python versions.
If you do local development without Docker, with scripts that have to activate the venv from inside the script. If you move those scripts inside the container, there’s no venv anymore. But then it’s easy to just check an environment variable and skip the activation when inside Docker (see the sketch below).
For most applications, it seems like an unnecessary extra step.
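For what it's worth, here's a minimal sketch of that environment-variable check. The variable name RUNNING_IN_DOCKER is arbitrary and would be set via ENV in the Dockerfile; outside the container, the script re-execs itself under the venv's interpreter, and inside it just runs.

```python
import os
import sys

# Sketch of the "check an environment variable and skip" idea. The Dockerfile
# would set something like `ENV RUNNING_IN_DOCKER=1` (the name is arbitrary);
# outside the container, the script re-execs itself under the venv's
# interpreter so nobody has to remember to activate anything first.

def in_venv() -> bool:
    # True when already running under a virtual environment's interpreter.
    return sys.prefix != sys.base_prefix

if not os.environ.get("RUNNING_IN_DOCKER") and not in_venv():
    venv_python = os.path.abspath(os.path.join(".venv", "bin", "python"))
    os.execv(venv_python, [venv_python, *sys.argv])

# From here on, the script runs under the right interpreter either way.
print(f"running under {sys.executable}")
```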
Pretty damned impressive they kept the lights on with 2M new users. Old Twitter would Blue-whale if you sneezed at it.
I’ve been using ChatGPT, specialized ones on Hugging Face, and a bunch of local ones via Ollama. A colleague who is deep into this says Claude is giving him the best results.
Thing is, it depends on the task. For coding, I’ve found they all suck. ChatGPT gets you up to a point, then puts out completely wrong stuff. Gemini, Microsoft, and CodeWhisperer put out half-baked rubbish. If you don’t already know the domain, it will be frustrating to find the bugs.
For images, I’ve tried DALL-E for placeholder graphics. Problem is, if you change a single prompt element to refine the output, it will generate completely different images with no way to go back. Same with Adobe generators. Folks have recommended Stability for related images. Will be trying that next.
Most LLMs are just barely acceptable. Good for casual messing around, but I wouldn’t bet the business on any of them. Once the novelty wears off, and the CFOs tally up the costs, my prediction is a lot of these are going away.
Just spent an hour following all the links to the original obits. The man sure could write.