If you’re using the same UI and metadata, you should be able to reproduce images with only slight differences and then upscale them with hires fix or something else.
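For reference, "metadata" here usually means the generation-settings string the UI embeds in the PNG (A1111-style `parameters` text). A minimal sketch of pulling those settings back out so you can re-run with the same seed and sampler (the key names shown are the common ones, but your UI's format may differ):

```python
import re

def parse_parameters(text: str) -> dict:
    """Split an A1111-style 'parameters' string into the prompt,
    negative prompt, and the trailing 'Key: value' settings."""
    settings = {}
    lines = text.strip().split("\n")
    # The last line holds comma-separated settings like "Steps: 20, Seed: 12345".
    for match in re.finditer(r"(\w[\w ]*): ([^,]+)", lines[-1]):
        settings[match.group(1).strip()] = match.group(2).strip()
    prompt_lines = lines[:-1]
    negative = ""
    if prompt_lines and prompt_lines[-1].startswith("Negative prompt:"):
        negative = prompt_lines[-1][len("Negative prompt:"):].strip()
        prompt_lines = prompt_lines[:-1]
    return {"prompt": "\n".join(prompt_lines),
            "negative_prompt": negative,
            **settings}

meta = parse_parameters(
    "a castle on a hill\n"
    "Negative prompt: blurry\n"
    "Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 12345, Size: 512x768"
)
# meta["Seed"], meta["Sampler"], etc. can then be fed back into your UI or script.
```

Feeding the same seed, sampler, steps, and size back in is what gets you the near-identical base image before the hires-fix pass.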
They tried to make video game rentals illegal in the US. They’ve always been a shitty, anti-consumer company.
Let me know if this kind of post isn’t allowed here. It seemed fine to post per the rules.
Nintendo has always been an underhanded bully. This isn’t new.
Doesn’t seem like it.
That’s kind of unbelievable given what they say it can do.
They said they would be open sourcing it.
I looked her up to write the description, and the wiki calls the thing on her head a huge cowlick.
In moderation I hope.
I wanna believe you, but the JPEG artifacts on an image that small make it extremely difficult to even notice the distortions you’re referring to, especially at a glance. You’ve made it obvious you’re replying in bad faith, so I’m gonna leave it here. Have a good one.
That’s a lot of things to infer off of just scrolling past a 512×768 JPEG. If the image was in another context and the text had been different, no one would have batted an eye.
That isn’t extremely obvious though, especially with the JPEG compression. If you didn’t know to look, you wouldn’t have noticed it. No one scrutinizes Jeopardy text.
Yeah, but the point isn’t to look like a legit Jeopardy clue, it just has to not look generated. You can respect the height limit if you want, or break it.
Your reply also wasn’t in the form of a question. No points.
I think anything innocuous or inoffensive enough qualifies as “good looking” to most people. I mean, that’s how marketing works.
A generated image could be so good you’d never be able to tell. Like this one:
That was really cool.
I think the real complaint here is about bad looking art. Not a lot of people have an eye for picking out good-looking images. Or this person is just a huge snob.
That’s low, right? I just thought the slogan was funny.
Those might just be LoRA-merged models, not full fine-tunes. From what I heard, fine-tuning doesn’t work because the models are distilled. You’d have to find a way to undistill them before you could train them.
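For what it’s worth, “LoRA merging” just means folding the low-rank update back into the base weights: W′ = W + α·(B·A). A toy numpy sketch (the shapes and the `alpha` scale here are illustrative, not from any particular model):

```python
import numpy as np

def merge_lora(W: np.ndarray, A: np.ndarray, B: np.ndarray,
               alpha: float = 1.0) -> np.ndarray:
    """Fold a low-rank LoRA update into a base weight matrix:
    W' = W + alpha * (B @ A), where B is (d, r) and A is (r, k)."""
    return W + alpha * (B @ A)

# Toy example: a 2x2 base weight plus a rank-1 update.
W = np.zeros((2, 2))
B = np.array([[1.0], [2.0]])   # (d, r) = (2, 1)
A = np.array([[3.0, 4.0]])     # (r, k) = (1, 2)
merged = merge_lora(W, A, B, alpha=0.5)
# B @ A = [[3, 4], [6, 8]]; scaled by 0.5 -> [[1.5, 2.0], [3.0, 4.0]]
```

No gradient steps are involved, which is why merging still works on a distilled model even when actual fine-tuning doesn’t.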
What are his feelings on open source? That’s my question.