- cross-posted to:
- technology@lemmy.zip
finally, some good fucking AI
Now I have to go try it, so that I can go through this article’s paywall.
Did it work?
I don’t know if I had to do anything special for the prompt, but it just gave me a short summary starting with “Unfortunately I cannot provide the full text of the article you requested, as that would likely infringe on the copyright of the content. However, I can summarize the key points from the article.”
Maybe they asked Quora if it was legal.
In all seriousness, though, I don’t get that site’s popularity. I only ever visit Quora by accident (because Google ranks it highly) and it’s basically always garbage answers. And speaking as a developer, the UI/UX causes my eyes to roll back in my head and say, “REDRUM” in a demonic voice. It’s hard to even tell where the answer is because there’s so much superfluous shit on the page.
Agreed on the UI/UX. Really awful and unintuitive
We would be happy to connect with your technical team to help them make sure your paywalled content isn’t served to people using Poe.
What a joke, Quora needs to reevaluate whose responsibility that is.
Basic reasoning time: was it an accident?
- If not, then it was at least immoral.
- If so, then it was incompetence.
What a surprise: both possibilities point towards the project being a pile of crap.
- putting paywalled content onto the internet, where you let bots look at it but try to prevent humans from being able to see it, is plain evil.
- NYT lied to get us into Iraq, and countless other times; they are pure evil
- NYT did not contribute to the building of the internet in any way, but they see it as their god-given right to take the hard work of the nerds they hate and use it to make millions of dollars for themselves, while giving nothing back
- nobody here seems to understand the difference between a USER AGENT and a BOT. If I ask my web browser to fetch me a web page, that browser is my user agent; of course it does not respect the robots policy. The same goes if I ask an LLM to fetch a page for me: in that case the LLM is my user agent, not a bot (see the sketch below). NYT is mad because they let all bot-like user agents in; they want to be indexed, after all. So once again NYT wants the benefits of internet resources like being in the search index, but they want to give nothing back, and make actual human people suffer by degrading their experience on the web
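To make that distinction concrete, here's a rough Python sketch. The URL and user-agent strings are made up for illustration; the point is just that a polite crawler consults robots.txt before fetching, while a user agent fetching on a human's direct request does not.

```python
# Sketch of the crawler-vs-user-agent distinction (hypothetical URL).
import urllib.robotparser
import urllib.request

URL = "https://example.com/some-article"  # made-up article URL

# Crawler behavior: check robots.txt first and honor what it says.
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()
if rp.can_fetch("MyCrawlerBot", URL):
    print("Crawler: allowed by robots.txt, fetching...")
else:
    print("Crawler: disallowed by robots.txt, skipping.")

# User-agent behavior: a human asked for this page, so fetch it directly;
# there is no robots.txt check, same as a browser.
req = urllib.request.Request(URL, headers={"User-Agent": "Mozilla/5.0"})
with urllib.request.urlopen(req) as resp:
    print("User agent fetched", len(resp.read()), "bytes on the user's behalf")
```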
I’m sure you’re one of the ones that will complain when all journalism is replaced by AI, while lacking the basic understanding of why that had to happen.
Does Microsoft think things behind paywalls are fair game for LLMs too? (I know this isn’t Microsoft, but I bet OpenAI got around paywalls toooo…)
So long as it isn’t their own, yeah, probably.