you can find your user info in the /api/v3/site response. the /api/v3/user endpoint requires a name or person id.
i recommend checking out https://join-lemmy.org/api/classes/LemmyHttp.html
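here's a minimal sketch of both calls, assuming a 0.19+ instance and the response shapes from the lemmy-js-client types (the domain and the token handling are placeholders, not part of the original answer):

```typescript
// your own user info comes back in the my_user field of /api/v3/site
// when the request is authenticated (JWT handling here is an assumption)
const jwt = process.env.LEMMY_JWT;

const site = await fetch("https://example.instance/api/v3/site", {
  headers: { Authorization: `Bearer ${jwt}` },
}).then((r) => r.json());
console.log(site.my_user?.local_user_view.person.name);

// /api/v3/user needs a username or person_id to look someone up:
const user = await fetch(
  "https://example.instance/api/v3/user?username=someuser"
).then((r) => r.json());
console.log(user.person_view.person.id);
```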
you can only restrict a community to local users; there's no way to prevent your users from interacting with remote communities.
you'd have to either disable federation or set up a script to automatically remove all remote communities, but that also won't be a per-user thing, just a per-instance thing.
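a rough sketch of what such a per-instance cleanup script could look like, assuming admin credentials and the 0.19-style community/list and community/remove endpoints (shapes based on the lemmy-js-client types; verify against your version before running anything):

```typescript
// rough sketch: remove every remote community on the local instance.
// run with an admin account; endpoint shapes are assumptions from 0.19 types.
const base = "https://example.instance"; // placeholder domain
const jwt = process.env.LEMMY_JWT; // assumed: an existing admin login token

for (let page = 1; ; page++) {
  const list = await fetch(
    `${base}/api/v3/community/list?type_=All&limit=50&page=${page}`,
    { headers: { Authorization: `Bearer ${jwt}` } }
  ).then((r) => r.json());
  if (list.communities.length === 0) break;

  for (const view of list.communities) {
    if (view.community.local) continue; // keep local communities
    await fetch(`${base}/api/v3/community/remove`, {
      method: "POST",
      headers: {
        Authorization: `Bearer ${jwt}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ community_id: view.community.id, removed: true }),
    });
  }
}
```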
unfortunately, starting fediverse software from scratch on the same domain doesn't really work well with other fediverse software without local fixes on their end.
activitypub uses cryptography for authentication and there isn’t a standard for changing keys / reusing identities, so different software will deal with this in different ways.
reusing URLs for posts and comments is even more problematic and will definitely cause broken federation with other lemmy instances as well.
if it was decided to recreate an instance from scratch in the future that’d best be done on a new (sub-)domain.
if the database was saved it should work just fine to rebuild the instance from that, it’ll probably just take a few days to restart federation everywhere.
I’m not suggesting either way, just providing some technical background.
You basically can’t if your instance was set up before 0.19.4, as there won’t be any association between users and uploads for older uploads. You also can’t do this without breaking thumbnails everywhere unfortunately.
The latest Lemmy version has a fix where thumbnails are now stored at a resolution that's actually reasonable for thumbnails, but this does not retroactively shrink older thumbnails, which may be quite large.
It's possible to pull image aliases from the DB and ignore them when iterating over aliases within pict-rs, but these will only be manual uploads, not automatic uploads like generated thumbnails. For posts by local users, deleting thumbnails will also end up breaking them on 0.19.5+ instances, as those reuse the original thumbnail url.
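a minimal sketch of the DB side of that, assuming a recent schema where manual uploads land in a local_image table with a pictrs_alias column (table and column names are assumptions here; check your actual schema first):

```typescript
import { Client } from "pg";

// sketch: collect the aliases of known manual uploads from the lemmy DB,
// so they can be skipped when iterating over aliases within pict-rs.
// "local_image" / "pictrs_alias" are assumed names from recent schemas.
async function knownAliases(connectionString: string): Promise<Set<string>> {
  const client = new Client({ connectionString });
  await client.connect();
  const { rows } = await client.query("SELECT pictrs_alias FROM local_image");
  await client.end();
  return new Set(rows.map((r) => r.pictrs_alias));
}
```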
fwiw, the estimated number only states the maximum number of activities an instance can be behind. the real number can be lower, but not higher (unless sending is entirely broken on the instance being checked).
each activity being sent has a numeric id in the database. lemmy has an api that returns the id of the last activity that was either successfully sent to an instance or skipped because it didn't need to be sent (e.g. a pm to a user on a different instance). there may also be holes in activity ids due to postgres implementation details for auto-incrementing sequence ids.
to determine the highest known activity id, you can go through the successfully sent ids for all instances in the response and take the highest number across them all. the difference between that and the number for a specific instance is the estimate.
depending on the lemmy version and timing of the action, it can take up to 30 seconds for the activity queue to deal with new activities, so on a somewhat busy instance the delta is likely rarely going to be zero.
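a minimal sketch of that calculation, assuming the 0.19+ /api/v3/federated_instances response shape (field names are taken from the lemmy-js-client types; treat them as assumptions):

```typescript
// sketch: upper bound on how many activities `target` is behind from the
// point of view of `source`. assumed fields: federated_instances.linked[]
// entries with domain and federation_state.last_successful_id.
type FederationState = { last_successful_id?: number };
type InstanceEntry = { domain: string; federation_state?: FederationState };

async function estimateLag(
  source: string,
  target: string
): Promise<number | null> {
  const res = await fetch(`https://${source}/api/v3/federated_instances`);
  const data = await res.json();
  const linked: InstanceEntry[] = data.federated_instances.linked;

  // the highest id successfully sent to *any* instance approximates the
  // newest activity the source knows about (ids can have gaps)
  const highest = Math.max(
    ...linked.map((i) => i.federation_state?.last_successful_id ?? 0)
  );

  const sent = linked.find((i) => i.domain === target)?.federation_state
    ?.last_successful_id;
  if (sent === undefined) return null;

  // maximum number of activities behind; the real number can be lower
  return highest - sent;
}
```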
while this is generally what most people talk about when speaking of defederation, admins can also decide to remove communities locally without blocking the entire instance.
you might find some inspiration from https://breezewiki.com/ - either its codebase directly or using it as an intermediary while scraping
@fmstrat@lemmy.nowsci.com there’s also rss feeds for communities
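for example, a community feed URL typically follows this pattern (assumed from current lemmy routes; the sort parameter is optional):

```typescript
// hypothetical example community; the .xml suffix and sort param are assumed
const feedUrl = "https://lemmy.ml/feeds/c/some_community.xml?sort=Active";
```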
lemmy.ml doesn’t use cloudflare, that’s strange.
i’ve also never had issues with this when looking at instances that do use cloudflare.
pretty much, yeah. lemmy has a persistent federation queue instead of fire-and-forget requests when activities get generated. this means activities can be retried if they fail, which allows for (theoretically) lossless federation even if an instance is down for maintenance or other reasons. if mbin has a similar system, maybe they could expose that as well, but unless that system represents the data in a fairly similar way, it will be challenging to integrate it into a view like this without creating a dedicated mbin dashboard.
lemmy has a public api that shows the federation queue state for all linked instances.
it provides the internal numeric id of the last activity that was successfully sent to an instance, as well as the timestamp of the activity that was sent, and also when it was sent. it also includes data like how many times sending was unsuccessful since the last successful send. each instance only knows about its own outbound federation, but you can just collect this information from both sides to get the full picture.
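a small sketch of checking an instance pair from both sides, reusing the same assumed /api/v3/federated_instances shape as in the earlier sketch (findState is a hypothetical helper, not an existing API):

```typescript
// sketch: each side only reports its own outbound queue, so query both
// directions to get the full picture of federation between two instances
async function findState(source: string, target: string) {
  const res = await fetch(`https://${source}/api/v3/federated_instances`);
  const data = await res.json();
  const entry = data.federated_instances.linked.find(
    (i: { domain: string }) => i.domain === target
  );
  return entry?.federation_state ?? null;
}

const aToB = await findState("lemmy.ml", "lemm.ee");
const bToA = await findState("lemm.ee", "lemmy.ml");
console.log({ aToB, bToA });
```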
there is also https://phiresky.github.io/lemmy-federation-state/site to look at the details provided by a specific instance.
it’s not just lemmy.world.
of the larger instances, the following have trouble sending activities to lemm.ee currently:
i pinged @sunaurus@lemm.ee on matrix about 30h ago already about the issues with federation from lemmynsfw.com, as it was the first one i noticed, but I haven’t heard back yet.
that's just a good troll :)
@Gullible@sh.itjust.works has WolfdadCigarette@threads.net set as their display name
at least the image resizing topic has recently been fixed in lemmy; thumbnail sizes are limited (at the time of thumbnail creation) in the latest release. I'd have to look closer at the other stuff. the api part is unlikely to have changed and will affect all frontends, but the js part should differ depending on the frontend. some instances already use other frontends by default, and there is also a replacement for lemmy-ui being worked on (lemmy-ui-leptos), but I don't know how they compare either.
it should also be taken into account how much of this is cacheable, as the static files will then typically only affect the first page load.
I can totally understand the issues in general though, I’ve been living with a 64kbps uplink for several years in the past.
> requires sending ~25-fold less data per post
what are you referring to with this? AP traffic?
do you have some more information about this?
since you’re on programming.dev, you may be affected by https://programming.dev/post/20515601
this doesn’t just affect lemmy.ml.
it seems that lemmy.ml -> lemm.ee was somehow fixed yesterday, but there are several other instances that also have issues sending to lemm.ee:
you’ll probably want to wait for 0.19.7, which will fix at least https://github.com/LemmyNet/lemmy/issues/5182.
https://github.com/LemmyNet/lemmy/issues/5196 is also something to keep an eye on.
peertube embeds are supposed to be fixed in lemmy 0.19.6, so when updating to that lemmy version they should start working again
i assume this was done after updating the other tables referencing this table, such as comments, votes, saved posts, as previously discussed on matrix?
while it may be omitted here for simplicity, it can be dangerous not to mention that for others who might find this in the future when dealing with index corruption on their own: if they don't fix all references, that will result in data loss.
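as a sanity check, something like this can list every table with a foreign key pointing at the rebuilt table, using the postgres catalogs via node-postgres ('post' is just a hypothetical example table):

```typescript
import { Client } from "pg";

// sketch: find all tables with a foreign key referencing the given table,
// to double-check that every reference was updated during the rebuild
async function referencingTables(client: Client, table: string) {
  const { rows } = await client.query(
    `SELECT conrelid::regclass AS referencing_table
       FROM pg_constraint
      WHERE contype = 'f' AND confrelid = $1::regclass`,
    [table]
  );
  return rows.map((r) => r.referencing_table);
}

// usage: await referencingTables(client, "post")
```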