LGPL says the user needs to be able to replace the LGPL’ed code with their own modified version. You can’t do that on an iPhone. I think that’s the issue there.
Also the SDL devs are themselves game developers/porters (I think that came out of Loki games), so it’s not so hard to understand why they would relicense their stuff, it basically makes their own life easier also.
It’s so commercial game devs can ship it on iOS and consoles. I think Apple makes it impossible to comply with the LGPL. Probably the same thing for Sony, Xbox and Nintendo.
I’m not too mad even though I’m a GPL fanboy. If it helps SDL adoption among game devs then that will help portability and Linux support in the long run. It will also help games run better on Linux: plenty of games that don’t use SDL have annoying issues with my tiling WM for example.
With SDL 2, they even thought about the future compatibility issues caused by static linking: even if a game statically links SDL, you can still override the static copy with your own dynamic library, and the statically linked SDL will forward all function calls to the user-supplied one.
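For the record, the mechanism (as I understand it) is SDL2's "dynamic API": even a statically linked SDL checks the `SDL_DYNAMIC_API` environment variable at startup and, if it's set, routes every SDL call through the shared library it points to. Roughly:

```shell
# Point a game's statically linked SDL at your own build of SDL2.
# (The library path here is just an example; use wherever your
# libSDL2 actually lives.)
SDL_DYNAMIC_API=/usr/local/lib/libSDL2-2.0.so.0 ./some-game
```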
I’ve seen hot takes about this on Reddit where people bring up two points: that distro packagers don’t really do meaningful security audits anyway, and that distro packages are always outdated.
To the first point, I want to look at the evidence, which clearly suggests that malware and the like basically never makes it into any Linux distro. This probably has less to do with security audits and expertise than with the packagers’ desire to package software that is actually useful and legit. The process acts like a general heuristic spam filter that throws out sketchy shit as part of assessing whether software is useful and trustworthy, performed by culturally aware people. Those people can’t be tricked the way a shitty spam filter can.
To the second point, I think some bleeding-edgelords undervalue stability and ignore the amount of work churn actually causes every day. Updating too often creates more work in many cases, though updating very rarely clearly causes problems too. There’s probably a middle ground here.
Plus, this whole argument is arguably tangential to the actual point: there are rolling-release distros that are only days or weeks behind upstream, and they still don’t suffer the spam problem of random strangers being allowed to upload basically any crapware without human supervision.
First of all, this is probably a bad sign for the health of your drive: you should look at the dmesg output and the drive’s SMART diagnostics. The relevant package is called smartmontools. Also, make a backup now if you haven’t yet.
If it’s just the filesystem that’s borked, try fsck like that other person said.
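Roughly what I’d run (device names are examples — substitute your actual drive — and smartctl needs root):

```shell
# Kernel messages often show I/O errors for a dying drive.
dmesg | grep -iE 'ata|i/o error'

# Full SMART report: look at the reallocated/pending sector counts
# and whether the overall health assessment passes.
smartctl -a /dev/sda

# Optionally kick off a self-test, then check the results later
# with -a again.
smartctl -t short /dev/sda

# Only after backing up, and only on an UNMOUNTED filesystem:
fsck /dev/sda1
```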
I like Taler, but it’s specifically designed not to give anonymity to sellers. In order to verify that you got a valid token (meaning it was issued by an accredited bank and has not already been spent), you need to deposit it with the bank.
Now, if you get money from a friend you trust to have given you a valid token, that’s not an issue: you can just use it to pay for stuff, and nobody will know how you got it. A seller, however, will want to verify the token immediately, or they might end up giving stuff away for free.
I guess for donations it could work, since you’re not giving anything in return and so can’t be scammed, but there’s an obvious problem: you could be sitting on worthless tokens. You don’t know what funds you actually have until you try to spend them, and people will probably get pretty annoyed with you if most of your tokens turn out to be fake. Not sure how one would protect against this sort of spam.
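A toy model of the trade-off (the names and structure here are mine, not Taler’s real protocol — actual Taler uses blind signatures and an exchange): the only way the seller can be sure a token is good is to deposit it with the issuer, which is exactly the step that costs them anonymity.

```python
# Toy model of double-spend detection. A token is only provably good
# once deposited with the issuer; holding it tells you nothing.
class ToyIssuer:
    def __init__(self):
        self.issued = set()
        self.spent = set()

    def withdraw(self, token):
        """A customer withdraws a token (the issuer records it)."""
        self.issued.add(token)

    def deposit(self, token):
        """A seller deposits a token to verify it; succeeds only once."""
        if token not in self.issued or token in self.spent:
            return False  # fake, or already spent elsewhere
        self.spent.add(token)
        return True

issuer = ToyIssuer()
issuer.withdraw("coin-1")

assert issuer.deposit("coin-1") is True   # first deposit succeeds
assert issuer.deposit("coin-1") is False  # double-spend rejected
assert issuer.deposit("coin-2") is False  # never issued: worthless
```

A donation recipient who never deposits stays anonymous but, as above, can’t tell "coin-2" apart from a real token until they try.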
As I said, every file read from disk, be it an executable, image or whatever gets cached in RAM automatically and always.
Having said that, if you read a file using read(2) (or any API that uses read() internally, which is most of them), you end up with two copies of the file in RAM: the one the OS put in the disk cache, and the one you created in your process’s memory. You can avoid the second copy by using mmap(2): the disk-cache copy of the file gets mapped directly into your process’s address space, so the RAM is shared between your process and the disk cache.
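To make the two paths concrete, here’s a sketch using Python’s wrappers around these syscalls (the demo file is just a stand-in for any file on disk):

```python
import mmap
import os
import tempfile

# Create a small demo file on disk.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello page cache")
    path = f.name

# read(2) path: the kernel copies pages from the page cache into
# this bytes object, so the data now exists twice in RAM.
with open(path, "rb") as f:
    copied = f.read()

# mmap(2) path: the page-cache pages are mapped into our address
# space; reading through the mapping touches the cache's copy
# directly instead of duplicating it.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        mapped = bytes(m[:])

assert copied == mapped  # same contents either way
os.unlink(path)
```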
You can also give hints to the kernel’s disk-cache subsystem using fadvise(2). Don’t, though, unless you know what you’re doing.
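For example (a sketch of a hypothetical stream-once workload, Linux-only since that’s where posix_fadvise is available — and again, these hints are easy to misuse):

```python
import os
import tempfile

# Make a demo file standing in for some large file we stream once.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * (4096 * 16))
    path = f.name

fd = os.open(path, os.O_RDONLY)
try:
    # Tell the kernel we'll read sequentially, so it may read
    # ahead more aggressively.
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
    data = os.read(fd, 4096 * 16)
    # We're done with these pages; the kernel may drop them from
    # the page cache early instead of evicting something useful.
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
finally:
    os.close(fd)
os.unlink(path)
```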
This doesn’t explain what a disk cache (afaik often referred to as the page cache) is, so here goes: when any modern OS reads a file from disk, the file (or the parts of it that are actually read) gets copied to RAM. These “pages” only get thrown out (“dropped”) when the RAM is needed for something else, usually starting with the page that hasn’t been accessed for the longest time.
You can see the effect of this by opening a program (or large file), closing it, and opening it again. The second time it will start faster because nothing needs to get read from disk.
Since the pages in RAM are identical to the sectors on disk (unless the file has been modified in RAM by writing to it), they can be dropped immediately and the RAM reused for something else when needed. The downside, obviously, is that a dropped file has to be read from disk again the next time it’s needed.
Not an expert, but from my understanding of how this worked for the Mario 64 project:
I like how that material design stuff looks on my Android. Bite me.
To the people complaining they can’t figure out what is or isn’t a button: how often does that really happen to you? Even the computer-illiterate somehow manage on their Androids and on websites.
Windows 9x, which gets brought up in these discussions as an example of some sort of perfection, had plenty of “buttons” with no 3D effect, including menus, toolbar icons, systray icons and desktop icons. WordPad originally drew all its toolbar icons with a 3D button effect; it looked like shit, and Microsoft stopped doing that in ’97 at the latest.
There are considerations other than clarity of intent, like not distracting people from the content with all that ornamentation.
My suggestion would be to put a picture of a baroque church as your background to even out the minimalism if it bothers you so much.
I don’t understand what the endgame is supposed to be here. To achieve its stated goals (demilitarization, denazification, catching the people on their list of enemies), Russia needs to occupy the country — which they said they won’t, which makes no sense. There’s no way Ukraine’s govt can agree to or implement these demands.
So Russia will in fact need to occupy Ukraine and install a puppet govt. But how are they ever going to get out of there again? That new government will be removed as soon as they leave.
So long-term occupation it has to be, which will be a total disaster, and that’s on top of the sanctions. Russia just fucked itself, no? How does any of this make sense?
Of course they are going to do that. They are not viable as independent states, and integration is probably more practical than trying to administer them as separate puppet states. It’s also more permanent.
The question is more like: How fast are they going to do that and what territory do they claim?
Calling Ukraine neutral after the Western-backed coup of 2014, in which the US candidate became PM and Russian-hating Neonazis got into government is some twisting of facts. The whole Maidan (which was supported massively by Western governments, press and NGOs) was about aligning with the West for fuck’s sake! The invasion only happened after that.
Debian allows setting up disk encryption during install, for example. Never heard of VeraCrypt; I don’t think that’s popular on Linux. If you’re interested in how it works under the hood on Linux, the relevant parts are dm-crypt (the kernel’s device-mapper encryption target), LUKS (the on-disk header/key format), and cryptsetup (the userspace tool that manages both).
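In practice you drive the kernel’s dm-crypt/LUKS machinery with the cryptsetup tool. A rough sketch (the device name is a placeholder, and luksFormat destroys the partition’s contents — the installer does the equivalent of this for you):

```shell
# Format a partition as a LUKS container (DESTROYS its contents,
# and prompts for the passphrase).
cryptsetup luksFormat /dev/sdX2

# Unlock it; the decrypted view appears as /dev/mapper/cryptdata.
cryptsetup open /dev/sdX2 cryptdata

# Put a filesystem on the decrypted mapping and mount it.
mkfs.ext4 /dev/mapper/cryptdata
mount /dev/mapper/cryptdata /mnt

# Later: unmount and lock again.
umount /mnt
cryptsetup close cryptdata
```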
Yes, yes and yes. And Mozilla have been selling out their users’ data since the day they took money from Google.
This is honestly what annoys me more than anything about Mozilla: they pretend to be champions for privacy, but they aren’t. And people fall for it. They are controlled opposition. They are the social democrats of the privacy world: channeling privacy supporters into their compromise (and compromised) position and painting the radicals as unreasonable dreamers.
If they were to finally die, that would probably be good for online privacy. A real, non-corrupt free-software fork of Chromium could take off, with built-in ad blocking and actually good privacy defaults. Firefox is sucking the oxygen out of the room right now.
Ultimately all tracking and data collecting besides what’s absolutely necessary needs to be declared 100% illegal. I have no hope Mozilla will help in this fight at all.
Linux accepting patches from Facebook that improve btrfs isn’t the same thing as working with Facebook on ad-tech tracking proposals.
I haven’t read the thing and I’ll be waiting for someone decently trustworthy to explain this to me, but I consider tracking and advertising to be immoral, so this doesn’t bode well.
Yeah, looking at this list, there are multiple things that could cause worse outcomes in practice.
The most problematic IMHO are the I/O priority and niceness tweaks. Other processes like Xorg and the compositor (and maybe even kernel threads, though I’m not sure you can starve those) also need I/O and CPU while a game is running, and they get starved if the game hogs those resources. I remember that when running the original StarCraft through Wine, you needed to de-nice it to avoid starving the X server of CPU.
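The CPU side of that fix is just raising the game’s nice value (I/O priority is usually adjusted with the separate `ionice` tool). A sketch, applied here to the current process for illustration:

```python
import os

# Read this process's current nice value (0 = "this process").
before = os.getpriority(os.PRIO_PROCESS, 0)

# Raise the nice value by 5, i.e. LOWER the scheduling priority,
# leaving more CPU for Xorg, the compositor, etc. Raising niceness
# needs no privileges; lowering it requires root/CAP_SYS_NICE.
# The kernel clamps the value to the maximum of 19.
os.setpriority(os.PRIO_PROCESS, 0, before + 5)

after = os.getpriority(os.PRIO_PROCESS, 0)
```

On a running game you’d do the same thing from outside with `renice +5 -p <pid>` instead.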
Of course some of the other stuff is also sketchy since it could increase temps and therefore induce throttling. Depends on how good the cooling is I suppose.
A low-latency scheduler can increase overhead and thus reduce average fps, but it shouldn’t really increase stuttering (probably the opposite), so I don’t think that’s really the problem.