

Many thanks and congrats !!!


The best part, at least for me:

Loongson has publicly said that it was adopting the RISC-V architecture for its next projects. While it was disclosed in the context of CPUs, the architecture is flexible enough for GPUs, too. So, who knows?

I have mixed feelings about this blog post…

While decentralization is good for many reasons, and distributed systems are even better than decentralized ones, fascism, nazis finding a place to gather, and extremists getting a voice are not a decentralization problem or weakness. Yet the post sort of accepts that framing as a given, and sort of offers workarounds through quarantining and the like.

People will always find a place to gather and make themselves heard. Before there were centralized social media sites, and even before the internet, there were fascists and nazis. And when big corps started to ban the people they consider unacceptable from their spaces, those people built and found other places. So whether centralized or not, people will find ways to gather and share their thinking, whether or not big corps, big media, or common sense find those thoughts acceptable.

The real problem with centralized services is giving too much control and power to a few actors, who have their own agenda, their own interests, and their own criteria, without giving a damn about what's true or not, what's common sense or not, or what benefits the majority or not. Big media has always concentrated too much power, and now big tech corps do the same, and their interests are not there to represent everybody's interests. Today's banning culture, censoring culture, and dislike of different opinions are not caused by centralization alone, but they sure are empowered and accelerated by it, and they are as corrosive to society as fascism and extremism; that mono-thinking and mono-culture might actually be a form of the very things culturally regarded as bad. Not to mention how centralization affects privacy and adds far more risk to the data we would like to preserve, given single points of control, single points of failure, and single points of risk.

To finish, I hoped for much more given such a title, but oh well…



Well, risc-v is a nice open ISA, not open source HW. Even when some companies post their chisel or rtl code on github, that doesn't necessarily make the hardware open source. There are vendors with proprietary risc-v based IP already.

On the other hand, truly open HW is really hard to achieve. The fabs' recipes for different process technologies are like secret sauce. And verifying the produced HW against the chisel/rtl code is really hard, starting from the fact that the business ecosystem is mostly fabless (except for intel and a few IDM companies around). Companies designing chips usually include proprietary IP from vendors (who knows what they add into their IP), so there's no way to verify that code, and it's impossible for every company to design everything: too much complexity, and too much expertise spread across a diverse ecosystem of different technologies. Then, once the rtl and gate-level simulations are somehow verified, companies send the design to a 3rd party for the "physical" design, including placement and routing plus "mask" design. As long as there's equivalence between the original design (which already incorporated proprietary IP) and the one sent back by the 3rd party, it goes to the fab, though the 3rd party could have introduced something, obscured so that equivalence is not broken. And finally, even if the fab doesn't introduce anything the original company is not aware of, the recipe for 14nm, 7nm, or 2nm, and everything related to how the fab achieves what it does, is not open either (there are of course theory, papers, and talks about the processes, but what the fab finally does has to be protected against competitors). All the original company can do is verify the functionality of the resulting silicon, from post-Si verification all the way down to product verification. But several key pieces were proprietary IP and libraries to start with, the design goes through several hands until it lands at the fab, and what ended up in the final silicon is not fully known; all you can do is verify, under certain scenarios, that the thing does what it was intended for, :)

So in the end, fully open HW is really hard to get. But an open ISA is better than the current status quo, and hopefully it might motivate more openness in the HW industry…

cross-posted from: https://lemmy.ml/post/685110 > LineageOS 20 also introduces a new camera app called Aperture, which is based on Google’s CameraX library and provides a closer-to-stock camera experience on many devices. Aperture was developed by SebaUbuntu, LuK1337, and luca020400, and has been integrated into LineageOS and adapted to a wide range of supported devices. https://lineageos.org/Changelog-27/

cross-posted from: https://lemmy.ml/post/680798 > [r/electricvehicles](https://libreddit.spike.codes/r/electricvehicles/comments/zxe7ga/kia_gives_us_a_closer_look_at_the_ev9_its_first)



Solving grammar’s greatest puzzle
> “If the solution is complicated, you are probably wrong.”





In general, I don't like the idea of flatpak, snapcraft and appimage packages. The details differ between them, but in the end they all suffer one way or another from huge binary dependency blobs, whether shipped with the app itself or installed from the package provider. At some point I tried to install Liri from Flatpak, and it was a nightmare of things having to be installed, when I already had most of them natively built by the distro I used.

Contrary to opinions from Linus himself, I prefer SW built against the same system libraries and dependencies, rather than each app coming along with its own set of binary dependencies. Getting GNU/Linux to behave like MS-Win, where you can install whatever binary from whatever source, perhaps even duplicating a bunch of stuff you already have on your system, is crazy. One thing that gets solved more easily with contained apps is depending on the same things but in different versions, and that to me is not ideal either. As distros do, one should avoid as much as possible having different versions of the same SW; if really needed, rename one to include the version as part of the name, or something, but mainly avoid having the same thing in multiple versions all over. Guix handles that more elegantly of course, but I haven't had the time, nor the guts, to go for Guix yet (still on my list of pending stuff).

The other thing is that, although nowadays everything comes with a signature to check, distro-provided packages are built from source, so besides minimizing the amount of stuff needed, one can always look at how the packages are built (on arch and derivatives, through the PKGBUILDs and companion files), then tweak and build oneself. For example, fluxbox's current head of master, and for a while back, doesn't play nice with lxqt; with the help of the fluxbox devs I found the culprit commit, reverted it, and still applied the same distro recipe with my own patch, and moved on. No matter how well signed, binary packages are not as flexible, besides the fact that several are just proprietary, and one might not even be aware, since the move is to be more MS-Win like, even with auto updates and such…
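As a hypothetical sketch of that revert-then-rebuild workflow (the repo, file, and commit contents below are made up for illustration; in an Arch-style PKGBUILD the revert would typically go into the prepare() function before building as usual):

```shell
# Simulate finding and reverting a culprit commit in a throwaway repo.
# Against the real project you'd revert on your checkout (or in prepare()),
# then rebuild with the unchanged distro recipe plus your patch.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email "you@example.com" && git config user.name "you"
echo "plays nice with lxqt" > feature.c
git add feature.c && git commit -qm "good state"
echo "breaks lxqt" > feature.c
git commit -qam "culprit commit"
culprit=$(git rev-parse HEAD)       # in practice found e.g. via git bisect
git revert --no-edit "$culprit"     # undo just that one commit
cat feature.c                       # back to the working version
```

The nice part is that the distro recipe itself stays untouched; only a small revert patch rides along with it.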

Building with minimal systems and ecosystems in mind, with mostly free/libre or at least open source SW, makes things much better for me. One can still end up with a bloated, huge system if wanted, but at least not with a bunch of unnecessary duplicates.

Ohh well, if only TOR exit nodes couldn't be used to spy on users, if government agencies would refrain from hosting exit nodes (and other nodes), and if the network prevented anyone from monopolizing exit nodes (at times a single user has been found controlling a good number of them) or hijacking them.

I'm really glad people investigate how to protect privacy better, and TOR has contributed to that. But despite the efforts to sanitize the TOR network, as a user one really doesn't know if one is getting the opposite effect. Agencies with resources are also able to deanonymize TOR users… So it's really hard nowadays, as a user, to trust any internet mechanism intended to protect one's privacy. VPNs are no better, perhaps even worse, since there it's about trusting a centralized service, which has it even easier to spy on users.

Privacy on the net is near impossible nowadays, :(

Easily removed through config. BTW, I don't remember if at some point I configured that myself, but I don't get any. Under settings -> homepage, you can clear the box for "sponsored shortcuts", and even clean up the homepage further…
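The same choice can be persisted in a profile's user.js instead of the UI. A minimal sketch, assuming the pref names current Firefox releases use for the sponsored shortcut toggles (check about:config on your version; the profile path below is a placeholder):

```shell
# Append the "no sponsored shortcuts" prefs to a profile's user.js.
# Pref names are assumptions from current Firefox; verify in about:config.
profile="./my-librewolf-profile"   # substitute your real profile directory
mkdir -p "$profile"
cat >> "$profile/user.js" <<'EOF'
user_pref("browser.newtabpage.activity-stream.showSponsoredTopSites", false);
user_pref("browser.newtabpage.activity-stream.showSponsored", false);
EOF
grep -c "Sponsored" "$profile/user.js"   # -> 2 on a freshly written file
```

user.js is read at every browser start, so the setting survives profile resets of the corresponding about:config values.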

I use silence, :) With it I can block SMS from phone numbers once I've received them, but I was looking for something which would have prevented the SMS from reaching me in the 1st place, hehe. Like what "Yet Another Call Blocker" is meant to do for phone calls. Granted, "Yet Another Call Blocker" still lets through some phone calls I really would have preferred blocked, but it's better than nothing I guess; it has blocked some calls…

DuckDuckGo? No thanks: first its search engine, and now a browser app based on blink…

Better use Mull, or any other FF based browser…


The laptop is $1500: tomshardware reference. Well, it's been a while since I last bought a laptop for personal use, but I wonder whether there are $500 laptops worth acquiring.

That said, please remember risc-v doesn't mean open source CPU, it just means open ISA. Actually, there are many vendors now offering risc-v CPU IP and other blocks, such as Cadence, Siemens (previously Mentor), and others, but those are not open source. And even if the CPU were open source, there are other components which might not be. And there's the matter of required firmware binaries…

Looking for something fully open source, both HW and SW, and without binary blobs, is sort of hard these days. Hopefully that's not too far away…

If looking for risc-v, though, Roma it is, since there's nothing close to it yet, :)

SerenityOS author: “Rust is a neat language, but without inheritance and virtual dispatch, it’s extremely cumbersome to build GUI applications”
Just sharing from [r/rust](https://libreddit.spike.codes/r/rust/comments/yuxb8a/serenityos_author_rust_is_a_neat_language_but)

Seems like /e/, right? Perhaps less LOS based, but still AOSP based…

AFAIK, Manjaro ARM is the current default distro, and it's a rolling release, so you shouldn't worry about releases…


Octave is a drop-in replacement for matlab, which is different from mathematica, but it might be considered an alternative. The "m" scripts one might write for matlab are supposed to work fine on octave. I'm all for octave, :)

I’ve been looking for a p2p alternative, which would allow a simple workflow. So I had some hope when noticing radicle. But it builds on top of the blockchain hype, I’m afraid. This cryptopedia post shows things I really don’t like.

It's true git itself is sort of distributed, but developing a workflow on top of pure git is not that easy. Email-based ones have been worked on, but not everyone is comfortable with them.

A p2p approach using openDHT would have been my preference. Anyway, I thought radicle could be it, but so far I don't like what I'm reading, even less with whom they are partnering:

> Radicle has already partnered with numerous projects that share its vision via its network-promoting Seeders Program (a Radicle fund), including: Aave, Uniswap, Synthetix, The Graph, Gitcoin, and the Web3 Foundation. The Radicle crypto roadmap includes plans to implement decentralized finance (DeFi) tools and offer support for non-fungible tokens (NFTs). With over a thousand Radicle coding projects completed, this RAD crypto platform has shown that it’s a viable P2P code collaboration platform, one that has the ability to integrate with blockchain-based protocols.

Perhaps I'm just too biased. But if there's another p2p one, hopefully free/libre SW and non-blockchain, then I'd be pretty interested in it…

well, it seems sourcehut will have a web-based workflow, or so it seems from this postmarketos post:

> We talked to Drew DeVault (the main developer of SourceHut) and he told us that having the whole review process in the web UI available is one of the top priorities for SourceHut

SourceHut is prioritising to implement an entirely web-based flow for contributors.

These things don't happen in one day, so don't hold your breath yet, but it seems it's coming at some point…

Got it. Many thanks !

All the ways of syncing bookmarks that I'm aware of go through a server; I'm not aware of a distributed, or p2p, way to sync to date. At least they all encrypt the contents before sending them to the server.

And that being the case, I see no reason not to use the FF syncing mechanism, so no need for extensions at all. Just comment out:


on librewolf.cfg, or set it to true in librewolf.overrides.cfg, or edit about:config. Set up a FF account, and make sure you ONLY sync bookmarks, which might be better than syncing additional stuff such as history…

Up to you. I really don't like any of the options; just exporting/importing seems fine to me, but that's not really syncing, and it's the sort of thing I usually forget to do, so the FF syncing it is for me.

I'd be really interested to know of a p2p one. I don't sync with mobile, where I keep no bookmarks, so p2p would be just great, :)


From a technical standpoint you're right, particularly because of the limitations imposed by cisc x86 instruction decoding, which, given its complexity, doesn't allow as many concurrent decoders as risc does, besides risc's many other advantages. But also from a technical standpoint, take into account that current intel x86 implementations are really risc cores wrapped to support cisc. Still, several limitations remain, particularly while decoding.

I believe apple's architecture and design decisions go beyond just risc, like sharing the same memory among different processors (cpus, gpus, and so on). That gives the M1, M2, and coming SoCs an edge… So it's not just about risc…

But my opinion was more about the current sanctions, technology bans, and all those sorts of artifacts used to restrict and constrain Chinese technology. SoCs are not as short-term as one might think, since the investment is not cheap. So to me, since several years back, they should have focused on risc-v, to avoid such huge non-technical problems. They have all the resources necessary to pursue a different path than the more costly and limiting one. Of course changing takes time, but again, they have everything they need to do so. That's why it was a surprise to see investment in x86 compatible CPUs. But hey, they know their business better than anyone else, :)

why develop an x86 compatible arch? Wouldn't it be better for China to focus on risc-v? They even had Loongson, but it's MIPS based…

Not sure if intel will demand something, or if it actually licensed something (not their business model), so short term this might help keep some x86 SW running, but mid and long term this doesn't make much sense, does it?

@fishonthenet@lemmy.ml I just noticed on “preferences -> general”:

CanvasBlocker requires Container Tabs

And then an option selected and grayed out (I can’t deselect it):

Enable Container Tabs

Which also comes with a Learn more URL from the Librewolf FAQ, stating:

Why isn’t First Party Isolate enabled by default?

FPI is not enabled by default as we use dFPI, and the two do not work well together. dFPI is a newer implementation and it causes less breakage, plus it is included by default when using Tracking Protection in strict mode.

Please also notice that dFPI makes containers and container extensions redundant, unless you want to protect your privacy when visiting the same website multiple times during the same browsing session.

I use RFP, but I have webgl enabled. Does it still make sense to use the CanvasBlocker extension, disabling all APIs except the webgl one? From the FAQ, I don't read that it'll break anything, since I believe containers don't enable FPI by default, so I believe there's no conflict with dFPI. But 1st, I'm not sure, and 2nd, the comment about containers being redundant makes me wonder.

Can you pls comment on it, so perhaps I understand better if what I have is OK or I should change something? Thx !



how about skip redirect add-on?
I'm wondering if what [Skip Redirect](https://addons.mozilla.org/en-US/firefox/addon/skip-redirect) provides is something Librewolf already provides through some settings, or, if not, whether uBlockOrigin does, and how. And if neither Librewolf nor uBlockOrigin has such settings, does the add-on make sense? I installed it as recommended by [arkenfox](https://github.com/arkenfox/user.js/wiki/4.1-Extensions). I know Librewolf doesn't exactly follow arkenfox, but it's a reference anyway, :) It's not one of the [Librewolf](https://librewolf.net/docs/addons) recommendations though... Thanks !


Well, so far DHTs don't seem to be avoidable in p2p (distributed) mechanisms… At any rate, there's gnunet, which also depends on a DHT, but it's not developed by "Ethereum enthusiasts".

Mark Zuckerberg has a $10 billion plan to make it impossible for remote workers to hide from their bosses
[r/technology](https://libreddit.spike.codes/r/technology/comments/ybrqoo/mark_zuckerberg_has_a_10_billion_plan_to_make_it) [r/StallmanWasRight](https://libreddit.spike.codes/r/StallmanWasRight/comments/ybybs6/mark_zuckerberg_has_a_10_billion_plan_to_make_it)

Privacy and freedom should be the legacy we leave, not the opposite — Free Software Foundation — Working together for free software






True, but here's the thing which is somewhat concerning… GNU has attempted to bring sel4 and other interesting ukernels to hurd, with so few hands, and no one really getting interested. And now that Google tries it, oh, how innovative…

I like the move to ukernels, and the focus on clean-by-design. However, as mentioned by others, it's still Google… We'll see. It'll be interesting to see where this new effort leads. Will it take over Google's mobile OS eventually, or chromeOS, or will there be a PC kind of Google OS?

Google rust OSs: Announcing KataOS and Sparrow

at [r/PINE64official](https://libreddit.spike.codes/r/PINE64official/comments/y4xff6/october_update_an_ox_no_bull_pine64)

cross-posted from: https://lemmy.ml/post/542052 > Also [New semiconductor structure involving exciton pairs has implications for microchip technology](https://cosmosmagazine.com/technology/semiconductor-exciton-pairs)


OK, I think I got it, from this google forum:

  • In the foreground: The app can use your location only when the app is open on your screen or when you ask the app to do something.
  • In the background: The app can use location info at any time, even if you aren’t using it.

So to google, having an app running in the background doesn’t mean “using it”, oh well…


“allow” permission vs “allow only while being used” on background apps
Hello ! I just had to change phones, and the move was from LOS 16 (android 9) to LOS 19 (android 12), and I just found out about the permission "allow only while being used"... It seems natural to give that permission rather than just "allow"... I'm wondering what exactly "while being used" means. If it's only while interacting with the app through its front end, then that might not work for apps running in the background, with no front-end interaction whatsoever. So for apps running in the background I'll most probably need to give "allow"-style permissions. If it really means while the app is running, no matter whether in the background or interactively through a front end, then "while being used" would serve as well as "allow", since the app is always in use, right? What's the real difference between the 2, and in particular, what does each mean for apps running in the background?

I believe there’s a lot of misunderstanding of what’s freeSW, what’s openSW, and what debian repos have been providing all along.

Debian has been providing a "non-free" repo for all the versions they keep on their repo servers (experimental, unstable, testing, stable) for as long as I can remember.

And to me it's important to distinguish what's freeSW vs. what's not, and I prefer to use freeSW, unless I'm forced to use something that's not freeSW and there's no way around it.

This is one of the things the openSW movements (remember, IBM, MS, Google, and several other corps are all part of, or contribute to, openSW foundations, but have never supported the idea of freeSW) have influenced, and convinced most people of. Now the value of freeSW means almost nothing, and most are just happy with openSW. I can't judge anyone, but just say this is really sad. And once again I see people treating those defending principles as 2nd class citizens, :(

Calibre has CLIs as well, if the GUI is really offensive. On Artix, part of `pacman -Ql calibre`:

```
calibre /usr/bin/calibre
calibre /usr/bin/calibre-complete
calibre /usr/bin/calibre-customize
calibre /usr/bin/calibre-debug
calibre /usr/bin/calibre-parallel
calibre /usr/bin/calibre-server
calibre /usr/bin/calibre-smtp
calibre /usr/bin/calibredb
calibre /usr/bin/ebook-convert
calibre /usr/bin/ebook-device
calibre /usr/bin/ebook-edit
calibre /usr/bin/ebook-meta
calibre /usr/bin/ebook-polish
calibre /usr/bin/ebook-viewer
calibre /usr/bin/fetch-ebook-metadata
calibre /usr/bin/lrf2lrs
calibre /usr/bin/lrfviewer
calibre /usr/bin/lrs2lrf
calibre /usr/bin/markdown-calibre
calibre /usr/bin/web2disk
calibre /usr/lib/
calibre /usr/lib/calibre/
calibre /usr/lib/calibre/calibre/
```

Although ebook-convert's man page presents it as converting from one ebook format to another, it can also convert to pdf. I don't know if it's possible to convert from pdf to epub, but if the calibre GUI does it, perhaps some of calibre's CLIs can do it as well.

Pandoc can also convert from epub to pdf, though my experience with pandoc as a very basic user is that the results are not of the quality I'd expect; but again, that is without using special arguments, css, and such. Perhaps advanced users can get the best out of pandoc…
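A hedged sketch of both routes (the helper function name is made up; ebook-convert infers formats from the file extensions, and pandoc needs a PDF engine such as a LaTeX install or weasyprint for pdf output):

```shell
# Hypothetical helper: convert an epub to pdf with whichever tool is around.
epub_to_pdf() {
  in=$1
  out=$2
  if command -v ebook-convert >/dev/null 2>&1; then
    ebook-convert "$in" "$out"     # calibre: formats taken from extensions
  elif command -v pandoc >/dev/null 2>&1; then
    pandoc "$in" -o "$out"         # pandoc: requires a PDF engine for pdf
  else
    echo "no converter found" >&2
    return 1
  fi
}

# The reverse direction would use calibre's CLI the same way:
#   ebook-convert input.pdf output.epub
# though pdf is a poor source format, so expect rough results.
```

Usage would be `epub_to_pdf book.epub book.pdf`; which tool wins just depends on what's installed.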

Thanks a lot, I'm all for the 1st option. I guess that means setting "expert mode" in the "general" section, disabling all APIs in the APIs section except for the canvas one, and enabling only the 3 webGL-protected canvas API features:

  • readPixels
  • getParameter
  • getExtension

I’m not sure if I should leave the canvas API set on “readout”.

I have webGL enabled, and I have RFP enabled…

@fishonthenet@lemmy.ml please let me know if that sounds fine, or if I should still enable something else. I wouldn’t like to have canvasblocker getting in the way of RFP, hehe…

Thanks !

Well, sourcehut can be self-hosted as well (isn't it open source anyway?):

https://sr.ht/~sircmpwn/sourcehut https://man.sr.ht/installation.md

That said, sourcehut has privacy features and libre-oriented features gitlab doesn't. But I understand: as of now, without a webUI, as it is, it's pretty hard to adopt sourcehut; and even once it gets one, having invested in gitlab (or even, for the majority, github), which implies time and resources, it might not be easy to try sourcehut anyway.

While the sr.ht irc is for paid accounts only, there's the community libera.chat sr.ht channel; see community contacts.

The central webUI would be key for major players' adoption, and more time as well. It was not long ago that debian, xorg, and arch (still in progress) migrated to gitlab, for example. Those migrations are expensive in people and time.

And for adoption by regular individuals, besides enabling the webUI, it might be way harder, unless someone contributes resources to sr.ht to allow hosting projects, with no CI support, but for free. It's hard to get individual adoption at any cost, even a really low one, when there are alternatives, which BTW violate SW licenses, for free, :(

I like ETP Strict Mode, :)

Perhaps parts of ETP Strict Mode are now on by default. It's hard to say from those blogs: Firefox rolls out Total Cookie Protection by default to all users worldwide, and Firefox 86 Introduces Total Cookie Protection.

It looks like ETP Strict Mode already covers TCP, and that somehow the pieces providing TCP in Strict Mode have been made default in the latest FF. That being said, I'm glad LW is sticking with Strict Mode; it still looks the safest, :) I do believe the new blog post should have clarified that people staying with Strict Mode still get TCP, and more than that, stay even safer, to avoid confusion among Strict Mode users…

Thanks !