• 3 Posts
  • 858 Comments
Joined 1 year ago
Cake day: June 15th, 2023




  • Glad it’s working! Couple more quick ideas:

    Since you’re looping through the results of find, $file_path will already be a single path name on each iteration, so you don’t need the inner for images in $file_path; loop anymore.
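A minimal illustration of what I mean (the paths here are made up and fed in via a here-doc, but in your script they'd come from find):

```shell
# Each loop iteration already receives exactly one whole path,
# so no inner `for images in $file_path` loop is needed.
# (The paths below are made up for the demo.)
while IFS= read -r file_path; do
    echo "One path per iteration: $file_path"
done <<'EOF'
/photos/a.jpg
/photos/b with space.jpg
EOF
```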

    I think you’re checking each field of the results in its own if statement, e.g. if [[ $(echo $ALL_DATES | awk '{print $1}')... then if [[ $(echo $ALL_DATES | awk '{print $2}')... etc. While I don’t think this is hurting performance significantly, it would make your code easier to read and maintain if you first found the correct date, and then did only one comparison operation on it.

    For example, exiftool -m -d '%Y%m%d' -T -DateTimeOriginal -CreateDate -FileModifyDate -DateAcquired -ModifyDate "$file_path" returns five columns, which contain either a date or “-”, and it looks like you’re using the first column that contains a valid date. You can try something like this to grab the first date more easily, then just use that from then on:

    FIRST_DATE=$(exiftool -m -d '%Y%m%d' -T -DateTimeOriginal -CreateDate -FileModifyDate -DateAcquired -ModifyDate "$file_path" | tr -d '-' | awk '{print $1}')

    tr -d '-' will delete all occurrences of ‘-’. That means the result will only contain whitespace and valid dates, so awk '{print $1}' will print the first valid date. Then you can simply have one if statement:

    if [[ "$FIRST_DATE" != '' ]] && [[ "$FIRST_DATE" -gt $start_range ]] && [[ "$FIRST_DATE" -lt $end_range ]]; then
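    Putting those pieces together, here's a self-contained sketch. The exiftool output is hard-coded (I obviously can't run it against your files) and the ranges are made up, but the tr/awk/if logic is exactly as above:

```shell
# Hard-coded stand-in for the output of:
#   exiftool -m -d '%Y%m%d' -T -DateTimeOriginal -CreateDate \
#       -FileModifyDate -DateAcquired -ModifyDate "$file_path"
# Columns: DateTimeOriginal CreateDate FileModifyDate DateAcquired ModifyDate
exif_output=$'-\t-\t20230620\t-\t20230701'
start_range=20230101
end_range=20231231

# Delete the '-' placeholders, then grab the first remaining field.
FIRST_DATE=$(echo "$exif_output" | tr -d '-' | awk '{print $1}')

if [[ "$FIRST_DATE" != '' ]] && [[ "$FIRST_DATE" -gt $start_range ]] && [[ "$FIRST_DATE" -lt $end_range ]]; then
    echo "In range: $FIRST_DATE"
fi
```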

    Hope this helps!


  • I have not tested this, but I have a couple ideas off the top of my head.

    #1 - Retrieve all fields with a single exiftool command. e.g. ALL_DATES=$(exiftool -m -d '%Y%m%d' -T -DateTimeOriginal -CreateDate -FileModifyDate -DateAcquired -ModifyDate "$filename")

    Then retrieve individual fields from $ALL_DATES with something like awk. e.g. echo $ALL_DATES | awk '{print $1}' will return the first field (DateTimeOriginal), and changing that to ‘{print $2}’ will return the second field (CreateDate).
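    For example, with a hard-coded stand-in for the exiftool output (so you can see the field splitting without running exiftool):

```shell
# Stand-in for the tab-separated output of the single exiftool call.
# Columns: DateTimeOriginal CreateDate FileModifyDate DateAcquired ModifyDate
ALL_DATES=$'20230615\t20230616\t20230620\t-\t20230701'

# awk splits on runs of whitespace, so $1, $2, ... pick out the fields.
DATE_TIME_ORIGINAL=$(echo "$ALL_DATES" | awk '{print $1}')
CREATE_DATE=$(echo "$ALL_DATES" | awk '{print $2}')

echo "DateTimeOriginal=$DATE_TIME_ORIGINAL CreateDate=$CREATE_DATE"
```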

    #2 - Perhaps process multiple files with a single exiftool call. e.g. exiftool -m -d '%Y%m%d' -T -DateTimeOriginal -CreateDate -FileModifyDate -DateAcquired -ModifyDate ~/Pictures/*. You might compare whether running just this exiftool query once vs running it in a loop takes a significantly different amount of time. If not, it’s probably simpler to use one call per file.

    Edit: I doubt that either find or globbing will use a significant amount of time. However, the issues you have with find and spaces in file names can be worked around with find’s -print0 option. It prints file paths separated by NUL bytes (i.e. ASCII value 0), so you can loop through them without needing to guess whether whitespace is part of a path or a delimiter. A common way of dealing with this is to pipe the output of find into xargs like so: find ~/Pictures -type f -print0 | xargs -0 -L 1 echo 'File path: '. That will execute echo 'File path: ' <file> for every file in your Pictures folder. It’s a little more complicated, but you can also use a while loop like so:

    find ~/Pictures -type f -print0 | while IFS= read -r -d '' file_path; do
        echo "Processing: $file_path"
    done
    

    Note that when you pass a blank string with read -d '', it reads to a NUL char, as documented here: https://www.gnu.org/software/bash/manual/bash.html#index-read . I’m not 100% sure if this is true in older versions of Bash or other similar shells.
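    Here's a quick runnable demo of that -print0 / read -d '' pattern, using a throwaway directory containing a filename with a space in it (the output is collected in a variable just to show it round-trips intact):

```shell
# Make a throwaway directory containing a filename with a space in it.
demo_dir=$(mktemp -d)
touch "$demo_dir/plain.jpg" "$demo_dir/file with spaces.jpg"

# NUL-delimited paths survive the embedded whitespace intact.
out=$(find "$demo_dir" -type f -print0 | while IFS= read -r -d '' file_path; do
    echo "Processing: $file_path"
done)
echo "$out"

rm -r "$demo_dir"
```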


  • Looking over the Fastfox.js config, it looks like most settings fall into one of three categories:

    1. Subjective appearance of speed or responsiveness (perhaps at the expense of objectively-measurable load times)
    2. Experimental options that don’t apply to all hardware or OSes (e.g. GPU acceleration)
    3. Settings that optimize performance at the expense of memory, CPU, or network usage (e.g. cache sizes and connection limits)

    I don’t see anything that makes me think Mozilla’s defaults are unreasonable. It’s not like Mozilla is leaving performance on the table, but rather that they chose a different compromise here and there, and use highly-compatible defaults. That said, it does seem like there is room for individual users to improve on the defaults — particularly if they have fast internet connections and lots of RAM.

    For example:

    // [NOTE] Lowering the interval will increase responsiveness
    // but also increase the total load time.
    user_pref("content.notify.interval", 100000); // (.10s); default=120000 (.12s)

    This seems very much like a judgment call and I guess Firefox’s defaults would actually have better objective load times and better benchmark scores. That doesn’t mean it’s objectively better, but it seems reasonable, at least.

    // PREF: GPU-accelerated Canvas2D
    // Use gpu-canvas instead of skia-canvas.
    // [WARNING] May cause issues on some Windows machines using integrated GPUs [2] [3]

    // [NOTE] Higher values will use more memory.

    Again, the defaults seem to make sense. Perhaps Mozilla could add an optimization wizard to detect appropriate settings for your hardware, and let the user select options like “maximize speed” vs “maximize memory efficiency”. These are not one-size-fits-all settings.

    Fastfox also disables a lot of prefetching options, which…seems counter to the goal of improving speed. Not really sure what to make of that.



  • Are there any that are cloud-hosted, secure, and private? My experience is limited, but I’ve never found an easy way in. I can’t imagine anyone who’s not tech-savvy getting started without walking through a minefield of scams.

    Every now and then I look at options for how I might actually use crypto, and everything looks either outrageously scammy or way too much trouble. Pretty much every exchange I’ve looked at holds the keys to your account, and several have gone under or outright stolen their users’ funds.

    The question is, when Proton embraces bitcoin, should it make me trust bitcoin more, or trust Proton less? I don’t know. I’m still skeptical. Their blog post is interesting, but also doesn’t answer a lot of questions. https://proton.me/blog/proton-wallet-launch

    I mean, look at this:

    Buy Bitcoin securely in 150+ countries

    If you are new to Bitcoin, Proton Wallet also has integrations that make it easy to buy Bitcoin in 150+ countries, and we have also put together a comprehensive Bitcoin guide for newcomers.

    That “comprehensive” guide spends three paragraphs talking about the “Blocksize War”, and makes absolutely no mention of how a user can actually buy bitcoin using Proton Wallet. WTF, Proton? Who is your target audience here exactly?





  • GenderNeutralBro@lemmy.sdf.org to Linux · LindowsOS, 2001 · 6 points · 9 days ago

    Short answer: Enterprise bullshit and Adobe.

    On the home computing side, I can’t think of much that has specific OS requirements besides gaming and DRM’d 4K streaming. For better or worse, most desktop apps nowadays are glorified web sites. It’s a different world today than it was 20 years ago.

    On the enterprise side, nah. Way too many vendors with either no Linux support or shitty Linux support.

    Microsoft is working hard to shove “New Outlook” down everyone’s throats despite still not having feature parity with old Outlook. Nobody in my company will want to use it until it is forced because we need delegated and shared calendars to actually work. And then there’s the “you can take my 80GB .pst files when you pry them from my cold dead hands” crowd. Advanced Excel users are not happy with the web version either, and I don’t blame them.


  • GenderNeutralBro@lemmy.sdf.org to Linux · LindowsOS, 2001 · 15 points · 9 days ago

    Its gimmick was that it was compatible with Windows apps, and an easy transition for Windows users. It didn’t really live up to that promise. Wine was not nearly as mature then as it is today, and even today it would be pretty bold to present any Linux distro as being Windows-compatible.







  • This is interesting work, but I don’t think it justifies the plain-English summary. If you’re going to claim that language is not a tool for thought, I would expect you to demonstrate that a difference in language does not lead to a difference in thought. To answer that, you shouldn’t just look at whether language-focused brain regions are activated during non-language-based activity, but also whether a lifetime of using Language A leads to differences outside of those regions compared to a lifetime of using Language B. Isn’t that the crux of linguistic relativity? That different languages encourage and train different modes of thought?

    Any chess player will tell you that they apply their “chess brain” to all sorts of things outside of chess. It’s not that we literally view life as a chessboard, but rather that a lifetime of playing chess has honed a set of mental skills that are broadly applicable. The fundamental logic applies everywhere.

    In particular, some deaf children who are born to hearing parents grow up with little or no exposure to language, sometimes for years, because they cannot hear speech and their parents or caregivers do not know sign language. Lack of access to language has harmful consequences for many aspects of cognition, which is to be expected given that language provides a critical source of information for learning about the world. Nevertheless, individuals who experience language deprivation unquestionably exhibit a capacity for complex cognitive function: they can still learn to do mathematics, to engage in relational reasoning, to build causal chains, and to acquire rich and sophisticated knowledge of the world.

    It seems like they are using a narrower definition of “language” than is appropriate. e.g. I don’t think it’s controversial to include body language under the umbrella of “language”, so I am very skeptical of the claim that any of those deaf children had “no exposure to language”.