Hi, I am building the Linux kernel with the following make command to enable multi-core compilation: make -j. This ends up freezing the system, and eventually some applications get killed (I am using KDE). If I pass an argument to -j, the system compiles normally. However, full multi-core compilation is only enabled when no argument is specified.

I am on Debian stable, kernel 6.1.0-13-amd64. Is there a way to make the system do full multi-core compilation without freezing?

  • @mlfh
    8
    5 months ago

    Make’s -j option specifies the number of concurrent jobs to run, and without an argument it doesn’t limit that number at all. Usually you pass it an argument with the number of CPU cores you want to utilize. Going far beyond the number of cores you have available (as it does without an argument) will be slower, or can even freeze your system, with all the context switching it has to do.
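
    For example, on a Linux box with GNU coreutils (which provides nproc), a sensible invocation is:

        make -j"$(nproc)"

    That caps the job count at the number of cores nproc reports instead of leaving it unbounded.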

    • @jayaura@discuss.tchncs.de OP
      3
      5 months ago

      Using the exact number of cores I have (from /proc/cpuinfo) as the argument makes the compilation go smoothly. Are you saying that without an argument, Make uses a very high default that causes other processes to starve? Even so, as far as I understand it, the kernel shouldn't allow such a situation; should this be considered a kernel bug?

      • @mlfh
        4
        5 months ago

        Without an argument, the -j option will start jobs with no limit - depending on the project, this could be thousands or tens of thousands of processes at once. The kernel will do its best to manage this, but when your interface is competing for CPU resources with 10,000 other CPU-intensive processes, it will appear frozen.

  • @inetknght
    3
    5 months ago

    make -j will create unlimited jobs. If you have very few CPU cores you will likely overload your CPU with tens or hundreds (or maybe even thousands) of compilation jobs.

    Each one of those jobs will need RAM. If you eat up all of your RAM, then your computer will hit swap and become unusable for hours (at best) or days. I’ve had a computer chug along for weeks in swap like that before the OOM killer decided to do things. And the worst part is the Out-Of-Memory killer decided to kill all the wrong things…

    The -j option takes an optional argument: the limit on the number of jobs. The best thing you can do is (roughly) use make -j$(nproc). That passes the number of processors as the argument to -j. If your build line gets parsed instead of run through a shell (as is common in an IDE), then you might replace “$(nproc)” with a hard number. So if you have 20 cores in your CPU, then you might do -j20. If you only have 8GB of RAM with your 20 cores, then maybe you give it -j8. I saw one guy try to get fancy with math to divide up CPU and RAM and … it was just way more complicated to get at the same number as nproc :)
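
    For the curious, here’s a rough sketch of that kind of math (the ~2 GiB-per-job figure is a placeholder assumption - tune it for your project):

        # Hypothetical sketch: cap jobs by available RAM, assuming
        # each job needs ~2 GiB (2097152 kB) - an assumption, tune it!
        mem_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
        ram_jobs=$(( mem_kb / 2097152 ))
        cpu_jobs=$(nproc)
        jobs=$(( ram_jobs < cpu_jobs ? ram_jobs : cpu_jobs ))
        make -j"$(( jobs > 0 ? jobs : 1 ))"

    …which, on a machine with plenty of RAM, just lands on the same number nproc gives you anyway.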

    I, personally, just buy more RAM. Then with my 20 cores and 64GB of RAM, there’s plenty of headroom for one compilation job per core in the background, and also room in RAM for a browser for documentation and an IDE for all the editing. Developer machines are known for being resource hogs. Might as well lean into that stereotype just a tiny bit.

    And one tiny edit: I highly suggest you start reading the manuals. man make will tell you all about how -j works :) and so does man nproc, and there are tons of others too. I love git’s manual pages; they’re pretty awesome.

    man make says:

       -j [jobs], --jobs[=jobs]
           Specifies the number of jobs (commands) to run simultaneously.  If there is more than one -j option, the last one is effective.  If the -j option is given without an argument, make will not limit the number of jobs that can run simultaneously.
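
    The same man page also documents a companion option that helps with exactly this freezing problem: -l (--load-average), which tells make not to start new jobs while the system load average is above a threshold. Something like this keeps the build parallel without letting it bury the desktop:

        # Up to one job per core, but stop spawning new jobs
        # while the load average exceeds the core count:
        make -j"$(nproc)" -l"$(nproc)"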