First, some context.

I’ve written a program that starts running when I log on and produces data every two seconds. This daemon keeps all the collected data in memory until it is terminated (usually when I shut down the system), at which point it dumps the collected data to a file on disk.

(These are questionable design decisions, I know, but they’re not the point of this post. Feel free to comment on them anyway.)

I’ve written another program that reads the data file and graphs it. To get the most current data, I can send the USR1 signal to the daemon, which causes it to dump its data immediately. After restarting the renderer, I can analyze the latest data.
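
In Node terms, the dump-on-signal mechanism boils down to something like this (the path and data shape here are placeholders, not the real code):

    import { writeFileSync } from "node:fs";

    // All samples are held in memory until a dump is triggered.
    const samples: Array<{ t: number; value: number }> = [];

    // Placeholder path; the real daemon writes elsewhere.
    const DUMP_PATH = "/tmp/samples.json";

    function dump(): void {
      writeFileSync(DUMP_PATH, JSON.stringify(samples));
    }

    // `kill -USR1 <pid>` triggers an immediate dump.
    process.on("SIGUSR1", dump);

    // Dump on normal termination too (a systemd stop sends SIGTERM).
    process.on("SIGTERM", () => {
      dump();
      process.exit(0);
    });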

The tech (pregnant women and those faint of heart, be warned)

  • The daemon is written in TypeScript and executed through an on-the-fly transpiler in Node.
  • The data file is just a boring JSON dump.
  • systemd is in charge of starting and stopping the daemon.
  • The renderer is a static web page served via a python3 server; it uses compiled TypeScript to draw pretty lines on the screen via a charting library.
  • Everything runs on Linux. Mint, to be specific.

As I’m looking for general ideas for my problem, you are free to ignore the specifics of that tech stack and pretend everything was written in Rust.

Now to the question.

I would like to get rid of manually sending the signal and refreshing the web page, and I would like your opinions on how to go about this. The aim is to start the web server serving the drawing code and have each data point appear as the daemon generates it.

My own idea (feel free to ignore)

My first intuition was to have the daemon send its data through a Unix pipe. Using a web server, I could then forward these messages through a WebSocket to the renderer frontend. However, it’s not guaranteed that the renderer will ever start, so a lot of messages could queue up in that pipe – if that is even possible; I haven’t researched that yet.

I’d need a way for the web server to inform the daemon to start writing its data to a socket, and also a way to stop these messages. How do I do that?

I could include the web server that serves the renderer in the daemon process. That would eliminate the need for IPC. However, I’m not sure whether that is too much mixing of concerns. I would like the code that produces the data to be as small as possible, so I can be reasonably confident that it’s capable of running in the background for an extended period of time without crashing.

Another way would be to use signals like I did for the dumping of data. The web server could send, for instance, USR2 to make the daemon write its data to a pipe. But this approach doesn’t scale well – what if I want to deliver different kinds of messages to the daemon? There are only so many signals.
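
For illustration, the USR2-toggled pipe writer would look roughly like this (the FIFO path and data shape are assumptions):

    import { createWriteStream, type WriteStream } from "node:fs";

    // Hypothetical FIFO, created beforehand with `mkfifo /tmp/graphd.fifo`.
    const FIFO_PATH = "/tmp/graphd.fifo";

    let pipe: WriteStream | null = null;

    // USR2 toggles streaming on and off; user-definable signals are
    // scarce, which is exactly the scaling problem described above.
    process.on("SIGUSR2", () => {
      if (pipe) {
        pipe.end();
        pipe = null;
      } else {
        // Caveat: the underlying open() on a FIFO does not complete
        // until a reader attaches to the other end.
        pipe = createWriteStream(FIFO_PATH);
      }
    });

    function onNewSample(sample: { t: number; value: number }): void {
      // One JSON object per line, so the reader can split on newlines.
      pipe?.write(JSON.stringify(sample) + "\n");
    }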

  • wildbus8979@sh.itjust.works · 3 days ago

    Have a look at systemd socket units; they do exactly what you want: they monitor a Unix socket and launch the service if it isn’t already running when something arrives on the socket! Very nifty!
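
    A minimal, untested sketch of such a pair (unit names and paths are made up); the .socket unit makes systemd hold the socket and start the service on the first connection:

        # graphd.socket
        [Socket]
        ListenStream=/run/graphd.sock

        [Install]
        WantedBy=sockets.target

        # graphd.service
        [Service]
        ExecStart=/usr/bin/node /opt/graphd/main.js

    The started process inherits the listening socket as file descriptor 3 (the LISTEN_FDS convention), which a Node server can adopt with server.listen({ fd: 3 }).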

      • dragonfly4933@lemmy.dbzer0.com · 2 days ago

        Of the things systemd brings in that people complain about, this is among the least offensive. It makes sense for an init system to provide such functionality; spawning new system processes is exactly what an init system is for.

        That said, on modern systems it doesn’t make much sense to use such features: spawning a new process per request, or on demand, gains you little and does reduce performance.

        Spawning a new process is pretty slow on most OSes compared to other operations. There’s also an increase in latency while the new process loads, whereas most software these days can handle a new request in more efficient ways.

        I think you can also try to reuse the same process for multiple requests, stopping it only once it has been quiet for a while. But this still doesn’t really help much.

        Historically, I think it was used to try to save memory. But today it’s a bigger nuisance than it is worth. I just checked how much memory sshd is using, and I think it is less than 10 MB.

        total kB 8508 6432 1160

        And to be clear, you theoretically can’t save much, if any, memory doing this, because you must have enough memory available to run the process anyway; otherwise bad things happen or some other process gets OOM-killed.

        Additionally, spawning a new process per request can represent an availability risk: an attacker could open a series of very slow connections to such a server and deplete its resources.

        With all that said, I wouldn’t say there are no uses for this at all; it can be useful for very minimal network-connected software that does some basic stuff on a secure network.

        • drspod · 2 days ago

          It’s faster to list the things that systemd isn’t re-implementing.

  • Tja@programming.dev · 3 days ago

    All the answers you got here are good, but it depends on how simple you want it.

    Your current solution already fails one of your own criteria: the code that produces the data cannot run for an extended period of time if it keeps all data in memory and only writes it out at shutdown (or on USR1).

    It would be trivial to modify the program to write out each data point as it is collected. Then you just tail that file from the web server process (a sketch follows at the end of this comment).

    You could use more performant solutions – explicit pipes, Unix sockets, WebSockets, or even shared memory – but keeping it simple is often the best way. This makes your data collection better (no memory “leaks”) without really adding code; in fact, you could remove code (the USR1 handling).
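
    For illustration, the append-on-collect version of the daemon is only a few lines (path and data shape are assumptions); one JSON object per line keeps the file tail-able:

        import { appendFileSync } from "node:fs";

        // Hypothetical location for the running log.
        const LOG_PATH = "/var/lib/graphd/samples.ndjson";

        function onNewSample(sample: { t: number; value: number }): void {
          // One line per data point: cheap, crash-safe, `tail -f`-friendly.
          appendFileSync(LOG_PATH, JSON.stringify(sample) + "\n");
        }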

  • MadhuGururajan@programming.dev · 2 days ago

    Might I suggest the following:

    1. The sensor daemon and web server can be in the same language/stack; consider even making them the same program. Your web server needs to run all the time anyway if you want users to be able to go to your site and get the output.
    2. Use a database like SQLite instead of a raw JSON dump; querying the data from your web server becomes much more flexible (especially for charting purposes).
    3. Note: you can store JSON in a SQLite column as well if you are concerned that your data is too deeply nested. Just store it with the keys that are required for your charts.
    4. You can achieve circular-buffer functionality naively by periodically checking whether the number of rows exceeds a threshold and deleting the oldest rows if it does (see the sketch after this list).
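
    A rough sketch of points 2 and 4, assuming the better-sqlite3 package (table, columns, and threshold are made up):

        import Database from "better-sqlite3";

        const db = new Database("/var/lib/graphd/samples.db");
        db.exec("CREATE TABLE IF NOT EXISTS samples (t INTEGER, value REAL)");

        const MAX_ROWS = 10_000; // naive circular-buffer threshold

        const insert = db.prepare("INSERT INTO samples (t, value) VALUES (?, ?)");
        const count = db.prepare("SELECT COUNT(*) AS n FROM samples");
        const trim = db.prepare(
          "DELETE FROM samples WHERE rowid IN (SELECT rowid FROM samples ORDER BY t LIMIT ?)"
        );

        function addSample(t: number, value: number): void {
          insert.run(t, value);
          const { n } = count.get() as { n: number };
          // Once the table exceeds the threshold, drop the oldest rows.
          if (n > MAX_ROWS) trim.run(n - MAX_ROWS);
        }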

  • Dave.@aussie.zone · 3 days ago

    You don’t need an entire web server in your daemon; you just need the socket.

    Include a WebSocket listener in the daemon. Keep a ring buffer of the last X data points – whatever nicely fills your client graph, for example. Wait for clients to connect, dump the ring buffer to them, then update clients as data comes in (see the sketch below).

    The web server can serve the page with the client code that connects to the WebSocket. After that, it’s strictly a conversation between the end client and the daemon over WebSockets.
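
    A minimal sketch of that, assuming the ws package (port, buffer size, and data shape are made up):

        import { WebSocketServer, WebSocket } from "ws";

        const MAX_POINTS = 500; // whatever nicely fills the client graph
        const ring: Array<{ t: number; value: number }> = [];

        const wss = new WebSocketServer({ port: 8081 });

        // New clients get the buffered history first...
        wss.on("connection", (ws) => {
          ws.send(JSON.stringify(ring));
        });

        // ...then live updates as the daemon produces data.
        function onNewSample(sample: { t: number; value: number }): void {
          ring.push(sample);
          if (ring.length > MAX_POINTS) ring.shift();
          const msg = JSON.stringify(sample);
          for (const client of wss.clients) {
            if (client.readyState === WebSocket.OPEN) client.send(msg);
          }
        }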

  • Olap@lemmy.world · 3 days ago

    I think Unix signals are a bit lacking for your use case now. I’d consider giving the daemon a web interface of its own that the web server could then message (a sketch follows below). Since you mention systemd, you could also consider a message queue or D-Bus. Getting those to scale across computers isn’t as simple, hence my HTTP suggestion; HTTP should also be OS-agnostic.
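
    For instance, a tiny control interface inside the daemon needs nothing beyond Node’s built-in http module (the routes here are invented):

        import { createServer } from "node:http";

        let streaming = false; // consulted wherever the daemon emits data

        // The web server (or plain curl) flips streaming on and off,
        // replacing the signal juggling.
        createServer((req, res) => {
          if (req.method === "POST" && req.url === "/stream/start") {
            streaming = true;
            res.writeHead(204).end();
          } else if (req.method === "POST" && req.url === "/stream/stop") {
            streaming = false;
            res.writeHead(204).end();
          } else {
            res.writeHead(404).end();
          }
        }).listen(8082, "127.0.0.1");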

  • mvirts@lemmy.world · 3 days ago

    There are a bunch of options to do this.

    If you use the networking stack (TCP, HTTP, ZeroMQ, or something similar) you can run the GUI on a different machine.

    Unix sockets are a good option for running on the same machine.

    Have you considered having Node serve the static web page? Then you can serve the data over HTTP from the same server (a minimal sketch below).
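
    A sketch of that single-server setup (file layout and the data accessor are hypothetical):

        import { createServer } from "node:http";
        import { readFile } from "node:fs/promises";

        // Hypothetical accessor for whatever the daemon has collected so far.
        declare function getCollectedSamples(): Array<{ t: number; value: number }>;

        // One Node process serves both the renderer page and the data.
        createServer(async (req, res) => {
          if (req.url === "/data") {
            res.writeHead(200, { "Content-Type": "application/json" });
            res.end(JSON.stringify(getCollectedSamples()));
          } else {
            res.writeHead(200, { "Content-Type": "text/html" });
            res.end(await readFile("public/index.html"));
          }
        }).listen(8080);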