Three concurrent processes.
Process 1 is running mission-critical maintenance tasks – making backups, defragmenting memory, checking hardware, updating software (…)
Process 2 just takes, in real time, whatever Process 1 reads or writes and dumps it raw into a file without any interpretation.
Process 3 then takes whatever Process 2 has spat out and attempts to interpret it through various AI models, generating images, sounds, text, whatever, based on the input from Process 2.
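The three processes above could be sketched as threads talking through queues – a toy sketch, with a hypothetical `interpret` function standing in for whatever AI model Process 3 would actually call:

```python
import queue
import threading

io_log = queue.Queue()    # Process 1 -> Process 2
raw_dump = queue.Queue()  # Process 2 -> Process 3
dreams = []               # Process 3's output

def process_1():
    # Mission-critical maintenance: each task reads/writes some data.
    for i, task in enumerate(["backup", "defrag", "hw-check", "update"]):
        io_log.put(f"{task}: block {i}")
    io_log.put(None)  # shutdown signal

def process_2():
    # Dump everything Process 1 touches, raw, with no interpretation.
    while (item := io_log.get()) is not None:
        raw_dump.put(item)
    raw_dump.put(None)

def interpret(raw):
    # Stand-in for an AI model; a real one would hallucinate freely here.
    return f"dream about '{raw}'"

def process_3():
    while (item := raw_dump.get()) is not None:
        dreams.append(interpret(item))

threads = [threading.Thread(target=f) for f in (process_1, process_2, process_3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

All three run concurrently; Process 2 never looks at what it's copying, and Process 3 only ever sees the raw dump.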
deleted by creator
it would probably be some Markov chain generator or something.
Also known as an LLM
step one, defragment RAM to free up space
step two, read the now-unallocated RAM directly to the screen
This is exactly what happens. Sometimes I think comparisons between humans and machines are too reductive, but there are cases like dreaming where it's uncanny.
step three, ask an LLM what the image is and feed the result into an image generator.
while (true) {
    scenario = Scenarios.rand();
    time = DateTime.now();
    while (DateTime.now() - time < DateTime.minutes(5)) {
        scenario.continue();
    }
}
I’ve been thinking of a dreaming-like algorithm for neural networks (NNs) that I have wanted to try.
When training an NN, you have a large set of inputs and corresponding desired outputs. You take random subsets of this set, and for each subset you adjust the NN so its outputs correspond more closely to the desired ones. You do this over and over, and eventually your NN gets close to the desired outputs (hopefully). This training takes a long time and is only done this one initial time. (This is a very simplified picture of the training.)
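That minibatch loop might look like this toy sketch – a single linear layer standing in for the NN, with made-up sizes and learning rate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": a single linear layer y = W @ x (stand-in for a real NN).
d_in, d_out, n = 8, 2, 200
W_true = rng.normal(size=(d_out, d_in))
X = rng.normal(size=(n, d_in))
Y = X @ W_true.T                         # the desired outputs

W = np.zeros((d_out, d_in))              # the network we train
lr, batch, epochs = 0.05, 20, 50

for _ in range(epochs):
    idx = rng.permutation(n)             # shuffle, then take random subsets
    for start in range(0, n, batch):
        b = idx[start:start + batch]
        err = X[b] @ W.T - Y[b]          # how far off the minibatch is
        W -= lr * err.T @ X[b] / len(b)  # gradient step on the squared error

loss = np.mean((X @ W.T - Y) ** 2)       # close to the desired outputs now
```

Doing this over and over drives the loss toward zero – the "initial training" that we assume is already finished before any dreaming happens.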
Now for the dreaming. When the NN is “awake”, it accumulates new input/output entries. We want to adjust the NN to also incorporate these entries. But if we train on only these, we will lose some of the information the NN learned in the initial training. We might want to train on the original data plus the new data, but that is a lot, so no. Let’s assume we no longer even have the original data. We want to train on what we know plus what we have accumulated during the waking time. Here comes the dreaming:
- Get an “orthogonal” set of input/outputs representing what the NN already knows (e.g. if the network outputs vectors, take some random input and save its output vector. Use a global optimization algorithm to find the next input such that its output is orthogonal to the first. Repeat until you have a spanning set).
- Repeat point 1 until you have maybe one such set per newly accumulated input/output entry, or however many appear not to move you too far from the optimization extremum your NN is in – this set should still be a lot smaller than the original training set.
- Fine-tune your NN on the accumulated data plus this generated data. The generated data should act as an anchor, not allowing the NN to deviate too much from its optimization extremum, while the new data gets incorporated.
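The steps above could be sketched like this – a linear layer again stands in for the NN, and plain rejection sampling stands in for the global optimization step; all names and sizes are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out = 8, 4
W_old = rng.normal(size=(d_out, d_in))   # the already-trained "network"

def forward(W, x):
    return W @ x

def anchor_set(W, n_anchors, tol=0.3, tries=100_000):
    """Steps 1-2: collect inputs whose outputs are mutually near-orthogonal.
    Rejection sampling here is a crude stand-in for a global optimizer."""
    anchors, outputs = [], []
    for _ in range(tries):
        x = rng.normal(size=d_in)
        y = forward(W, x)
        u = y / np.linalg.norm(y)
        if all(abs(u @ v) < tol for v in outputs):
            anchors.append((x, y))       # what the net already "knows"
            outputs.append(u)
            if len(anchors) == n_anchors:
                break
    return anchors

def finetune(W, entries, lr=0.01, epochs=200):
    """Step 3: fine-tune on new entries plus generated anchors."""
    W = W.copy()
    for _ in range(epochs):
        for x, y_target in entries:
            err = W @ x - y_target
            W -= lr * np.outer(err, x)   # SGD step on the squared error
    return W

def loss(W, entries):
    return np.mean([np.sum((forward(W, x) - y) ** 2) for x, y in entries])

# "Waking": accumulate a few new entries the old net gets wrong.
new_data = [(rng.normal(size=d_in), rng.normal(size=d_out)) for _ in range(3)]

# "Dreaming": generate anchors spanning the output space, then fine-tune.
anchors = anchor_set(W_old, n_anchors=d_out)
W_new = finetune(W_old, new_data + anchors)
```

The anchors start with zero error by construction (their targets are the old net's own outputs), so during fine-tuning they pull against any update that would drift too far from the old extremum, while the new entries get fitted.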
I see this as a form of dreaming since we have a wake portion and a sleep portion. While awake, we accumulate new experiences. While asleep, we incorporate these experiences into what we already know by “dreaming”, that is, by running small training sessions on our NN.
Spaghetti code, race conditions, concurrency issues and random bit flips, all running on non-ECC memory.
self.disconnect_motors();
self.disconnect_nonvital_senses();
let memory = self.neurons.dump();
while self.asleep {
    let scenario = load_real_scenario().randomize_params();
    self.neurons.apply(scenario);
}
self.neurons.load(memory);
self.connect_senses();
self.connect_motors();
if my dreams are anything to go by, it'd be a fragmented mess
or that cannibalized AI slop where it makes weird things like Shrimp Jesus
Fuzzed Spaghetti Code
AI generated random youtube shorts that are just combined into one long video of total nonsense
yes