- cross-posted to:
- machinelearning
- series_of_tubes@sullen.social
Researchers who recorded direct neural signals from people listening to “Another Brick in the Wall” have reproduced a recognizable version of the song from the neural data.
Here’s what you came for: the WAV downloads of the original audio, the reconstruction from the linear models, and the reconstruction from the nonlinear models. The OP has more of these using fewer electrodes and fewer patients; you can find them quickly with Ctrl+F.
Original song waveform transformed into a magnitude-only auditory spectrogram, then transformed back into a waveform using an iterative phase-estimation algorithm.
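The post doesn’t name the phase-estimation algorithm; the standard choice for recovering a waveform from a magnitude-only spectrogram is Griffin-Lim, so here is a minimal NumPy/SciPy sketch of that idea (an assumption, not the researchers’ actual pipeline):

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(mag, length, n_iter=32, nperseg=256, seed=0):
    """Estimate a waveform whose STFT magnitude matches `mag`
    by iteratively re-estimating the discarded phase (Griffin-Lim)."""
    rng = np.random.default_rng(seed)
    # start from a random phase guess
    spec = mag * np.exp(2j * np.pi * rng.random(mag.shape))
    for _ in range(n_iter):
        _, wave = istft(spec, nperseg=nperseg)
        wave = wave[:length]  # crop so the frame count stays stable
        _, _, rebuilt = stft(wave, nperseg=nperseg)
        # keep the target magnitude, adopt the re-estimated phase
        spec = mag * np.exp(1j * np.angle(rebuilt))
    _, wave = istft(spec, nperseg=nperseg)
    return wave[:length]

# toy example: a 440 Hz tone, phase discarded, then recovered
fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)
_, _, X = stft(x, nperseg=256)
y = griffin_lim(np.abs(X), length=len(x))
```

Each iteration snaps the spectrogram back to the known magnitudes while letting the phase drift toward something self-consistent, which is why more iterations generally mean fewer audible artifacts.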
Reconstructed song excerpt using linear models fed with all 347 significant electrodes from all 29 patients.
Reconstructed song excerpt using nonlinear models fed with all 347 significant electrodes from all 29 patients.
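The post doesn’t describe the models themselves, but a linear decoder of this kind is commonly a ridge regression from electrode features to each spectrogram bin. A hypothetical sketch on synthetic stand-in data (the shapes and noise level here are made up; only the electrode count echoes the story):

```python
import numpy as np

def fit_linear_decoder(neural, spectro, lam=1.0):
    """Ridge regression mapping neural features (time x electrodes)
    to spectrogram bins (time x frequencies)."""
    n_feat = neural.shape[1]
    # closed-form ridge solution: W = (X^T X + lam*I)^-1 X^T Y
    gram = neural.T @ neural + lam * np.eye(n_feat)
    return np.linalg.solve(gram, neural.T @ spectro)

# synthetic stand-in data: 1000 time points, 347 "electrodes", 32 bins
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 347))
W_true = rng.standard_normal((347, 32))
Y = X @ W_true + 0.01 * rng.standard_normal((1000, 32))

W = fit_linear_decoder(X, Y, lam=1e-3)
Y_hat = X @ W  # predicted spectrogram, to be inverted back to audio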
The nonlinear sampling sounds far more accurate than the linear sampling.