Thanks for the kind words!
I think what can be done with training a neural network to emulate an entire guitar amp is amazing, exceeding all my expectations. I've played with both NAM and ToneX, and in fact I've been experimenting with a NAM module inside S-Gear. I have to say also that I'm very happy with how the S-Gear amps stand up alongside them. And why not? I'm often told by producers and engineers that they've replaced the real amps in a mix with S-Gear. Last week I posted a short article with Scott Sharrard and co-producer/engineer Charles Martinez; Charlie happily confesses that he'll often use S-Gear from the get-go, not just to fix troublesome recordings.
Here are a few of my thoughts on the new Neural Network based techniques…
Right across the spectrum of applied science and engineering, machine learning is coming of age and is set to have a huge impact. Problems for which we previously laboured over really complex mathematical representations can now often be solved more easily by training a neural network on the data. This is a big game changer for many engineering disciplines. I have to admit, I had pretty much ignored this area up until a few months ago. Neural networks were more of a data science thing, and they weren't part of the mix when learning about audio engineering, where the focus was more on understanding the nature and theory of electronic and acoustic systems.
The technology in NAM (and probably most others) originates from WaveNet, developed by DeepMind (a Google company). WaveNet was developed with the goal of synthesizing realistic human voices using neural networks and machine learning. The model can also be trained to emulate amps (and some other audio systems) by using an input and output waveform and specifying a time period over which to consider historical data. For most guitar amps, you probably need a data window of up to 200 msec to capture all the time constants, but perhaps you can capture most of the characteristics with a 50-100 msec window. The window length is a critical trade-off: a longer window demands more processing power in the final neural network model and more training time.
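To give a feel for that trade-off: in a WaveNet-style model the history window is set by the receptive field of the stack of dilated causal convolutions. Here's a minimal sketch of the arithmetic; the kernel size, dilation pattern, and sample rate below are illustrative assumptions, not NAM's actual configuration.

```python
# Hypothetical sketch: how the history window of a WaveNet-style model
# follows from its dilated convolution stack. The layer configuration
# here is an example, not the configuration NAM actually uses.

def receptive_field(kernel_size, dilations):
    """Number of past samples a stack of dilated causal convs can see."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# Example: two stacks of kernel-3 layers with dilations 1, 2, 4, ..., 512
dilations = [2 ** i for i in range(10)] * 2
samples = receptive_field(3, dilations)

sample_rate = 48_000  # assumed sample rate for the conversion
window_ms = 1000 * samples / sample_rate
print(samples, round(window_ms, 1))  # 4093 samples, about 85.3 ms
```

Doubling the window (say, towards 200 msec) means more layers or larger dilations, and every extra layer is more multiply-adds per output sample, which is exactly where the processing-power cost comes from.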
This technology is largely open to everyone, so I'd expect to see rapid improvements and more optimised implementations come along soon. Right now NAM has some limitations: a fixed sample rate, a limited history window, and some open questions over aliasing. But again, I think improvements will come quickly. In time we might even see some clever machine learning of amp control knobs too.
I'm also interested in how this technology can be applied to create better component emulations, and possibly better software-based amp designs overall. It might also be used to model the power amp section of an amp (since this usually has only one variable parameter, presence), then drive that with a physically modelled pre-amp. There are lots of possibilities! I've always been interested in being able to design amps and effects; that's still what I hope to do, even if there are technologies to perfectly replicate physical devices.
Of course, I do also expect that the market for amp software will become even more difficult to operate in, especially given all the other economic and political factors we're living through currently. It is a concern, and we'll need to think carefully about where to focus the development effort, and how to best market what we already have.
It's an awesome prospect that there is now so much amazing technology at our fingertips. But the task of curating sounds isn't necessarily going to get easier when options are exploding. In some ways, I feel it is a huge distraction for anyone trying to learn to play guitar these days. Back when I first picked up the guitar, I had nothing but a cheap practice amp (and used the overdriven input of a tape deck to get my distortion). Once I could afford a half-decent amp, I was still stuck with a very narrow sound palette; you just make it work! If you learn a classical instrument like the violin or cello, it's just you, the instrument and your practice time 😉