I’ve been reading up on EQs, and watching people who know what they’re talking about discuss them. Most EQs are designed on the feedback/inverse-feedback principle from the old analogue days (you guys in this forum probably know this already, so I’ll keep it swift). Other kinds of EQ have been developed purely in the digital realm.. FIR/convolution designs.. (and others, I’m sure).. but they all suffer from the same problem of messing with the phase, unless linear-phase algorithms are used, which bring their own buffer/latency requirements… And linear phase only _half_ solves the phase problem, because it adds its own potential artifacts (pre-ringing).
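To make the phase point concrete, here's a minimal sketch (my own illustration, not from the post) of a standard minimum-phase peaking EQ band using the well-known RBJ cookbook biquad formulas. Evaluating its frequency response shows the boost you asked for at the centre frequency, and a nonzero phase shift everywhere around it, which is exactly the "messing with the phase" being described:

```python
import cmath, math

def peaking_biquad(fs, f0, gain_db, q):
    """RBJ cookbook peaking-EQ coefficients (a minimum-phase IIR filter)."""
    A = 10 ** (gain_db / 40)                  # amplitude factor
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    # normalise so a[0] == 1
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def response(b, a, fs, f):
    """Complex frequency response H(z) at frequency f, z = e^{jw}."""
    z = cmath.exp(-1j * 2 * math.pi * f / fs)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return num / den

b, a = peaking_biquad(fs=48000, f0=1000, gain_db=6.0, q=1.0)
h_centre = response(b, a, 48000, 1000)   # at the centre frequency
h_below  = response(b, a, 48000, 500)    # an octave below

print(20 * math.log10(abs(h_centre)))    # ≈ 6.0 dB, the requested boost
print(math.degrees(cmath.phase(h_below)))  # nonzero: the phase has been shifted
```

A linear-phase FIR version of the same curve would zero that phase shift out, at the cost of latency (the buffers) and symmetric pre-ringing around transients.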
At some point, some crazy coder somewhere will ‘realise’ another way to solve the frequency-adjustment problem without affecting the phase or introducing pre-ringing/smearing.
What I’m saying is, we haven’t found all the maths yet which could be used for, or repurposed into, tools for adjusting the frequency content of audio material. We’ve got old feedback solutions from ‘real circuitry’ reinterpreted digitally, and fairly new purely digital solutions which have their own problems.
And since there’s no way it’s going to be me who comes up with anything as useful as that, I’ll just carry on being creative with the low-level things I fudge together.
But that’s just the problem with EQ; that’s not even getting creative with the concept of oscillators.. Physical modelling will roll on forever, getting better and weirder, as we try to more accurately model the space and universe around us.
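On the physical-modelling point, the classic minimal example (my illustration, not something from the post) is the Karplus-Strong plucked string: a delay line filled with noise, recirculated through a crude averaging low-pass so the "string" darkens and decays like a real pluck:

```python
import random

def karplus_strong(freq, fs=48000, dur=1.0, seed=0):
    """Karplus-Strong plucked string: a noise-filled delay line whose
    length sets the pitch, damped by averaging adjacent samples."""
    rng = random.Random(seed)
    n = int(fs / freq)                           # delay-line length -> pitch
    buf = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    out = []
    for i in range(int(fs * dur)):
        out.append(buf[i % n])
        # the damping/low-pass: average this sample with its neighbour
        buf[i % n] = 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out

tone = karplus_strong(220.0)   # one second of a 220 Hz "pluck"
```

The decay here is just repeated averaging; swap that one line for a different filter and you get a different "material", which is why this family of models keeps getting weirder.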