Vincent Burel is widely recognized for his flagship Windows products catering to audio enthusiasts, notably VB-CABLE (a virtual audio cable akin to JACK on Linux), VoiceMeeter, VB Audio Matrix (similar to the ALSA mixer on Linux), VBAN, MacroButton, and numerous audio plugins known for their exceptional low-latency performance on both Intel and AMD processors. Together, these components form a comprehensive ecosystem for low-latency audio on Windows, comparable to ALSA/JACK on Linux but enhanced with additional features tailored to professional DAW users, real-time audio processing engineers, and researchers.
Back in 1998 you introduced the low-latency plugins QuickVerb and Aphro-V1, competing with hardware units like Lexicon and TC reverbs. In what IDE did you make those? They were probably written in ‘C’; what was the compiler?
Very technical question! In the 90s I was compiling with the Borland and Microsoft VC6 compilers, before the year 2000 I guess, but all the processing was programmed in assembly language while the interface was developed in ‘C’. Latency was not really a problem because computers were mainly used for audio post-production (usually not live streaming), and audio plug-ins were mainly used in mastering. Live production (studio recording, mixing) on a computer was possible with dedicated solutions driven by specific audio hardware (e.g. Pro Tools, Pyramix…), where all the real-time processing (mix, FX, bus) was done inside the audio board; the application was just remote-controlling it (or sending and receiving audio tracks).
What latency would you get with those plugins on, say, a Pentium III computer, running Windows 98 Second Edition I assume? Was DirectX the only viable option for audio streaming then?
In fact nothing has changed since Windows 2000, but before that the latency was usually around 1024 samples (also to save CPU power). Anyway, latency does not really depend on the hardware but on the operating system and its ability to schedule execution threads in time (90s hardware audio units were built with smaller CPUs than a Pentium, but ran under an RTOS or simple interrupt processing). Windows is usually able to schedule tasks within 1 ms (its smallest timer unit). In practice we measure 2.5 ms for secure real-time processing; that’s why we recommend using a buffer size of around 5 ms under Windows. Below 5 ms, you need to validate your PC/Windows configuration.
DirectX is another subject. Using DirectSound to manage audio devices is mainly done by the video-game industry (and some DJ or broadcast applications too). The DirectX plug-in architecture was used a lot in music software applications and DAWs until around 2010.
Do you think WASAPI has made significant progress over the WDM driver model? Has the ASIO API changed much in the transition from WDM to WASAPI?
Not really, and it’s not clear how WASAPI was developed (with Vista). It looks like an interface layer over Kernel Streaming and WaveRT. The new driver architecture coming with Windows 10 is maybe more direct to WASAPI, but the initial problem remains: the Windows audio stack is built around a server of asynchronous streams, not around a real-time audio system. Steinberg understood the problem very early, in the 90s, and developed the ASIO driver architecture, which is a true audio driver with an audio callback based on double buffering, delivering all channels in the same time cell, which is the condition for synchronizing multiple I/O. ASIO is one of the rare audio components that can be considered “state of the art”; that’s why it has never been changed.
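The double-buffering idea behind an ASIO-style callback can be illustrated in plain ‘C’ (a generic sketch, not Steinberg’s actual ASIO API): while the device plays one half of the buffer, the application fills the other half, and the two halves are swapped at every period:

```c
#define PERIOD 256  /* frames per buffer half; a hypothetical size */

/* Application render callback: fill `out` with `frames` samples. */
typedef void (*audio_callback)(float *out, int frames, void *user);

typedef struct {
    float buf[2][PERIOD];  /* the two halves of the double buffer */
    int   hw_index;        /* half currently owned by the "device" */
    audio_callback cb;
    void *user;
} double_buffer;

/* Called once per period (in a real driver, from the device interrupt):
 * the device flips to the other half, and the application is asked to
 * refill the half the device just finished playing. */
static void on_period_elapsed(double_buffer *db)
{
    int app_index = db->hw_index;
    db->hw_index = 1 - db->hw_index;
    db->cb(db->buf[app_index], PERIOD, db->user);
}

/* Demo callback: writes an incrementing sample counter. */
static void counter_cb(float *out, int frames, void *user)
{
    int *n = (int *)user;
    for (int i = 0; i < frames; i++)
        out[i] = (float)(*n)++;
}

/* Simulate `periods` device interrupts; returns total frames rendered. */
static int simulate(int periods)
{
    int count = 0;
    double_buffer db = { .hw_index = 0, .cb = counter_cb, .user = &count };
    for (int p = 0; p < periods; p++)
        on_period_elapsed(&db);
    return count;
}
```

Because the callback hands over all channels of one full period at once, every stream processed inside it stays in the same time cell, which is exactly the synchronization property described above.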
What language and IDE do you use now to support your main software? What is VoiceMeeter written in?
‘C’ of course. When you develop such applications, with low-level programming (and CPU optimization), there is no other way. Assembly language could be used, but it would take too much time; I stopped programming audio processing in ASM in the early 2000s, with x64 and multi-platform development. ‘C’ is perfect for cross-platform programming (it is the only language developed by an engineer, by the way). Sometimes we are obliged to use C++, Objective-C, or Java when the interface or native API is unfortunately not in ‘C’, but I prefer classic ‘C’ programming anyway. I have a partner developer working on specific parts like the macOS and Android ports or system-specific development.
And your software VBAN streams audio over IP on a LAN, correct? Can it also be done over a WAN (the external internet)?
The VBAN protocol is first made for your local network, but if you are a bit of a “network geek” you can use it on the internet as well. You have to open the UDP port on your network router, as explained in our forum topic: forum.vb-audio.com/viewtopic.php?t=479
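For the curious, a VBAN audio datagram starts with a small fixed header before the samples; the sketch below follows the publicly available VBAN specification, but the concrete table values used here (the sample-rate index and bit-format code) are assumptions to verify against the official document:

```c
#include <stdint.h>
#include <string.h>

/* VBAN packet header as described in the public VBAN specification:
 * 28 bytes at the start of each UDP datagram, followed by the audio
 * samples. */
#pragma pack(push, 1)
typedef struct {
    char     vban[4];        /* magic: 'V','B','A','N' */
    uint8_t  format_SR;      /* sub-protocol + sample-rate index */
    uint8_t  format_nbs;     /* samples per frame, minus 1 */
    uint8_t  format_nbc;     /* channel count, minus 1 */
    uint8_t  format_bit;     /* sample data type */
    char     streamname[16]; /* zero-padded stream name */
    uint32_t nuFrame;        /* incrementing frame counter */
} vban_header;
#pragma pack(pop)

/* Fill a header for a 48 kHz, 16-bit, stereo audio stream. */
static void vban_header_init(vban_header *h, const char *name,
                             int nb_samples, uint32_t frame_no)
{
    memset(h, 0, sizeof *h);
    memcpy(h->vban, "VBAN", 4);
    h->format_SR  = 3;                         /* 48000 Hz (assumed index) */
    h->format_nbs = (uint8_t)(nb_samples - 1);
    h->format_nbc = 2 - 1;                     /* stereo */
    h->format_bit = 1;                         /* 16-bit PCM (assumed code) */
    strncpy(h->streamname, name, sizeof h->streamname - 1);
    h->nuFrame = frame_no;
}

/* Basic sanity check of the layout. */
static int vban_header_selfcheck(void)
{
    vban_header h;
    vban_header_init(&h, "Stream1", 64, 1);
    return sizeof(vban_header) == 28
        && memcmp(h.vban, "VBAN", 4) == 0
        && h.format_nbs == 63
        && h.format_nbc == 1
        && h.nuFrame == 1;
}
```

Since each datagram is self-describing (format, channel count, frame counter), a receiver can detect dropped or reordered packets from `nuFrame` alone, which is what makes plain UDP workable for this.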
What is the VBAN latency for 48 kHz, 16-bit sound locally on a LAN? And are there sync issues? A LAN is probably not as reliable as a PCI bus, or is it?
We usually expect less than one video frame of latency (so below 30 ms), so you can watch a movie and get the sound on an iPhone, for example (to use it as a wireless headphone). But if well configured, we can get close to 10 ms on a wired network (Wi-Fi is less reliable, of course).
One of your products is Spectralissime, a spectrum analyser. Is it perhaps based on FFTW? Can you share any secrets about how the FFT was implemented, and whether any specific library was used?
No, we usually do not use third-party libraries or frameworks to develop our applications. In the case of Spectralissime we are not using an FFT (not precise enough) but a bank of band-pass filters.
And you also have many Android and iOS apps that act as clients to your framework, essentially allowing mobile phones and desktop computers to use the same audio streams. In what IDE and language did you write the Android and iOS apps, and were there any additional difficulties compared to Windows apps?
We develop the mobile applications with the tools provided by Apple (Xcode / Objective-C) and Google (Android Studio / Java), but we work in a way that maximizes the amount of common source code (in ‘C’) across every platform. Problems usually come with the platform-specific code and native APIs, which often require a full validation process, sometimes a never-ending one. I mean that the number of different devices, running O/S versions, and successive O/S updates obliges us to re-validate the code and recompile continuously. This is the big difference between today and the 90s, for example. Initial development can take 4 weeks, like in the 90s, but it may take years to be sure it really works for everybody, every device, every use case. This gives a very high price to the “validated source code”. Today, an engineer’s experience and capitalized skills are the key to building a working piece of software, while that was not so crucial in the 90s.
Just four weeks to production? Well, that sounds extremely agile; it is like a single Scrum sprint.
Yes, that’s basically the time you need to create a new app from scratch. Of course you need to have enough software components to believe you can do it, but when you have an idea for an app and you think you have enough bricks to build it, in 4 weeks the app is there. Then you may find that you are missing some modules and technology to really finish it, and/or you realize that you will not be able to make it a commercial product without adding this or that feature, this or that user function… But the application is created; it is born. This may or may not be the result of years of experience and accumulated validated source code, but it is always the beginning of a new adventure that may require years of work to finalize it and keep it alive as a product. And the real change between today (2024) and the 90s is the time and the amount of work needed before (preparation) and after (finalization / maintenance) these 4 weeks.
Nowadays most of your users are live streamers using OBS and similar RTMP software. So, are there any plans to eventually move some of your processing to the cloud?
Not for the moment.
Where can people follow your work? Any social media activity?
On our Discord server, on our forum, and on our Facebook and Twitter (X) accounts. You can find all of our links on this page: VB-Audio Support page