Hey I have a question about rendering,
Why does setting the sample rate in the input/output settings to 192000 and the interpolation to 512-point sinc make everything sound infinitely better, and change specific sounds completely, even after rendering?
And why would I ever want to use anything lower? Is it just too CPU- or memory-heavy for a huge project?
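For anyone curious what "sinc interpolation" actually does: when a sample is played back at a different pitch or rate, the engine has to estimate values *between* the stored sample points, and higher-point sinc does that with more neighboring points and so less distortion. Here's a minimal sketch of the idea (a windowed-sinc interpolator written from scratch; the function name, tap count, and Hann window are my own choices, not anything from FL Studio's actual engine):

```python
import math

def windowed_sinc_interp(x, t, half_width=8):
    """Estimate the value of sampled signal x at fractional position t
    using windowed-sinc interpolation (Hann window, half_width taps
    per side, i.e. 16 points total here; "512-point" uses far more)."""
    n0 = int(math.floor(t))
    total = 0.0
    for n in range(n0 - half_width + 1, n0 + half_width + 1):
        if 0 <= n < len(x):
            d = t - n  # distance from this sample point
            # ideal low-pass (sinc) kernel
            s = 1.0 if d == 0 else math.sin(math.pi * d) / (math.pi * d)
            # Hann window tapers the kernel so it can be truncated
            w = 0.5 * (1.0 + math.cos(math.pi * d / half_width))
            total += x[n] * s * w
    return total

# A low-frequency sine sampled at integer points is recovered
# accurately at an in-between position:
sig = [math.sin(2 * math.pi * 0.05 * n) for n in range(200)]
approx = windowed_sinc_interp(sig, 100.5)
exact = math.sin(2 * math.pi * 0.05 * 100.5)
```

With more taps the reconstruction error shrinks further, which is why the high-point sinc settings sound cleaner but cost more CPU.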
Also, would dragging samples into the project be affected by the sample rate setting? What about recordings?
Do underruns affect the rendering process of sounds?
Does any other property in the audio settings affect the overall rendering of sounds (polling, hardware buffer, multithreaded generator/mixer processing, tick lengths, things meant to improve CPU performance), or do they just affect real-time playback?
What's the difference between driver, hybrid, and mixer playback? Does it affect rendering?
Can I render FL-Chan?
Please respond.