At 1/23/11 08:45 PM, T1LTED wrote:
I am fairly new to FL studio and creating music online in general. I was wondering what the 64 point sinc, linear 128 sinc, etc option is when exporting a song in FL as an Mp3. How does changing this option affect an exported song?
To understand this you'll need to understand a bit about how songs are digitized. Some of this may be SLIGHTLY wrong; it's been a while since I've worked with this process explicitly (I did a bunch of work on digitizing sound last year, but this year has been nada but circuits and logic systems, so it's a bit rusty).
The sound wave is stored as a long series of numbers (ultimately 0's and 1's) using a process known as "sampling". At a fixed interval (usually tens of thousands of times per second) the amplitude of the waveform is measured, and those measurements are what get stored in the file. This is what is being referred to by a track's samplerate (audio CDs are standardized to 44.1kHz, or 44,100 samples every second). Interesting fact: it's been found that in order to perfectly reconstruct a signal, you need a sample rate of at least twice its highest frequency (the Nyquist rate) - a track that exists only in the frequency spectrum 0Hz to 100Hz would only need 200 samples per second, but for our audible spectrum of 0-20,000Hz we need at least 40,000 samples/second, and the music industry uses 44,100 samples to be safe and/or give additional headroom frequency-wise.
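To make the sampling idea concrete, here's a toy Python sketch (my own illustration, not anything from FL Studio's code - the function name is made up) that measures a sine wave at the CD sample rate:

```python
import math

def sample_wave(freq_hz, sample_rate, duration_s):
    """Measure a sine wave's amplitude `sample_rate` times per second.

    Each returned number is one "sample point" - exactly the kind of
    value a WAV file stores as bits.
    """
    n = int(sample_rate * duration_s)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(n)]

samples = sample_wave(100, 44100, 0.01)  # a 100Hz tone, 10 milliseconds
print(len(samples))  # 441 sample points: 44,100 per second * 0.01 s
```

Even for just 10 milliseconds of one pure tone you get 441 numbers, which is why raw audio files are so big.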
When reconstructing the file into a soundwave this process is reversed (oversimplification warning - this part is not literally accurate, but the gist of it is close enough and looking it up would take too long), and the stored sample points are connected back up to redraw the waveform (the graph you are familiar with).
This is what "interpolation" is talking about. It is an algorithm for determining how best to fill in the wave between sample points. Linear interpolation would just be a straight line connecting the two points, which sounds good, but not really perfect - there'll be a slight dip in audio quality, since real waves have curves and nuances that a series of straight lines, no matter how short, cannot quite capture.
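Linear interpolation is literally just this (my own minimal sketch):

```python
def lerp(y0, y1, t):
    """Straight-line interpolation between two neighboring samples.

    t is the fractional position between sample y0 and sample y1,
    from 0.0 (exactly at y0) to 1.0 (exactly at y1).
    """
    return y0 + (y1 - y0) * t

print(lerp(0.0, 1.0, 0.25))  # 0.25 - a quarter of the way up the line
```

It's dirt cheap (one multiply and two adds per output value), which is exactly why it's the safe choice for live playback.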
Something like 64, 128, or 256 point sinc interpolation is a method of reconstructing the wave that looks at that many surrounding sample points instead of just the two nearest ones. It's much more complex, and gets much better results, than a straight line ever would. It is also a big fat CPU hog, which is why it warns you not to use it during live applications, but to save it for rendering a finished product.
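For the curious, here's roughly what sinc interpolation looks like - my own simplified sketch, which skips the windowing a real resampler would apply to the sinc weights, and is definitely not FL Studio's actual code:

```python
import math

def sinc(x):
    """The sinc function: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def sinc_interpolate(samples, pos, points=64):
    """Estimate the wave's value at fractional position `pos`.

    Each of the `points` nearest samples contributes to the result,
    weighted by sinc of its distance from `pos` - that's the "64 point"
    in "64 point sinc". More points = better reconstruction = more CPU.
    """
    center = int(pos)
    half = points // 2
    total = 0.0
    for n in range(center - half + 1, center + half + 1):
        if 0 <= n < len(samples):
            total += samples[n] * sinc(pos - n)
    return total
```

Notice the loop: every single output value costs `points` multiplies instead of linear interpolation's one, which is where the CPU hit comes from.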
Again, some of that is probably wrong, or much too oversimplified (I'll admit, I forget some of the process's specifics), but most of that is mostly close enough that you should be able to understand what it means.
TL;DR - It makes your song sound better but takes a big chunk of your processor to do it. Don't use it* for live shows, do use it for rendering your finished product.
*"it" being advanced interpolation algorithms above linear or 6 point hermite.