Guys, clock speed is not proportional to performance! If that were true, a Pentium 4 factory-clocked at around 4 GHz would be faster than a 3.3 GHz i7, right? Wrong. Many things dictate actual processing speed, and clock speed alone is not a reliable indicator of it. The only case where a clock speed increase will definitely make a CPU faster is between two otherwise identical CPUs; that's why overclocking works. More cores also greatly improve processing ability, but even then: many AMD processors have more cores and a faster clock rate than the leading Intel processors, yet they still benchmark well behind on efficiency and speed.
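To make the point concrete, here's a toy sketch: throughput depends on instructions per cycle (IPC) as much as on clock. The IPC figures below are made up for illustration, not real chip specs.

```python
# Very simplified model: ignores memory latency, caches, turbo, SIMD, etc.
def instructions_per_second(clock_ghz, ipc):
    """Rough throughput = clock (cycles/sec) x instructions per cycle."""
    return clock_ghz * 1e9 * ipc

# Hypothetical IPC values, purely illustrative:
pentium4 = instructions_per_second(4.0, 0.8)   # high clock, poor IPC
core_i7  = instructions_per_second(3.3, 2.5)   # lower clock, much better IPC

print(core_i7 > pentium4)  # the lower-clocked chip comes out ahead
```

Swap in different IPC guesses and you can see how a "slower" chip on paper wins in practice.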
The only way to really gauge the true speed is to look at actual performance. I wrote a tutorial a while back on how to use Linpack to benchmark an Intel CPU. This gives you a true value for how many mathematical operations it can perform per second (FLoating-point Operations Per Second, or FLOPS; at these scales, usually GigaFLOPS, i.e. GFLOPS). That's a good near-true measure of speed, but figures are hard to find for some CPUs, especially older ones or AMDs. And since it only measures mathematical processing speed, it doesn't really account for the other things a CPU is used for in everyday life. This is a good proprietary benchmarking site that includes most, if not all, modern processors, with a proprietary benchmark based on 3D performance, manipulating objects in your desktop environment, running simple operations and so on, which is more helpful when choosing a general-use processor. It should still correlate with the GFLOPS figure, however.
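If you just want a quick ballpark rather than a full Linpack run, you can time a big matrix multiply and count the floating-point operations yourself. This is a crude sketch, not a rigorous benchmark (it measures one NumPy matmul, which also depends on the BLAS library installed):

```python
import time
import numpy as np

# An n x n matrix multiply costs roughly 2*n^3 floating-point operations.
n = 1000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
a @ b
elapsed = time.perf_counter() - start

gflops = (2 * n**3) / elapsed / 1e9
print(f"~{gflops:.1f} GFLOPS (single matmul, rough estimate only)")
```

Expect this to land below the official Linpack figure, since Linpack is heavily optimised and runs far longer to smooth out noise.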
When I chose my CPU, I went for the slower 2.8 GHz i5-760 rather than the 3.33 GHz i5-661, as the 760 was newer and therefore benchmarked faster. If you want to compare benchmarks to mine, it gets 4580 CPUmarks (based on the baseline from the website, as I don't have the software), and 40-42 GFLOPS from a self-run Linpack test after optimisation and overclocking to 3.33 GHz. The baseline was around 35.
Another somewhat related fact: you can benchmark GPUs in very similar ways, and Wikipedia lists the benchmarks for most of them. A graphics card is effectively a very basic computer dedicated to graphics rendering: it has a processor (the GPU, or graphics processing unit), its own RAM, and a board (sometimes called a daughter-board, as opposed to the motherboard). Because GPUs are specifically designed to render graphics, their benchmarks can reach TFLOPS (teraflops; that's trillions of operations per second). Looking at those numbers, most AMD/ATi GPUs score significantly higher than nVidia's, but I still prefer nVidia, as AMD/ATi GPUs seem to be very unstable and, in my experience, many arrive faulty. Also, nVidia supports CUDA, which lets you actually put all that processing power to a useful purpose; I often run things on my GPU when I'm not using my computer, as it's significantly more efficient. On top of that, there is a lot more support out there for nVidia GPUs: most games are optimised and developed for them, and the build quality is higher. So ATi/AMD GPUs will be faster, assuming they work; nVidia GPUs are more versatile, which is why I prefer them.
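For what it's worth, those headline TFLOPS numbers are usually theoretical peaks, computed from the spec sheet rather than measured. The formula is roughly shader cores times clock times ops per cycle (a fused multiply-add counts as two operations). The core count and clock below are illustrative, not any specific card's real specs:

```python
# Theoretical peak throughput from spec-sheet numbers.
def peak_tflops(shader_cores, clock_mhz, ops_per_cycle=2):
    """cores x clock (Hz) x ops/cycle, expressed in TFLOPS.
    ops_per_cycle=2 assumes one fused multiply-add per core per cycle."""
    return shader_cores * clock_mhz * 1e6 * ops_per_cycle / 1e12

# Made-up example card: 1536 shader cores at 1000 MHz
print(peak_tflops(1536, 1000))
```

Real sustained throughput is always lower, since memory bandwidth and scheduling keep the cores from being fully busy.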
TL;DR: Don't be lazy, just read it. I spent the last half-hour writing this.