Quote:
Originally Posted by Trumpcard
I understand what you are saying, but switching over to a third-party, non-open-source compiler is risky, and ultimately against the open source principles Linux is built on. If you don't get the source code for it, then your Linux distro shouldn't use it as a fundamental building block. I can see another SCO fiasco on the horizon...
I'm not following your argument at all. Where are the legal risks? At any rate, the freedom to recompile your source code is precisely what open source is all about. When a new chip or compiler or idea or whatever comes along, you can do your thing and reap the benefits, since you can do whatever you want with the source.
Quote:
Gentoo as a whole does not support icc en masse, even though icc is superior for Intel compilation. I haven't seen stats for other processors, but I would be willing to bet that the performance gains are much smaller.
Yeah, I'm positive that you're right on in that regard. I mean, it _is_ an Intel compiler :) Besides, there are features on the newer Intel chips (SSE2 and the like) that just don't exist elsewhere.
Quote:
rewriting thousands of ebuilds to work with an ultimately COMMERCIAL compiler for a single processor implementation is a lot of work and doesn't make much sense to me.
Your argument that builds would have to "support" icc in particular is not really appropriate, IMHO. The fact of the matter is that the Intel compiler (well, at least its parser) is fully ANSI compliant, and Intel has gone to great lengths to make it command-line compatible with GCC. I don't think that anyone would argue against writing ANSI-compatible C++ code, do you? Code that will compile under gcc but won't under icc is typically either making use of a nonstandard gcc feature or abusing gcc's leniency in some way. That isn't just an incompatibility with commercial compilers; it's a problem that hurts portability to other free platforms just as badly -- one that shows up even in moving from one version of gcc to another (as eqemu's code readily demonstrates). The fact that many people are already running Linux kernels compiled with icc, and the fact that 2.6 is definitely going to support icc out of the box, are pretty strong points in my favor.
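For what it's worth, here's a tiny made-up example of the sort of thing I mean -- it isn't from any real package, and it builds cleanly under plain gcc, but a compiler in strict ANSI C89 mode (or even gcc -ansi -pedantic) will complain about both extensions used here:

Code:
/* Made-up snippet: builds under plain gcc, trips up strict ANSI C89. */
#include <stdio.h>

int main(void)
{
    int n = 4;
    int vla[n];            /* variable-length array: C99/GNU, not ANSI C89 */
    int sum = ({           /* statement expression: a gcc extension */
        int s = 0, i;
        for (i = 0; i < n; i++) {
            vla[i] = i;
            s += vla[i];
        }
        s;
    });

    printf("sum = %d\n", sum);
    return 0;
}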
Quote:
So your xmms opens 30% faster! Personally, I'd rather wait for the gcc folks to catch up. 3.5 is planning on including autovectorization, which should drastically narrow the performance/optimization gap between icc and gcc.
Well, that's your prerogative, and I don't fault you for it. OTOH, I doubt that everyone would agree with you. GCC has gradually gotten _slower_ over time, and I'm not sure that platform-independent optimizations alone can ever approach the benefits of platform-specific ones. Further, it is going to be very hard to beat Intel at their own game. I can't compare gcc 3.5 against icc 7 yet, but all indicators point to a ~20-30 percent improvement in fixed-point code (maybe due to more aggressive memory management), and as much as double the performance or more in floating-point-intensive code, by moving to icc now. 30% is a big deal. Twice as fast is a very big deal -- especially for something as simple as a compiler change.
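To make the floating-point case concrete, here's a toy loop of my own devising (not a real benchmark) of exactly the shape that icc vectorizes today and that gcc 3.5's planned autovectorizer is aimed at:

Code:
/* Toy loop, not a benchmark: the classic saxpy kernel.  Every
   iteration is independent and walks contiguous floats, so a
   vectorizing compiler can turn the loop into 4-wide SSE
   operations -- which is where the big floating point numbers
   tend to come from. */
#define N 1000000

void saxpy(float a, const float *x, float *y)
{
    int i;
    for (i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];
}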
Quote:
So, overall, I see what you're saying, but I think it would ultimately be a bad idea to put the fate of a Linux distro (and a lot of initial work and possible rework) in the hands of a profit-driven company...
Nobody is going to argue that putting "the fate of a Linux distro ... in the hands of a profit-driven company" is a good idea. I'm totally at a loss, however, as to how refining the emerge build process and making a few packages a bit more standards compliant gets us there. Again, I'd like to point out that the Intel compiler is ANSI compliant. Since gcc itself is striving toward compliance, it is reasonable to think that code that won't compile under icc won't compile under gcc much longer, either.
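For what it's worth, the Gentoo side of the experiment is roughly a one-file change. Here's a sketch of what pointing Portage at icc might look like in /etc/make.conf -- illustrative only, I haven't verified it against every ebuild, and the C++ driver's name varies by icc version:

Code:
# /etc/make.conf -- illustrative sketch, not a blessed configuration.
CC="icc"              # C compiler for Portage to use
CXX="icpc"            # C++ driver; the name may differ by icc version
CFLAGS="-O2"
CXXFLAGS="${CFLAGS}"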
Quote:
Who knows when Intel is going to hit dire straits and decide to milk everyone they can....
Umm... I don't know about the "dire straits" part, but I think it is fair to assume that the milking started a while ago! I remember seeing 486SX chips with math coprocessors onboard that were disabled. I've seen Celerons that had portions rendered unusable, too. All this because it is often cheaper to destroy functionality than to introduce a new manufacturing process. Nonetheless, you're going to have to give me some concrete scenarios where using icc would backfire before I can understand your stance.
Ultimately, I think that the possibility of a 20-200% gain in performance is going to be very attractive to a lot of people. If compiling my ray tracer with icc means that all my images render _twice_ as fast, you'd better believe that I'm going to look into it! I'm pretty sure that my users would want my binary distributions to be based around the build that runs twice as fast, too! Why on Earth wouldn't they? If using icc means that I can squeeze another year out of my aging systems, then I feel further compelled to look into it. We aren't talking about nominal differences here, we're talking big fun -- and hardware is expensive.
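And because icc takes gcc-style command lines, trying it at home is a five-minute smoke test. The file names below are hypothetical and the flags are deliberately minimal:

Code:
# Same hypothetical source, two compilers, time the render.
gcc -O2 -o rt-gcc raytracer.c -lm
icc -O2 -o rt-icc raytracer.c -lm
time ./rt-gcc scene.dat
time ./rt-icc scene.dat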