Linux and the Top500
Posted Jun 25, 2005 14:08 UTC (Sat)
by leonbrooks (guest, #1494)
[Link] (30 responses)
Posted Jun 25, 2005 17:43 UTC (Sat)
by NAR (subscriber, #1313)
[Link] (10 responses)
Fully supporting PC hardware, especially 3D videocards?
Posted Jun 25, 2005 18:41 UTC (Sat)
by The_Pirate (guest, #21740)
[Link]
A few hardware manufacturers (number rapidly decreasing) still refuse to publish the specs on their hardware. Their problem. They will go the way of the Dodo.
The problem with 3D hardware is IMHO one of this kind. Perhaps it would help if some competition to the two big players showed up? Could be kinda fun if some small manufacturer created an open-source 3D card.
Even these two seem to be slowly catching on. At least they now offer some feeble support. I guess it will intensify:
"When you got them by the balls, their hearts and minds will follow..."
Posted Jun 25, 2005 19:00 UTC (Sat)
by tazmanmo55 (guest, #30674)
[Link] (6 responses)
Posted Jun 25, 2005 20:30 UTC (Sat)
by NAR (subscriber, #1313)
[Link] (5 responses)
This year? OK, my AverMedia card doesn't work under Windows XP, but Linux crashes with the Nvidia kernel module and doesn't handle my MP3 player, so it's 2:1 to Windows :-( Now I go back to playing GTA3 - which is currently the killer application for me that decides which OS I should boot.
Posted Jun 26, 2005 2:33 UTC (Sun)
by JohnBell (guest, #12625)
[Link]
Posted Jun 26, 2005 5:58 UTC (Sun)
by einstein (guest, #2052)
[Link] (2 responses)
It just smells like FUD to me -
On the subject of games, all the games I like are native linux games - q3a, ut2004, doom3 etc. If there was a game I just had to play, and it wasn't available for linux, guess what? I'll buy a console, which is far less trouble and expense than having to buy into the whole microsoft routine just to play a game.
Posted Jun 26, 2005 11:11 UTC (Sun)
by dgc (subscriber, #6611)
[Link] (1 response)
I used to think so, too, until I upgraded my 2 machines to
2.6 kernels. They'd been running the nvidia drivers for ~3 years
without any problems whatsoever across 2.4.15-26.
Since I upgraded to 2.6.5 about 9 months ago, I haven't been able to
get the nvidia drivers to even boot X without locking the machine solid.
I started with the same driver release that worked just fine on 2.4
kernels. Since then I've tried every new driver release hoping that
they'd finally fixed the drivers to work properly on a 2.6.x kernel, but
they're still b0rked.
So now I'm using the nv driver because it just works. And I can't
replace the video card in my laptop...
just my 2c worth.
Posted Jun 26, 2005 16:53 UTC (Sun)
by einstein (guest, #2052)
[Link]
There are a few things to check: whether you're using the nvidia AGP or the in-kernel AGP driver, whether your BIOS setting for AGP speed matches what your kernel is trying to use, and whether your hardware is listed in the blacklist in the nvidia-installer README, which is BTW a very helpful doc if you're having any issues.
You might also try posting in the nvidia forums; they have some good Linux people and are responsive.
Posted Jun 26, 2005 3:41 UTC (Sun)
by tialaramex (subscriber, #21167)
[Link]
We have (through occasional vendor documents, reverse engineering, guesswork and other sources) most of the know-how to get performance comparable to Windows out of all existing (up to R4xx series) Radeons, and for GeForce cards up to the NV1x series (i.e. up to GeForce 2 and GeForce 4MX, but not the full 4, 5 or 6 series).
But turning that know-how into drivers, testing the drivers, and getting them rolled into major distros like Fedora Core is a lot of work, and currently with games not being big spenders in Linux land, there's very little money to pay for that work. That's where volunteers come in, especially those who can program and aren't afraid to learn new stuff.
The project you need to help with is dri.sourceforge.net
Posted Jun 27, 2005 12:33 UTC (Mon)
by hymer (guest, #30694)
[Link]
Yes, I really do need 3D video PC support on my Alpha...
Posted Jun 25, 2005 20:10 UTC (Sat)
by jwb (guest, #15467)
[Link] (18 responses)
Don't forget that Microsoft has a lot of top-tier research talent and many of their fundamental operating system components, including their kernel and their filesystem, are very sound. It takes a certain ignorance to come out and claim that Linux is superior in every conceivable application.
Posted Jun 25, 2005 21:21 UTC (Sat)
by MathFox (guest, #6104)
[Link] (5 responses)
Posted Jun 25, 2005 22:47 UTC (Sat)
by jwb (guest, #15467)
[Link] (4 responses)
Actually I believe this was achieved on a 32-way Itanium 2 with over 2500 disks in a SAN. But no such number has been achieved on a similar Linux machine, including by the Linux on Itanium scalability people (at gelato.org), as far as I have heard.
Posted Jun 26, 2005 3:00 UTC (Sun)
by JohnBell (guest, #12625)
[Link] (3 responses)
Posted Jun 26, 2005 14:30 UTC (Sun)
by gdt (subscriber, #6284)
[Link] (2 responses)
From "A practical approach to TCP high speed WAN data transfers": the RAID controllers used... measured with eight drives achieve 445MB/s sequential read and 455MB/s sequential write. This is an 85% increase over the best write performance in the Linux systems, and is mainly due to better drivers and optimization in the Microsoft OS.
Those Linux 2.6 systems were using XFS, which is the best Linux filesystem for these large sequential datasets. Note that the paper says nothing about NTFS vs XFS performance -- unless SGI stuffed up totally, XFS should be better, as it is tailored to the task at hand (big sequential datasets with one reader/writer). Of course, that assumption is even more damning of the Linux I/O subsystem or drivers. Also note that sendfile() is being used (their LSR attempt used iperf); it was write throughput that was their bottleneck.
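For scale, a quick sanity check on those numbers (a sketch; the 455MB/s and 85% figures come from the paper quoted above, the rest is plain arithmetic):

```python
# Back-of-the-envelope check on the figures quoted above.
# Assumption: the "85% increase" is relative to the best Linux
# sequential-write number measured in the same paper.
windows_write_mb_s = 455      # eight-drive sequential write under Windows
improvement = 0.85            # "85% increase over the best write performance"

linux_write_mb_s = windows_write_mb_s / (1 + improvement)
per_drive_mb_s = windows_write_mb_s / 8

print(f"implied best Linux write: {linux_write_mb_s:.0f} MB/s")  # ~246 MB/s
print(f"per-drive throughput: {per_drive_mb_s:.1f} MB/s")        # ~56.9 MB/s
```

So per drive the Windows setup sustains well under 60MB/s; the gap between the systems is in aggregation, not raw disk speed.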
Posted Jun 27, 2005 16:34 UTC (Mon)
by philmes (guest, #5024)
[Link]
Posted Jul 1, 2005 20:24 UTC (Fri)
by RogerL (guest, #4046)
[Link]
Posted Jun 25, 2005 22:52 UTC (Sat)
by rqosa (subscriber, #24136)
[Link] (2 responses)
Posted Jun 26, 2005 3:01 UTC (Sun)
by JohnBell (guest, #12625)
[Link] (1 response)
Posted Jun 26, 2005 3:35 UTC (Sun)
by tialaramex (subscriber, #21167)
[Link]
The 2nd Extended Filesystem is also very CPU frugal. At 3.5GB/s the CPU load will be considerable from any filesystem, so choosing a lightweight one may contribute more to improve speed than clever re-ordering algorithms.
The whole thing sounds to me more like a network driver benchmark than anything else, and we've been there before (I think 4-5 years ago, Microsoft specifies 4x 100Mbit cards, wins clear benchmark lead, someone retries with 1x1Gbit card, Linux wins). Obviously to get 3.5Gbyte/s storage you need 4 or more 10Gbit cards, so probably this is a matter of optimisation in a 10Gbit NIC driver or something similar.
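The NIC arithmetic in that last sentence can be sketched out (assuming decimal units and zero protocol overhead; real-world overhead is why four or more cards get mentioned rather than the theoretical three):

```python
import math

target_gbyte_s = 3.5     # claimed NTFS throughput
nic_gbit_s = 10          # one 10Gbit NIC

required_gbit_s = target_gbyte_s * 8   # 28 Gbit/s of line rate needed
nics_at_line_rate = math.ceil(required_gbit_s / nic_gbit_s)
print(nics_at_line_rate)   # 3 at theoretical line rate; 4+ once overhead bites
```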
Posted Jun 25, 2005 23:18 UTC (Sat)
by khim (subscriber, #9252)
[Link]
> There's lots of fundamental areas where Win32 is superior to Linux. For one, reading and writing to filesystems. I know Windows isn't widely known to be an I/O monster, but it actually is. You can read or write to NTFS at about 3.5GB/s with a sufficiently large computer. Linux ext3 tops out at about 400MB/s read or write on today's fastest machines. XFS doesn't seem to have a read limit, but writes don't exceed 500MB/s for some reason.
This only proves what I've known for a very long time: lots and lots of special-purpose cases. Bad for maintenance, bad for real tasks, great for benchmarks.
> Don't forget that Microsoft has a lot of top-tier research talent and many of their fundamental operating system components, including their kernel and their filesystem, are very sound.
Sorry, but I've actually worked with "their kernel" and "their filesystem". It's a mess: a few quite good ideas and a huge amount of junk everywhere.
> It takes a certain ignorance to come out and claim that Linux is superior in every conceivable application.
Not really. It only takes common sense. If stuff is needed often enough, it will work just fine, in the stock kernel or in some addon. Someone will actually come and fix it. If stuff is not needed in real life, no one will bother. With Windows you have some things deemed important by "big bosses" where Windows is "much better" - and which are mostly irrelevant. How many systems with hardware actually capable of delivering more than 500MB/sec are out there? How many users actually need such throughput?
Posted Jun 26, 2005 7:25 UTC (Sun)
by nurhussein (guest, #16226)
[Link] (2 responses)
Posted Jun 26, 2005 8:00 UTC (Sun)
by error27 (subscriber, #8346)
[Link] (1 response)
It's really not fair to make comparisons between two operating systems unless you are going to be more thorough and identify the exact bottlenecks. There are a lot of variables: IDE vs SATA vs SCSI. What RAID card are you using? Which file system are you using?
Linux filesystems could do better at lowering seek times. A defrag tool would help. Another idea would be to store data used during bootup all together so that boot times would be quicker. I don't know very much about windows filesystems...
Posted Jun 26, 2005 14:49 UTC (Sun)
by whitemice (guest, #3748)
[Link]
And those are the superficial, obvious variables. There are many, many more, including a myriad of both kernel and filesystem parameters, partitioning schemes, drive firmware, etc. Performance tuning ("real" performance tuning) is extremely complicated. This is why almost all such benchmarks are complete and utter crap. Claiming one favors your side or the other is just creating a straw man, regardless of which side you are on. The fact is that I/O throughput is sufficient in both Win32 and Linux for the vast majority of tasks; any general sense of inferiority or superiority must be decided on other merits.
> A defrag tool would help.
XFS provides one.
> Another idea would be to store data used during bootup all together so boot times were quicker.
Simply done by choosing a sane partitioning technique, although just using more spindles works as well.
Posted Jun 26, 2005 11:48 UTC (Sun)
by dgc (subscriber, #6611)
[Link] (4 responses)
I assume you are referring to these results, right?
http://scalability.gelato.org/DiskScalability_2fResults
24 SATA disks doesn't seem like the sort of setup to be able to do
multiple GiB/s of write throughput to me. You're comparing that to a
result from a machine with 2500 disks attached!
Sure, they may have identified a bottleneck, but it's quite
likely that a hardware bottleneck is the issue here.
Posted Jun 26, 2005 14:53 UTC (Sun)
by whitemice (guest, #3748)
[Link] (3 responses)
Yep. Comparing 2500 Fibre Channel-attached spindles to 24 SATA spindles is absurd; that is like comparing the Grand Canyon to a roadside ditch. SATA is barely an enterprise-grade storage system; I wouldn't be surprised if an FC cage of 12 spindles outran a 24-spindle SATA cage.
Posted Jun 26, 2005 16:10 UTC (Sun)
by jwb (guest, #15467)
[Link] (2 responses)
Unlike the lot of you blind sycophants, I've actually tried to boost Linux filesystem I/O past the 500MB/s barrier. It just doesn't work. I have a SATA setup here capable of 2GB/s linear reads, but it only hits about 450MB/s when using ext3. And it doesn't even matter if I add or remove CPUs: still 450MB/s. That's what we call a scalability barrier.
Read the paper that gdt linked earlier in the thread. The folks at CERN improved disk bandwidth by a large factor just by switching from Linux to Windows.
Posted Jun 26, 2005 17:33 UTC (Sun)
by iabervon (subscriber, #722)
[Link]
Is this appending tons of data to a single file, or to a set of files, or writing a ton of small new files, or what? That's obviously going to matter in how much the filesystem affects the result. Have you tried just writing the data to a block device without a filesystem, to see if you're maxing out the SATA drivers or something?
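A minimal sketch of the kind of test being suggested: time a plain sequential write through the filesystem, then point the same code at the raw block device (as root, and only at a device whose contents you can destroy). All names here are illustrative; O_DIRECT is omitted for simplicity, so the page cache is still in play.

```python
import os
import time

def mib_per_s(nbytes, seconds):
    """Convert a byte count and elapsed time into MiB/s."""
    return nbytes / (1024 * 1024) / seconds

def sequential_write(path, total_mib=256, chunk_mib=4):
    """Sequentially write total_mib of zeros to `path` and report MiB/s.
    `path` can be a regular file (to test the filesystem) or, with care
    and root privileges, a raw block device like /dev/sdX."""
    chunk = b"\0" * (chunk_mib * 1024 * 1024)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        start = time.monotonic()
        for _ in range(total_mib // chunk_mib):
            os.write(fd, chunk)
        os.fsync(fd)   # force the data out before stopping the clock
        elapsed = time.monotonic() - start
    finally:
        os.close(fd)
    return mib_per_s(total_mib * 1024 * 1024, elapsed)
```

If the raw-device number is far above the filesystem number, the filesystem (or its tuning) is the suspect; if the two match, the driver or the hardware is the ceiling.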
Posted Jun 27, 2005 13:21 UTC (Mon)
by rakoch (guest, #4666)
[Link]
Posted Jun 26, 2005 14:46 UTC (Sun)
by gdt (subscriber, #6284)
[Link] (7 responses)
The most remarkable machine doesn't run Linux. At number 4 is Japan's Earth Simulator, which was bought online in 2002. You don't get another 2002-era machine until number 12.
Posted Jun 27, 2005 1:18 UTC (Mon)
by xoddam (subscriber, #2322)
[Link]
Posted Jun 27, 2005 2:45 UTC (Mon)
by jgreenseid (guest, #18640)
[Link] (4 responses)
It wasn't as simple as "buying one online" in 2002. The idea to build it was initially conceived in 1997. In February 2002, all nodes were brought into operation for checkout. In that period, you had design and R&D for parts starting as early as 1998. There's a four-year window in there from the start of the design phase to delivery. They didn't just say, "Let's get one of those systems on that webpage," and have it show up at their door a few weeks later. For more info, check out the
Birth of the Earth Simulator page from the
Earth Simulator website.
Posted Jun 27, 2005 11:19 UTC (Mon)
by odie (guest, #738)
[Link] (3 responses)
Posted Jun 27, 2005 12:46 UTC (Mon)
by gdt (subscriber, #6284)
[Link] (2 responses)
Of course it was a typo :-( The Earth Simulator is the very opposite of buying a stack of systems online and stringing them together. The ES is one of the few supercomputers which is not a cluster of commodity hardware and it's interesting to see just how very competitive that 'old-fashioned' approach remains.
Posted Jun 27, 2005 21:09 UTC (Mon)
by jgreenseid (guest, #18640)
[Link]
Posted Jun 29, 2005 1:12 UTC (Wed)
by gaurav89 (guest, #30727)
[Link]
Posted Jun 27, 2005 11:16 UTC (Mon)
by ncm (guest, #165)
[Link]
Posted Jun 27, 2005 21:12 UTC (Mon)
by Shewmaker (guest, #1126)
[Link]
Posted Jun 29, 2005 1:11 UTC (Wed)
by gaurav89 (guest, #30727)
[Link]
Blue Gene has a custom single-user-mode kernel known as CNK - the Compute Node Kernel - a fairly small piece of C++ code. Linux is only used for the filesystem - which, although important, is not what defines a supercomputer. The Top500 tests are a test of the MPI implementation and raw processing power, in which Linux plays no role, at least in the Blue Gene family. Blue Gene could just as well have used IBM's AIX instead of Linux and still attained the same results.
...from the cheapest routers with at most a few megs of memory, to
top-flight multi-threaded multi-processor kilo-node supercomputers from a
single codebase. I'd love to see Microsoft try spinning XP or 2003 as
doing better than that. Linux now...
OK, so now we have Linux running on everything...
What is there that it's not better at, other than running Win32 software?
OK, so now we have Linux running on everything...
At the moment, in general, GNU/Linux supports more hardware than M$Win.
OK, so now we have Linux running on everything...
Wow! When was the last time YOU installed WinBlows and didn't have to use a third party or manufacturer's disc to get all the hardware to work properly? Fully supported hardware means all I have to do is install the OS or add my new hardware and VOILA! ..... my new TV tuner card displays the tv program I want to watch or my new DLink wireless works without putting the OEM disc in the tray to install the missing DLLs, INF, or EXE files that come with my FULLY SUPPORTED HARDWARE OS! There isn't a single OS out there that doesn't need a "helping hand" every so often to get the desired results from a piece of hardware. Get a life. At least I was able to watch my Avermedia TV card, use my DLink wireless, grab photos from my Kodak DX4530, print and scan from my HP multi-function printer, etc, etc, etc.... without adding the missing "hardware supported" drivers. If it weren't for the hardware manufacturers' "extras", you'd have an OS that comes with nothing. Hey, that sounds like WinBlows!
OK, so now we have Linux running on everything...
> When was the last time YOU installed WinBlows and didn't have to use a third party or manufacturer's disc to get all the hardware to work properly?
OK, so now we have Linux running on everything...
But you're the exception, not the rule.
OK, so now we have Linux running on everything...
I have a real hard time believing the story about linux "crashing" with nvidia drivers. I've been running nvidia cards in linux for years, on a variety of x86 hardware, and it's always been rock solid. Seriously, I've never had any issue. ATI cards and DRM drivers, yeah they are unstable as hell, but linux on nvidia drivers has a perfect record on all the systems I know about.
OK, so now we have Linux running on everything...
Re: OK, so now we have Linux running on everything...
> I have a real hard time believing the story about linux
> "crashing" with nvidia drivers. I've been running nvidia cards in linux
> for years, on a variety of x86 hardware, and it's always been rock solid.
Interesting. I don't deny that you might be having an issue, but without more info it's hard to know what might be happening with your system. The biggest factor is whether you're running a self-compiled kernel, and if so, what options, etc. you chose. I know that when the kernel first switched to 4k stacks some time ago, it broke the nvidia drivers (along with a lot of other stuff), but nvidia stepped up and updated their drivers fairly quickly, so that shouldn't be an issue if you're trying current drivers.
Re: OK, so now we have Linux running on everything...
A little help would go a long way with free OpenGL drivers.
OK, so now we have Linux running on everything...
"Fully supporting PC hardware, especially 3D videocards"
OK, so now we have Linux running on everything...
oh, by the way... How do I physically install it in my system ??
...there is no bloody AGP, PCI or PCI-X in it...
There's lots of fundamental areas where Win32 is superior to Linux. For one, reading and writing to filesystems. I know Windows isn't widely known to be an I/O monster, but it actually is. You can read or write to NTFS at about 3.5GB/s with a sufficiently large computer. Linux ext3 tops out at about 400MB/s read or write on today's fastest machines. XFS doesn't seem to have a read limit, but writes don't exceed 500MB/s for some reason.
OK, so now we have Linux running on everything...
I really wonder how many harddisks you have to connect to a machine to get to 3.5 GB/sec, when a fast (IDE/SATA) harddisk has to work hard to sustain 50 MB/sec... I really wonder what the MS engineers did to achieve a disk bandwidth that is larger than the RAM bandwidth of a modern PC.
OK, so now we have Linux running on everything...
OK, so now we have Linux running on everything...
I really wonder how many harddisks you have to connect to a machine to get to 3.5 GB/sec, when a fast (IDE/SATA) harddisk has to work hard to sustain 50 MB/sec
70?
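That "70" is just division (a sketch assuming decimal gigabytes, 50 MB/s per disk, and zero RAID or bus overhead):

```python
target_mb_s = 3.5 * 1000    # 3.5 GB/s expressed in MB/s
per_disk_mb_s = 50          # sustained rate of one fast IDE/SATA disk

disks_needed = target_mb_s / per_disk_mb_s
print(disks_needed)   # 70.0
```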
Link please...
OK, so now we have Linux running on everything...
CERN report on Internet2 land speed record and 1Gbps file transfers
That just seems to demonstrate that the Windows drivers for the DAC-SATA-MV8 are better than the Linux ones, doesn't it?
CERN report on Internet2 land speed record and 1Gbps file transfers
Now I guess Linux is better again? (Ignoring HW cost) CERN sustained 1GB/second file transfers using Linux
http://www.linuxhpc.org/stories.php?story=05/03/31/8893233
"Using IBM TotalStorage SAN File System storage virtualization software,
the internal tests shattered performance records during a data challenge
test by CERN by reading and writing data to disk at rates in excess of
1GB/second for a total I/O of over 1 petabyte (1 million gigabytes) in a
13-day period. This result shows that IBM's pioneering virtualization
solution has the ability to manage the anticipated needs of what will be
the most data-intensive experiment in the world. First tests of the
integration of SAN File System with CERN's storage management system for
the LHC experiments have already obtained excellent results."
"As part of the CERN openlab work, IBM has involved several leading
storage management experts from IBM's Almaden Research Center in
California, USA, and Zurich Research Lab in Switzerland in the work at
CERN. In addition, through its Shared University Research (SUR) program,
IBM supplied CERN with 28 terabytes of iSCSI disk storage, a cluster of
six eServer xSeries systems running Linux and on-site engineering support
and services by IBM Switzerland."
> Linux ext3 tops out at about 400MB/s read or write on today's
> fastest machines.
OK, so now we have Linux running on everything...
What about ext2?
This is a job for XFS, not ext2.
OK, so now we have Linux running on everything...
Why so sure? ext2 write performance used to be untouchable because it's completely unsafe. Are you sure XFS manages to beat that on the straight with metadata journalling and other impediments? Or are you depending on XFS disk extents to be a better idea than N-indirect blocks from ext2?
OK, so now we have Linux running on everything...
OK, so now we have Linux running on everything...
While I'm sure Windows kernel developers are talented programmers, I'm not sure about Windows filesystems being faster. Copying huge files (like CD ISO images) onto Windows partitions in Windows takes forever, while on Linux it's amazingly fast (I use reiserfs). A lot of times when copying files in Linux, I've found it to be blink-and-you-miss-it fast even for moderately sized files. YMMV.
OK, so now we have Linux running on everything...
So the parent post is talking about a setup with 2500 harddrives. That kind of system will cost you over a million dollars. You can't compare it to your desktop.
OK, so now we have Linux running on everything...
OK, so now we have Linux running on everything...
> It's really not fair to make comparisons between two operating systems
> unless you are going to be more thorough and identify the exact
> bottlenecks. There are a lot of variables, IDE vs SATA vs SCSI. What RAID
> card are you using? Which file system are you using?
> Another idea would be to store data used during bootup all together so
> boot times were quicker.
OK, so now we have Linux running on everything...
Linux ext3 tops out at about 400MB/s read or write on today's fastest
machines. XFS doesn't seem to have a read limit, but writes don't exceed
500MB/s for some reason.
OK, so now we have Linux running on everything...
> 24 Sata disks doesn't seem like the sort of setup to be able to do multiple
> GiB/s of write throughput to me. You're comparing that to a result from a
> machine with 2500 disks attached!
I would be surprised, because I've tried it. SATA is faster than FC and more efficient. Using port multipliers you can quite easily saturate the 300MB/s nominal speed of current SATA channels.
OK, so now we have Linux running on everything...
Have you done any profiling to figure out where it's blocking? I'd guess that it's something like your journal being too small (for this load, not in general) or some setting limiting the amount of I/O in progress. I think that having a lot of simultaneous outstanding SATA requests is a feature still under development, so it might be that.
OK, so now we have Linux running on everything...
OK, so now we have Linux running on everything...
So where is the bottleneck? Inefficiency in handling some latency? CPU overuse because of filesystems being single-threaded?
Just for comparison: did you try how sequential raw I/O goes?
-Rudiger
Linux and the Top500
> Japan's Earth Simulator, which was bought online in 2002
Linux and the Top500
From ebay, or dell.com?
Linux and the Top500
It was obviously a simple typo; the Earth Simulator was brought online in 2002.
Linux and the Top500
Linux and the Top500
Sorry gdt, I didn't even think to see the typo. I was having this very conversation with someone last week, when he was trying to say that if you could just go online and buy an IBM mainframe, a Cray, or an Earth Simulator (yeah, he said this) right off a website, why would you want 10,000 Linux boxes. So that was fresh in my mind when I saw your comment. My bad...
Linux and the Top500
Whoever told you that Blue Gene is a cluster of commodity hardware??
Linux and the Top500
I suspect he meant to type "brought online". :-)
Linux and the Top500
Blue Gene systems only run Linux (SuSE) on their IO nodes. They have a proprietary kernel on the compute nodes.
Linux and the Top500
I am not sure what everyone is so happy about. It's not like Linux had any role in making Blue Gene the fastest supercomputer. Blue Gene uses Linux only for the filesystem