News
POVBench now upgraded!
After years of promising to upgrade POVBench to cope with today's astonishingly fast machines, and especially clusters,
I've finally moved into the 21st century with a proper database-driven benchmarking site built around the new, more demanding benchmark supplied with POV-Ray 3.5.
There are now 543 new benchmark results and 2169 skyvase results.
Submission and searching are now easier, and I hope prettier - comments and suggestions are welcome. Thank you to everyone who contributed; you make it what it is.
August 24 2003 - Beowulf Clusters still hot and getting hotter!
At the University of Kentucky, KASY0, a Linux cluster of 128+4 AMD Athlon XP 2600+ nodes, achieved 471 GFLOPS on 32-bit HPL. At a cost of less than $39,500, that makes it the first supercomputer to break $100/GFLOPS.
Read more in this article, which appeared on Slashdot on August 23 2003.
Well done to Tim Mattox and the guys at Kentucky. I worry that the benchmark doesn't fully illustrate the enormous power of clusters, because each machine in a cluster has to 'waste' so much time precalculating photon maps - a task that cannot be distributed. Note to the POV-Ray authors: can we have a new benchmark with minimal latency, please?
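To put a rough figure on why that matters, here is my own back-of-the-envelope sketch in Python - the 5% serial fraction is made up for illustration, not anything measured from POV-Ray. Any setup work that every node has to repeat behaves like the serial portion in Amdahl's law, and it quickly caps what a large cluster can demonstrate.

    # Back-of-the-envelope illustration (Amdahl's law) of why a setup phase
    # that cannot be distributed, such as photon-map precalculation, limits
    # what a big cluster can show off. The 5% serial fraction is hypothetical.

    def speedup(serial_fraction, nodes):
        """Best-case speedup when serial_fraction of the work stays serial on every run."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / nodes)

    for nodes in (1, 16, 64, 132):
        print(f"{nodes:4d} nodes -> {speedup(0.05, nodes):5.1f}x")
    # Even with only 5% serial work, 132 nodes (KASY0's size) give roughly 17x,
    # nowhere near the 132x you might hope for.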
Also thanks to Andrey Slepuhin for his AMD Opteron results.
August 24 2003 - POVBench redesign imminent!
I am updating my whole site to be PHP-based and frame-free. This may make a few people happier, especially some old Netscape users. Click for a sneak preview! Feedback welcome.
March 1999 - Beowulf Clusters are HOT!
'Beowulf' clusters have been gaining in popularity because of their great scalability, price/performance and ease of configuration.
NASA created the Beowulf Project a few years ago to try to combine the power of several small PCs into something
approaching useful. 'Beowulf class' has become the term for a network of Linux PCs that work together
as a single parallel computer, with impressive results. Egan Ford and Jay Urbanski of IBM demonstrated one such cluster
at LinuxWorld Expo, submitted the results here and made headlines around the world - as I discovered when this page took
7000 visitors and 200,000 hits in just one day, about 100 times the average. (Here's a chart of the
influx, which coincided with the InfoWorld article.)
New Benchmark
When I started benchmarking POV-Ray in 1994, it took hours to render an image on a 386 or 486; now we can do it in
seconds. It's astonishing just how far we have progressed since those early days. Now we
don't even have time to make a cuppa while the image is rendering!
I had planned to create a new image to render: a short, elegant script with
no includes, a bigger canvas, and something that takes about 50 times longer to render than
skyvase. It wasn't easy to find the time, and when I did, I gave up - I'm a perfectionist and was never
happy with what I created. Then POV-Ray 3.5 came out with a benchmark already built in, and at last there won't be any more
confusion over command-line parameters, which I suspect was leading to some silly times.
Let's hope it'll run on clusters too. It'll give them more of a chance to flex their muscles for a little while yet, though I suspect even the new benchmark
isn't demanding enough for the superclusters with several teraflops of performance [shudder!]. I hope some of the Top 500 clusters will participate.
I will keep the skyvase results going, but they are really only useful for single-processor benchmarks.
March 1999 saw the addition of 120 interesting new benchmark results, though it was a few months
before anyone else matched the 3 seconds set by the Netfinity team. The 62x PII-400 Gravitor I cluster
came in at 6 seconds, but the processors were idle most of the time due to communication
overheads. The newer benchmark will reduce the significance of these latencies.
Some of you may notice that I've summed the clock rates of all of a cluster's individual machines to give a virtual clock rate.
A little silly perhaps, but it gives an idea of how much processing is lost to overheads when comparing a large network of slow machines
with a small network of fast machines.
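For what it's worth, here is a rough sketch in Python of the kind of comparison the virtual clock rate allows. The single-machine render time below is invented purely for illustration, not an official POVBench result.

    # Illustrative sketch only: the "virtual clock rate" of a cluster is simply
    # the sum of its members' clock rates. Comparing MHz x seconds against a
    # single machine gives a crude feel for how much work is lost to overheads.
    # The single-machine time below is hypothetical, not a real POVBench entry.

    def virtual_mhz(node_mhz):
        """Sum the clock rates of every machine in the cluster."""
        return sum(node_mhz)

    single = {"mhz": 400, "seconds": 90}                 # one hypothetical PII-400
    cluster = {"mhz": virtual_mhz([400] * 62),           # 62 x PII-400 -> 24800 virtual MHz
               "seconds": 6}

    # "Work" in MHz-seconds; a perfectly efficient cluster would match the single box.
    single_work = single["mhz"] * single["seconds"]      # 36,000
    cluster_work = cluster["mhz"] * cluster["seconds"]   # 148,800
    print("cluster efficiency ~", round(100 * single_work / cluster_work), "%")  # ~24%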