[lug] WAS, can't make this stuff up, folks... any OpenCL folks out there?

Davide Del Vento davide.del.vento at gmail.com
Wed Oct 28 15:44:34 MDT 2009


I haven't developed with OpenCL at all, but I have attended several
conference talks about it. In a nutshell:

1) it's good when the floating point operation count is really large and
bad at everything else (like ifs). It is *especially* bad when a lot of
data needs to be transferred and very little computation is happening on
it (like in this case: very large data, lots of ifs, and basically the
computation is an n=n+1 - see the first sketch after this list)

2) because of 1) it completely twists your knowledge of computer
science, which you usually built on the assumption that floating point
operations are expensive compared to everything else. On a GPU, floating
point operations are so cheap (and everything else is so expensive -
especially memory access outside the tiny-tiny "cache") that the most
performant algorithms are the ones that do-and-re-do the same
computation in different places, instead of (say) saving it somewhere
and reading it back when needed later. If you are a long-time developer,
that's a really weird thing to do, and it's a serious case of
"everything you know is wrong" (for example the fastest sorting
algorithm on GPUs is not one of the top 5 fastest sorting algorithms
you'd use on a CPU). The second kernel sketch after this list shows the
shape of that trade-off.

3) OpenCL is very difficult - for example it is much more difficult than
CUDA (which I believe is not easy either). The host-code sketch after
this list gives an idea of the ceremony involved.

4) OpenCL is much easier than what was available before (like
hijacking OpenGL to do GPGPU)

5) by design, OpenCL works on different hardware (unlike CUDA), BUT the
performance tuning is NOT automatic, i.e. the fastest algorithm on your
hardware may be far from optimal on different hardware. Closely related
to 2): there is only so much you can abstract! If your hardware is
faster at different things than another vendor's hardware, it's obvious
that you'll really need different algorithms if you switch vendor, even
though the code itself is "portable". Otherwise you won't get the
performance you're after (missing it by a very large margin, not just a
few percent). Even a simple knob like the work-group size has to be
re-tuned per device - see the last snippet after this list.

6) if you don't have anything better to do in your life, it is worth
playing with, since it can surely put a tremendous amount of power at
your fingertips: for example you can get a TeraFlop from an nVidia GPU
for slightly more than $1k and less than 200W - you would need about 20
Intel “Nehalem” Core i7 Extreme chips to match that floating point
power, at about 10 times the cost and the power consumption
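
To give an idea of what I mean in 1): the kernel for an "n=n+1 over a
huge array" job is literally one add per element, so there is nothing to
hide the cost of shipping the data across the PCIe bus. Something like
this (untested, just from the talks I've attended; the names are made up):

    /* one work-item per element; a single add, completely memory-bound */
    __kernel void increment(__global float *data, const unsigned int n)
    {
        size_t i = get_global_id(0);
        if (i < n)
            data[i] = data[i] + 1.0f;   /* basically n = n + 1 */
    }

Back of the envelope: 4GB of data over a PCIe link that manages maybe
5GB/s in practice is on the order of a second each way just for the
copies, while the kernel itself runs in a small fraction of that time -
the GPU never gets a chance to pay back the transfer.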
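
And for 2), a toy example of the "recompute instead of store" mentality
(made-up math, untested - it's only meant to show the shape of the
trade-off):

    /* CPU-style thinking: precompute a table of weights once, then read
     * it back from global memory for every element */
    __kernel void apply_weights_lookup(__global const float *x,
                                       __global const float *weight_table,
                                       __global float *y,
                                       const unsigned int n)
    {
        size_t i = get_global_id(0);
        if (i < n)
            y[i] = x[i] * weight_table[i % 1024];  /* extra memory traffic */
    }

    /* GPU-style thinking: just re-do the math for every element; the
     * flops are cheaper than the extra trip to global memory */
    __kernel void apply_weights_recompute(__global const float *x,
                                          __global float *y,
                                          const unsigned int n)
    {
        size_t i = get_global_id(0);
        if (i < n) {
            float w = native_cos((float)(i % 1024) * 0.006136f); /* ~2*pi/1024 */
            y[i] = x[i] * w;
        }
    }

Which one wins depends on the device, which is exactly point 5).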
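
About 3), here is roughly what the host side looks like just to run the
one-line increment kernel above (an untested sketch pieced together from
talks and the spec, with all error checking stripped - in CUDA most of
this collapses into a cudaMalloc, a cudaMemcpy and a single
kernel<<<grid,block>>>() launch):

    #include <CL/cl.h>
    #include <stdio.h>
    #include <stdlib.h>

    static const char *src =
        "__kernel void increment(__global float *d, const unsigned int n) {\n"
        "    size_t i = get_global_id(0);\n"
        "    if (i < n) d[i] = d[i] + 1.0f;\n"
        "}\n";

    int main(void)
    {
        const unsigned int n = 1024 * 1024;
        float *host = malloc(n * sizeof(float));
        for (unsigned int i = 0; i < n; i++) host[i] = (float)i;

        /* pick a platform and a GPU device */
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

        /* context + command queue */
        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

        /* compile the kernel source at runtime */
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "increment", NULL);

        /* copy in, run, copy out - this is where point 1) bites for big data */
        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                    n * sizeof(float), host, NULL);
        clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
        clSetKernelArg(k, 1, sizeof(unsigned int), &n);

        size_t global = n;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, buf, CL_TRUE, 0, n * sizeof(float), host,
                            0, NULL, NULL);

        printf("host[0] = %f\n", host[0]);   /* expect 1.0 */

        clReleaseMemObject(buf);
        clReleaseKernel(k);
        clReleaseProgram(prog);
        clReleaseCommandQueue(q);
        clReleaseContext(ctx);
        free(host);
        return 0;
    }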
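
And for 5), even that "portable" host code has per-device knobs in it. I
left the local work-group size as NULL (let the runtime choose), but to
get real performance you'd query and tune it per device, something like
this (again untested, tacking onto the sketch above):

    /* ask the runtime how big a work-group this kernel can use on this
     * device, pick a size, and pad the global size to a multiple of it;
     * the if (i < n) guard in the kernel makes the padding safe, and the
     * size that is actually fastest still has to be found by benchmarking
     * on each vendor's hardware */
    size_t max_wg;
    clGetKernelWorkGroupInfo(k, device, CL_KERNEL_WORK_GROUP_SIZE,
                             sizeof(max_wg), &max_wg, NULL);
    size_t local = (max_wg >= 256) ? 256 : max_wg;
    size_t global_padded = ((n + local - 1) / local) * local;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global_padded, &local,
                           0, NULL, NULL);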

Thus, I'll continue to program for CPUs :-)

Bye,
;Dav

On Wed, Oct 28, 2009 at 12:04,  <stimits at comcast.net> wrote:
> ...
> tell "the customer", since he just wanted an image. My scripts worked
> ok. Until eventually I didn't get any plot at all when my "some" data
> reached 4GB (I was plotting on my 32bit laptop). To get the plots in
> time to the f*** customer, the less painful solution was to install an
> Ubuntu 64 bit on a spare desktop (with 1GB of  physical RAM), enabling
> the largest available partition as swap (48GB) and let it chew for one
> night-per-plot.
> ...
>
> A long time ago, I had suggested to a colleague that on some of our large
> data projects, it would be fascinating to offload some image processing
> functions to the GPU of what was then a brand new generation of high speed
> video cards. We were thinking about all kinds of things which were not
> practical due to the software interfaces, such as geometric computations,
> security hash generation, and image manipulation. Well, there is now OpenCL:
> http://en.wikipedia.org/wiki/OpenCL
> http://www.khronos.org/opencl/
>
> I have not yet touched OpenCL, but issues like the above plot taking so
> long would make an interesting candidate (OpenCL is more or less a uniform
> interface to non-uniform hardware, especially video cards...much the way
> OpenGL is a similar interface specialized for graphics). Has anyone
> here experimented with OpenCL on Linux?
>
> Dan Stimits, stimits AT comcast DOT net
>
> _______________________________________________
> Web Page:  http://lug.boulder.co.us
> Mailing List: http://lists.lug.boulder.co.us/mailman/listinfo/lug
> Join us on IRC: lug.boulder.co.us port=6667 channel=#hackingsociety
>


