
GPGPU acceleration?


Billy:

--- Quote from: Numsgil on April 28, 2017, 05:58:10 AM ---Problem is GPUs only benefit when they can do the same operation (add, multiply, etc.) in parallel.  Even if you put the DNA on the GPUs somehow, you still can't execute more than one DNA program at a time.

It's a problem of SIMD vs. MIMD.  GPUs need SIMD to overcome the additional cost of transferring the data to/from the GPU and starting up a processing batch, but most of Darwinbots is either SISD or MIMD.  The DNA execution is MIMD, certainly.  The physics has passes of MIMD but then everything bottlenecks in places through a SISD section.  In that sort of problem, CPUs are still king.

--- End quote ---
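To make the quoted SIMD-vs-MIMD point concrete, here is a tiny CUDA sketch (the opcodes, numbers, and names are invented for illustration): if every bot's dna_op is the same, a warp takes one path in lockstep, but per-bot branching makes the hardware run each path in turn, and the buffer copies are the per-batch transfer/startup overhead mentioned above.

--- Code: ---
#include <cuda_runtime.h>

__global__ void update_bots(const int *dna_op, float *energy, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    // Same dna_op for every bot in a warp -> one path, SIMD-friendly.
    // Different dna_op per bot -> each branch executed in turn (the MIMD penalty).
    if (dna_op[i] == 0)      energy[i] += 1.0f;   // e.g. "feed"
    else if (dna_op[i] == 1) energy[i] -= 0.5f;   // e.g. "move"
    else                     energy[i] *= 0.99f;  // e.g. idle decay
}

int main() {
    const int n = 1 << 16;
    int *dna_op; float *energy;
    cudaMalloc((void **)&dna_op, n * sizeof(int));
    cudaMalloc((void **)&energy, n * sizeof(float));
    // Filling these buffers and copying results back with cudaMemcpy is the
    // transfer/startup cost that each batch has to amortise.
    update_bots<<<(n + 255) / 256, 256>>>(dna_op, energy, n);
    cudaDeviceSynchronize();
    cudaFree(dna_op);
    cudaFree(energy);
    return 0;
}
--- End code ---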

I guess you're probably right. I was thinking that if 90% of the bots have very similar DNA and are doing exactly the same activity, there would be very little divergence and it would effectively be SIMD. I'd still like to make a prototype, maybe with some minimal language rather than DNA, so I can compare its performance against a CPU version.
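A minimal sketch of what such a prototype kernel might look like, assuming an invented three-opcode "language" in place of DNA: one thread per bot, all running the same shared program, so identical programs stay in lockstep and only the per-bot data differs.

--- Code: ---
#include <cuda_runtime.h>

// Invented stand-in for DNA: a tiny stack-machine program shared by all bots.
enum Op { PUSH_CONST, ADD, STORE_OUT };
struct Instr { int op; float arg; };

__global__ void run_bots(const Instr *prog, int prog_len,
                         const float *sensor, float *result, int n) {
    int bot = blockIdx.x * blockDim.x + threadIdx.x;
    if (bot >= n) return;

    float stack[8];
    int sp = 0;
    stack[sp++] = sensor[bot];              // per-bot data differs...

    for (int pc = 0; pc < prog_len; ++pc) { // ...but the program is shared,
        switch (prog[pc].op) {              // so the warp stays in lockstep
            case PUSH_CONST: stack[sp++] = prog[pc].arg;       break;
            case ADD:        --sp; stack[sp - 1] += stack[sp]; break;
            case STORE_OUT:  result[bot] = stack[--sp];        break;
        }
    }
}
--- End code ---

Timing that against a plain CPU loop over the same bots would give the comparison I'm after.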

Botsareus: what kind of thing do you think would be better? A chip with lots of simple MIMD cores?

Botsareus:
Do not care about what MIMD stands for. In general a lot of small CPU cores compute a lot of small parallel instruction sets really quickly. Then have one huge chip like the new Intel shit super cooled for large instruction sets.

Botsareus:
The problem is that no one has figured out how to place different-sized CPUs on the same board.

Billy:

--- Quote from: Botsareus on April 28, 2017, 11:23:33 AM ---Do not care about what MIMD stands for. In general a lot of small CPU cores compute a lot of small parallel instruction sets really quickly. Then have one huge chip like the new Intel shit super cooled for large instruction sets.

--- End quote ---

MIMD just means that each core runs its own program with its own data, so yeah, that's pretty much what I was getting at. I guess the issue with that is expense. Each core having its own control unit would cost silicon relative to a GPU, so it might work out better to just use the larger, faster cores in today's CPUs. Perhaps not, though; it would be very handy for massively parallel problems where GPUs can't be used.

By the way, 'instruction set' has a specific meaning that is different, I think, from how you're using the term. A CPU has one instruction set, which is just the set of instructions it can process (add, load, store, and so on). When source code is compiled, it is translated into the instruction set of the platform it's targeting (e.g. x86-64 or ARM). Each platform has its own assembly language too, where you can write a program directly using the instructions from that platform's instruction set.
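For example, the same one-line C function gets translated into a different instruction set depending on the target; the assembly below is roughly what an optimising compiler emits (exact output varies by compiler and flags).

--- Code: ---
int add(int a, int b) {
    return a + b;
}

/* x86-64 (instructions from the x86-64 instruction set):
       lea  eax, [rdi + rsi]
       ret
   AArch64 (the equivalent instructions from ARM's instruction set):
       add  w0, w0, w1
       ret                                                           */
--- End code ---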

Botsareus:
I get that I am not good with words; English is not my native language.
However, the point still stands that no one has figured it out. It would be the perfect architecture, because each master process could just assign threads or processes or whatever to different-sized CPUs. Or just hack it into a programming language, something like "start a new n-speed thread".
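There is no standard way to ask for a "fast" or "slow" core today, but a rough approximation of the "start a new n-speed thread" idea is pinning a thread to a chosen core. This sketch uses Linux's pthread_setaffinity_np through std::thread's native handle; the core numbers and the helper name are made up for the example.

--- Code: ---
#include <thread>
#include <pthread.h>
#include <sched.h>
#include <cstdio>

// Hypothetical helper: start a worker pinned to core `core_id`.
std::thread start_thread_on_core(int core_id) {
    std::thread t([] {
        // ... the actual work (DNA execution, physics, ...) would go here ...
        std::printf("running on core %d\n", sched_getcpu());
    });
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core_id, &set);
    pthread_setaffinity_np(t.native_handle(), sizeof(set), &set); // Linux-specific
    return t;
}

int main() {
    std::thread big    = start_thread_on_core(0); // pretend core 0 is a big/fast core
    std::thread little = start_thread_on_core(3); // pretend core 3 is a small/slow core
    big.join();
    little.join();
    return 0;
}
--- End code ---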
