GPEmu: A GPU emulator for rapid, low-cost deep learning prototyping [pdf] (vldb.org)
53 points by matt_d 14 hours ago | 10 comments
mdaniel 11 hours ago [-]
Sadly, there is no licensing in the repo, and I have no idea what licensing weight the associated fields in setup.py carry <https://github.com/mengwanguc/gpemu/blob/27e9534ee0c3d594030...>
0points 6 hours ago [-]
MIT is a well-known FOSS license.
devturn 4 hours ago [-]
Nobody here is doubting that. Your parent comment said:

> I have no idea what the licensing weight the associated fields in setup.py carry

That's a valid concern. I had the same question myself.
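For what it's worth, the fields in question are the setuptools `license` keyword and the trove classifier. A minimal sketch of what that metadata looks like (illustrative only, not the project's actual setup.py):

    # Illustrative sketch -- not the actual gpemu setup.py.
    from setuptools import setup

    setup(
        name="gpemu",
        version="0.1.0",
        license="MIT",  # free-text license field in package metadata
        classifiers=[
            # trove classifier that package indexes display
            "License :: OSI Approved :: MIT License",
        ],
    )

Whether metadata like this, without a LICENSE file in the repo, amounts to an actual grant of permission is exactly the open question.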

immibis 1 hour ago [-]
The relevant weight is: if the author of the copyrighted work sues you in a court of law, will the evidence convince the judge that the author gave you permission to use it the way you did?
Voloskaya 2 hours ago [-]
I was thinking about building something like this, because it would be *very useful* if it worked well, so I got excited for a sec. But this does not seem to be an active project; the last commit was 10 months ago.
Retr0id 1 hours ago [-]
Does it not work out more expensive to emulate a GPU vs just renting time on a real one?
Voloskaya 20 minutes ago [-]
This isn't actually an emulator in the proper sense of the word. It does not give you correct outputs, but it will try to simulate the actual time it would take a real GPU to perform the series of operations you care about.

This could be useful e.g. for performance profiling, optimization etc.
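As a rough sketch of the idea (my own illustration, assuming a table of pre-measured per-step times; not GPEmu's actual API):

    import time

    # Hypothetical per-batch step times in seconds, measured once on a real GPU.
    PROFILED_TIMES = {"forward": 0.012, "backward": 0.025, "optimizer_step": 0.003}

    def emulated_step(name):
        """Stand in for a GPU-bound step by sleeping for its projected duration."""
        time.sleep(PROFILED_TIMES[name])

    # One emulated training iteration: wall-clock time approximates a real GPU,
    # but no actual outputs are computed.
    for step in ("forward", "backward", "optimizer_step"):
        emulated_step(step)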

MangoToupe 27 minutes ago [-]
I imagine this is only true for high throughput loads. For development a full GPU is likely a waste.
socalgal2 9 hours ago [-]
What is the difference between a GPU emulator in general, or GPEmu specifically, and, say, llvmpipe?
almostgotcaught 11 hours ago [-]
> To emulate DL workloads without actual GPUs, we replace GPU-related steps (steps #3–5, and step #2 if GPU-based) with simple sleep(T) calls, where T represents the projected time for each step.

This is a model (of GPU arch/system/runtime/etc.) being used to feed downstream analysis. Pretty silly, because if you're going to model these things (which are extremely difficult to model!), you should at least have real GPUs around to calibrate/recalibrate the model.
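To make the calibration point concrete, here's a hypothetical sketch (mine, not from the paper) of timing real steps with CUDA events and refreshing the projected times the emulator would sleep for:

    import torch

    def measure_gpu_step(fn, warmup=3, iters=20):
        """Average seconds per call of a GPU-bound callable (assumes a real GPU)."""
        for _ in range(warmup):
            fn()
        torch.cuda.synchronize()
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        for _ in range(iters):
            fn()
        end.record()
        torch.cuda.synchronize()
        return start.elapsed_time(end) / iters / 1000.0  # elapsed_time is in ms

    # Hypothetical recalibration: refresh the table the emulator sleeps from.
    model = torch.nn.Linear(1024, 1024).cuda()
    x = torch.randn(64, 1024, device="cuda")
    PROFILED_TIMES = {"forward": measure_gpu_step(lambda: model(x))}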