
java - What is the simplest way to run working CUDA code in Java?

Reposted · Author: 行者123 · Updated: 2023-11-30 11:40:16

I have some CUDA code written in C that seems to work fine (it is plain old C, not C++). I am running a Hadoop cluster and want to integrate my code into it, so ideally I would like to run it from Java (long story short: the system is too complex otherwise).

Currently the C program parses a log file a few thousand lines long, processes each line in parallel on the GPU, saves specific errors/transactions to a linked list, and writes them to disk.

What is the best way to do this? Is JCUDA a clean mapping of the C CUDA API, or something completely different? Or would it make sense to call the C code from Java and share the results (is the linked list accessible)?

Best Answer

IMO? JavaCPP. For example, here is a Java port of the sample shown on the main page of Thrust's Web site:

import com.googlecode.javacpp.*;
import com.googlecode.javacpp.annotation.*;

@Platform(include={"<thrust/host_vector.h>", "<thrust/device_vector.h>", "<thrust/generate.h>",
        "<thrust/sort.h>", "<thrust/copy.h>", "<thrust/reduce.h>", "<thrust/functional.h>",
        "<algorithm>", "<cstdlib>"})
@Namespace("thrust")
public class ThrustTest {
    static { Loader.load(); }

    public static class IntGenerator extends FunctionPointer {
        static { Loader.load(); }
        protected IntGenerator() { allocate(); }
        private native void allocate();
        public native int call();
    }

    @Name("plus<int>")
    public static class IntPlus extends Pointer {
        static { Loader.load(); }
        public IntPlus() { allocate(); }
        private native void allocate();
        public native @Name("operator()") int call(int x, int y);
    }

    @Name("host_vector<int>")
    public static class IntHostVector extends Pointer {
        static { Loader.load(); }
        public IntHostVector() { allocate(0); }
        public IntHostVector(long n) { allocate(n); }
        public IntHostVector(IntDeviceVector v) { allocate(v); }
        private native void allocate(long n);
        private native void allocate(@ByRef IntDeviceVector v);

        public IntPointer begin() { return data(); }
        public IntPointer end() { return data().position((int)size()); }

        public native IntPointer data();
        public native long size();
        public native void resize(long n);
    }

    @Name("device_ptr<int>")
    public static class IntDevicePointer extends Pointer {
        static { Loader.load(); }
        public IntDevicePointer() { allocate(null); }
        public IntDevicePointer(IntPointer ptr) { allocate(ptr); }
        private native void allocate(IntPointer ptr);

        public native IntPointer get();
    }

    @Name("device_vector<int>")
    public static class IntDeviceVector extends Pointer {
        static { Loader.load(); }
        public IntDeviceVector() { allocate(0); }
        public IntDeviceVector(long n) { allocate(n); }
        public IntDeviceVector(IntHostVector v) { allocate(v); }
        private native void allocate(long n);
        private native void allocate(@ByRef IntHostVector v);

        public IntDevicePointer begin() { return data(); }
        public IntDevicePointer end() { return new IntDevicePointer(data().get().position((int)size())); }

        public native @ByVal IntDevicePointer data();
        public native long size();
        public native void resize(long n);
    }

    public static native @MemberGetter @Namespace IntGenerator rand();
    public static native void copy(@ByVal IntDevicePointer first, @ByVal IntDevicePointer last, IntPointer result);
    public static native void generate(IntPointer first, IntPointer last, IntGenerator gen);
    public static native void sort(@ByVal IntDevicePointer first, @ByVal IntDevicePointer last);
    public static native int reduce(@ByVal IntDevicePointer first, @ByVal IntDevicePointer last,
            int init, @ByVal IntPlus binary_op);

    public static void main(String[] args) {
        // generate 32M random numbers serially
        IntHostVector h_vec = new IntHostVector(32 << 20);
        generate(h_vec.begin(), h_vec.end(), rand());

        // transfer data to the device
        IntDeviceVector d_vec = new IntDeviceVector(h_vec);

        // sort data on the device (846M keys per second on GeForce GTX 480)
        sort(d_vec.begin(), d_vec.end());

        // transfer data back to host
        copy(d_vec.begin(), d_vec.end(), h_vec.begin());

        // compute sum on device
        int x = reduce(d_vec.begin(), d_vec.end(), 0, new IntPlus());
    }
}

Your C code should be even easier to map, though.
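On the question of whether the C linked list would be accessible from Java: not directly. Whether you use JavaCPP, JCuda, or plain JNI, the usual pattern is to flatten the results into a Java array (or write them to a file) on the native side before crossing the boundary, since only flat, JNI-friendly types such as primitive arrays, String[], and direct buffers transfer cheaply. Below is a minimal sketch of what the Java-side contract could look like; the class and method names are hypothetical, and the native call is stubbed out in pure Java here only to show the shape:

```java
// Hypothetical Java-side contract for the log processor.
// In a real build, processLog would be declared as
//     public static native String[] processLog(String path);
// and the C implementation would walk its linked list, building a
// jobjectArray of strings via NewObjectArray/NewStringUTF before
// returning. It is stubbed in pure Java here as an illustration.
public class LogProcessor {
    public static String[] processLog(String path) {
        // Stand-in for the CUDA-backed native implementation: each element
        // represents one error/transaction record extracted from the log.
        return new String[] { "ERROR: line 42", "TXN: line 107" };
    }

    public static void main(String[] args) {
        for (String record : processLog("sample.log")) {
            System.out.println(record);
        }
    }
}
```

The linked list itself stays on the C side; Java only ever sees the flattened copy.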

We can compile and run this on Linux x86_64 with the commands below, or on other supported platforms by modifying the -properties option appropriately:

$ javac -cp javacpp.jar ThrustTest.java
$ java -jar javacpp.jar ThrustTest -properties linux-x86_64-cuda
$ java -cp javacpp.jar ThrustTest

Regarding "java - What is the simplest way to run working CUDA code in Java?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/12843758/
