The work that will be done should also clear the path for other GPGPU platforms in the future.

## Why should it be put into place?

- First steps to basic support of the feature have already been made by individual developers.
+ The WG is already in place as an unofficial WG; see our [Github], [Roadmap], [Zulip], and some PRs by WG members: [rust@denzp], [rust@peterhj], [stdsimd], [libc], etc.
+
+ Being an official WG would allow us to use official communication channels such as the rust-lang Zulip, which would give us more visibility and help us attract more contributors.
Despite recent progress, there are still many open questions about the safety and soundness of SIMT (Single Instruction, Multiple Threads) code, and answering them will require a lot of collaboration.
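To make those soundness questions concrete, here is a minimal sketch (not taken from the proposal) of what device-side Rust looks like today on the nightly `nvptx64-nvidia-cuda` target. The feature gates and the `core::arch::nvptx` intrinsics shown here are unstable and may differ between toolchain versions; the point is only that every kernel is currently an `unsafe fn` over raw device pointers, with bounds checks, aliasing, and cross-thread data races left entirely to the programmer.

```rust
// Minimal sketch of a device-side kernel for the nightly `nvptx64-nvidia-cuda`
// target. Feature-gate names (`abi_ptx`, `stdsimd`) and the `core::arch::nvptx`
// intrinsic paths are unstable and may differ between nightly versions.
#![no_std]
#![feature(abi_ptx, stdsimd)]

use core::arch::nvptx;

// Element-wise `out[i] = x[i] + y[i]`. Everything here is `unsafe`: the kernel
// receives raw device pointers, the only bounds check is the manual `i < n`
// guard, and nothing prevents data races on `out` between threads or launches --
// exactly the kind of soundness question the WG wants to answer.
#[no_mangle]
pub unsafe extern "ptx-kernel" fn add(x: *const f32, y: *const f32, out: *mut f32, n: usize) {
    let i = (nvptx::_block_idx_x() as usize) * (nvptx::_block_dim_x() as usize)
        + nvptx::_thread_idx_x() as usize;
    if i < n {
        *out.add(i) = *x.add(i) + *y.add(i);
    }
}

// `no_std` crates need a panic handler; on the device we simply spin.
#[panic_handler]
fn panic(_info: &core::panic::PanicInfo) -> ! {
    loop {}
}
```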
## What is the working group about?

- We want to work together on getting a solid foundation for writing CUDA code.
+ Right now, this WG is about making the already-existing Rust CUDA support minimally reliable.

Getting a safe and production-ready development experience on Stable Rust is our primary goal!

Another major obstacle to developing GPGPU applications and algorithms in Rust is currently the lack of learning resources.
@@ -22,11 +33,11 @@ We plan to solve the "documentation debt" with a broad range of tutorials, examp
The WG is not focused on promoting or developing "standard" frameworks.
Instead, we want to provide basic and reliable support for the feature and inspire the community to start using it.
- This should lead to experimenting with different approaches on how to use it and creating awesome tooling.
+ This WG is only about CUDA support; other GPGPU targets are out of scope. Our focus is on making the current CUDA target more reliable. Everything that goes beyond that (e.g. higher-level CUDA libraries, CUDA frameworks, etc.) is also out of scope.

## Is your WG long-running or temporary?

- In our current vision, the WG should live until we fulfill our goals.
+ The CUDA WG is long-running. We have a [Roadmap] for an MVP, but there are many issues worth solving once that MVP is achieved (e.g. foundational libraries).

In the end, we hope the WG will evolve into another one to cover similar topics:
to support other GPGPU platforms or to create higher-level frameworks that improve the end-to-end experience, based on community feedback.
@@ -80,7 +91,7 @@ Excessive learning materials and retrospective about made decisions should help
## Everything that is already decided upon

- We already have a [`rust-cuda`](https://github.com/rust-cuda) Github organization and a [`rust-cuda`](https://rust-cuda.zulipchat.com) Zulip server.
+ We work in the open; see our [Github].

> TBD... would it make sense to move to a `rust-lang` Zulip server?
@@ -90,4 +101,4 @@ We already have a [`rust-cuda`](https://github.com/rust-cuda) Github organizatio