Commit 15b1511

[GPU Backend] [Doc]: Remove duplicate statements on missing GPU wheels. (#29962)
Signed-off-by: Ioana Ghiban <[email protected]>
1 parent b78772c · commit 15b1511

File tree

2 files changed: 0 additions, 6 deletions

docs/getting_started/installation/gpu.rocm.inc.md

Lines changed: 0 additions, 3 deletions

@@ -5,9 +5,6 @@ vLLM supports AMD GPUs with ROCm 6.3 or above, and torch 2.8.0 and above.
 !!! tip
     [Docker](#set-up-using-docker) is the recommended way to use vLLM on ROCm.
 
-!!! warning
-    There are no pre-built wheels for this device, so you must either use the pre-built Docker image or build vLLM from source.
-
 # --8<-- [end:installation]
 # --8<-- [start:requirements]

docs/getting_started/installation/gpu.xpu.inc.md

Lines changed: 0 additions, 3 deletions

@@ -2,9 +2,6 @@
 
 vLLM initially supports basic model inference and serving on Intel GPU platform.
 
-!!! warning
-    There are no pre-built wheels for this device, so you need build vLLM from source. Or you can use pre-built images which are based on vLLM released versions.
-
 # --8<-- [end:installation]
 # --8<-- [start:requirements]
