
Commit 0fd0b03

Update documentation, update pre-commit and linter/formatter, some minor refactorings (#70)
1 parent bde0dd8 commit 0fd0b03

33 files changed: 323 additions, 253 deletions


.pre-commit-config.yaml

Lines changed: 3 additions & 3 deletions
@@ -19,12 +19,12 @@ repos:
         args: ['--no-sort-keys', "--autofix"]
 
   - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.6.2
+    rev: v0.14.6
     hooks:
-      - id: ruff
+      - id: ruff-check # Run the linter.
         args: [--fix]
         exclude: __init__.py$
-      - id: ruff-format
+      - id: ruff-format # Run the formatter.
         exclude: __init__.py$
   - repo: https://github.com/kynan/nbstripout
     rev: 0.8.1

docs/README.md

Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
+You can install the relevant dependencies for editing the public documentation via:
+```sh
+pip install -e .[docs]
+```
+
+It is recommended to use [sphinx-autobuild](https://github.com/sphinx-doc/sphinx-autobuild) (installed above) to edit and view the documentation. You can run:
+
+```sh
+sphinx-autobuild docs docs/_build/html
+```

docs/contributing.md

Lines changed: 108 additions & 0 deletions
@@ -0,0 +1,108 @@
+## Contributing
+
+Contributions to 123D are highly encouraged! This guide will help you get started with the development process.
+
+### Getting Started
+
+#### 1. Clone the Repository
+
+```sh
+git clone git@github.com:autonomousvision/py123d.git
+cd py123d
+```
+
+#### 2. Installation
+
+```sh
+conda create -n py123d_dev python=3.12 # Optional
+conda activate py123d_dev
+pip install -e .[dev]
+pre-commit install
+```
+
+The above installation also sets up linting, formatting, and type-checking as pre-commit hooks.
+We use [`ruff`](https://docs.astral.sh/ruff/) as the linter/formatter, which you can run via:
+```sh
+ruff check --fix .
+ruff format .
+```
+Type checking is not strictly enforced, but is ideally added with [`pyright`](https://github.com/microsoft/pyright).
+
+#### 3. Managing dependencies
+
+We try to keep dependencies minimal to ensure quick and easy installations.
+However, various datasets require additional dependencies in order to load or preprocess the data.
+In this case, you can add optional dependencies to the `pyproject.toml` install file.
+You can follow the examples of nuPlan or nuScenes. These optional dependencies can be installed with
+
+```sh
+pip install -e .[dev,nuplan,nuscenes]
+```
+where you can combine the different optional dependencies.
+
+The optional dependencies should only be required for data pre-processing.
+When writing a dataset conversion method, you can check whether the necessary dependencies are installed by calling the `check_dependencies` function.
+
+```python
+from py123d.common.utils.dependencies import check_dependencies
+
+check_dependencies(["optional_package_a", "optional_package_b"], "optional_dataset")
+import optional_package_a
+import optional_package_b
+
+def load_camera_from_outdated_dataset(...) -> ...:
+    optional_package_a.module(...)
+    optional_package_b.module(...)
+```
+This will notify the user if `optional_dataset` is not included in the 123D install.
+
+Also ensure that functions/modules that require optional installs are only imported when necessary, e.g.:
+
+```python
+def load_camera_from_file(file_path: str, dataset: str) -> ...:
+    ...
+    if dataset == "optional_dataset":
+        from py123d.some_module import load_camera_from_outdated_dataset
+
+        return load_camera_from_outdated_dataset(...)
+    ...
+```
+
+#### 4. Other useful tools
+
+If you are using VSCode, it is recommended to install:
+- [autodocstring](https://marketplace.visualstudio.com/items?itemName=njpwerner.autodocstring) - Generates docstrings (please set `"autoDocstring.docstringFormat": "sphinx-notypes"`).
+- [Code Spell Checker](https://marketplace.visualstudio.com/items?itemName=streetsidesoftware.code-spell-checker) - A basic spell checker.
+
+Or other similar plugins, depending on your preference/editor.
+
+### Documentation Requirements
+
+#### Docstrings
+- **Development:** Docstrings are encouraged but not strictly required during active development.
+- **Format:** Use [Sphinx-style docstrings](https://sphinx-rtd-tutorial.readthedocs.io/en/latest/docstrings.html).
+
+#### Sphinx documentation
+
+All datasets should be included in the `/docs/datasets/` documentation. Please follow the documentation format of the other datasets.
+
+You can install the relevant dependencies for editing the public documentation via:
+```sh
+pip install -e .[docs]
+```
+
+It is recommended to use [sphinx-autobuild](https://github.com/sphinx-doc/sphinx-autobuild) (installed above) to edit and view the documentation. You can run:
+```sh
+sphinx-autobuild docs docs/_build/html
+```
+
+### Adding new datasets
+TODO
+
+### Questions?
+
+If you have any questions about contributing, please open an issue or reach out to the maintainers.
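
For illustration, a helper like `check_dependencies` could be implemented roughly as follows. This is a minimal sketch only; the actual behavior of `py123d.common.utils.dependencies.check_dependencies` may differ, and the error message here is an assumption.

```python
import importlib.util


def check_dependencies(packages: list[str], extra: str) -> None:
    """Raise an informative ImportError if any optional package is missing.

    Minimal sketch of the pattern; py123d's real helper may behave differently.
    """
    # find_spec returns None for top-level modules that are not installed,
    # without importing them.
    missing = [pkg for pkg in packages if importlib.util.find_spec(pkg) is None]
    if missing:
        raise ImportError(
            f"Missing optional dependencies {missing} for '{extra}'. "
            f"Install them via: pip install -e .[{extra}]"
        )
```

Called at module import time, a helper like this fails fast with an actionable message instead of an opaque `ModuleNotFoundError` deeper inside the conversion code.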

docs/datasets/av2.rst

Lines changed: 5 additions & 5 deletions
@@ -149,27 +149,27 @@ The downloaded dataset should have the following structure:
 Installation
 ~~~~~~~~~~~~
 
-No additional installation steps are required beyond the standard ``py123d`` installation.
+No additional installation steps are required beyond the standard ``py123d`` installation.
 
 
 Conversion
 ~~~~~~~~~~
 
-To run the conversion, you either need to set the environment variable ``$AV2_SENSOR_ROOT`` or ``$AV2_SENSOR_ROOT``.
+To run the conversion, you either need to set the environment variable ``$AV2_DATA_ROOT`` or ``$AV2_SENSOR_ROOT``.
 You can also override the file path and run:
 
 .. code-block:: bash
 
    py123d-conversion datasets=["av2_sensor_dataset"] \
-       dataset_paths.av2_sensor_data_root=$AV2_SENSOR_ROOT # optional if env variable is set
-
+       dataset_paths.av2_data_root=$AV2_DATA_ROOT # optional if env variable is set
 
 
 Dataset Issues
 ~~~~~~~~~~~~~~
 
-n/a
+- **Ego Vehicle:** The vehicle parameters are partially estimated and may be subject to inaccuracies.
+
 
 
 Citation

docs/datasets/carla.rst

Lines changed: 3 additions & 7 deletions
@@ -3,6 +3,7 @@ CARLA
 
 CARLA is an open-source simulator for autonomous driving research.
 As such CARLA data is synthetic and can be generated with varying sensor and environmental conditions.
+The following documentation is largely incomplete and merely describes the provided demo data.
 
 .. dropdown:: Quick Links
    :open:
@@ -46,7 +47,7 @@ Available Modalities
      - Depending on the collected dataset. For further information, see :class:`~py123d.datatypes.detections.BoxDetectionWrapper`.
    * - Traffic Lights
      - X
-     - TODO
+     - n/a
    * - Pinhole Cameras
      - ✓
      - Depending on the collected dataset. For further information, see :class:`~py123d.datatypes.sensors.PinholeCamera`.
@@ -90,12 +91,7 @@ Dataset Specific
 Dataset Issues
 ~~~~~~~~~~~~~~
 
-[Document any known issues, limitations, or considerations when using this dataset]
-
-* Issue 1: Description
-* Issue 2: Description
-* Issue 3: Description
-
+n/a
 
 Citation
 ~~~~~~~~

docs/datasets/index.rst

Lines changed: 2 additions & 1 deletion
@@ -3,7 +3,8 @@ Datasets
 
 Brief overview of the datasets section...
 
-This section provides comprehensive documentation for various autonomous driving and computer vision datasets. Each dataset entry includes installation instructions, available data types, known issues, and proper citation formats.
+This section provides comprehensive documentation for various autonomous driving and computer vision datasets.
+Each dataset entry includes installation instructions, available data types, known issues, and references for further reading.
 
 .. toctree::
    :maxdepth: 1