From a44100018d32a5ab398e217485e2bc10e19053ee Mon Sep 17 00:00:00 2001
From: Roberto Gazia
Date: Mon, 1 Dec 2025 10:32:45 +0100
Subject: [PATCH 1/2] chore: remove design folder

---
 design/code-containers-release.md | 40 ------------------
 design/core_bricks.md             | 25 -----------
 design/declarative.md             | 70 -------------------------------
 design/imperative.md              | 20 ---------
 4 files changed, 155 deletions(-)
 delete mode 100644 design/code-containers-release.md
 delete mode 100644 design/core_bricks.md
 delete mode 100644 design/declarative.md
 delete mode 100644 design/imperative.md

diff --git a/design/code-containers-release.md b/design/code-containers-release.md
deleted file mode 100644
index c7dde3a7..00000000
--- a/design/code-containers-release.md
+++ /dev/null
@@ -1,40 +0,0 @@
-
-# Abstract
-
-We need to release some artifacts for different purposes (a logical list of what is required, not yet described as packages):
- - a WHL file containing the code, to be published on the Python package repository (PyPI)
- - a file containing a materialized list of modules (for AppsLab)
- - a file (or set of files) containing the configuration variables for each module (to be configured by AppsLab/the user)
- - a list of supported models (probably static, containing LLMs/AI models) that can be used/installed by the Lab
- - module code examples (for AppsLab)
- - container images to be used by the modules
-
-# Possible packages (release artifacts)
-
-How we can organize the above content:
- - 0..N containers will be developed internally on a private ECR and finally pushed to Docker Hub for public use
- - 1 WHL file published on PyPI
- - 1 YAML file for the available MODELS (static - for now only for Arduino modules, no custom ones)
-
-Options:
- - 1 overall index of the different libraries (not needed for OOTB, since there is only one. For custom libraries, it will list all the possible custom modules with a brief explanation of what each one is, a download link, and the PyPI package name -> needed to know what to add in requirements.txt)
- - for every library, 1 YAML file listing the modules available inside the library
- - a related archive (zip) with examples/variables, OR everything inside this YAML file
-
-# How to release
-
-## 1. Python module release process
-
-
-
-## How to release containers (how to release and update dependencies inside code)
-
-Options:
- - the version of the arduino_bricks library will be discovered at runtime and exported in the compose file as a variable called APPSLAB_LIB_VERSION. We then need to reference it, and it will be resolved while running compose. The version can be extracted at runtime from "pip show arduino_bricks" (to be checked how to do it in code)
-   - like: arduino/appslab-modules:models-runner-v${APPSLAB_LIB_VERSION}
-   - to pin a specific container, do not add any variable
-
-
-# NOTES from the DESIGN doc
-
-When I click on a brick, show the module description and info, RELATED models if available, and code snippets.
diff --git a/design/core_bricks.md b/design/core_bricks.md
deleted file mode 100644
index e72096f3..00000000
--- a/design/core_bricks.md
+++ /dev/null
@@ -1,25 +0,0 @@
-# Core bricks
-
-The following modules wrap system peripherals and re-expose them to the user without any need for driver and library configuration. These bricks will be available OOTB, without any explicit import.
-
-High priority (mostly MPU-native):
-- RPC
-- USBCamera (webcams)
-- XOutput (Xorg server)
-- AudioInput*
-- AudioOutput*
-- CSICamera (CSI interface)*
-- ScreenOutput (DSI interface)*
-- LED (the 2x MPU LEDs)*
-
-Low priority (mostly off-loaded to the MCU):
-- LED Matrix*
-- GPIO*
-- Analog I/O -> analogRead, analogReadResolution, analogWrite, analogWriteResolution, analogReference*
-- Digital I/O -> pinMode, digitalRead, digitalWrite*
-
-* not yet available or known
-
-## Assumptions
-
-/dev will be mounted inside the container with user capabilities. We'll also need the libraries for interacting with the peripherals (e.g. ALSA or V4L2) inside the container.
\ No newline at end of file
diff --git a/design/declarative.md b/design/declarative.md
deleted file mode 100644
index f968223a..00000000
--- a/design/declarative.md
+++ /dev/null
@@ -1,70 +0,0 @@
-# Declarative style design
-
-## Requirements
-
-1. Declarative programming style: the user declaratively defines the pipeline structure, e.g. via chaining or composition of the step classes.
-2. Java Stream API-like: handles parallelism, backpressure, and synchronization automatically.
-3. Invisible framework: the user code should not depend directly on the framework, if possible. The framework should wrap the logic and add asynchronicity.
-4. Maximum freedom, simplicity, reusability: as a consequence, nodes should be easy to develop and usable outside the framework.
-5. Encapsulation: the user defines "nodes of computation" that are plain Python classes unaware of the framework, of their asynchronous execution, and of other nodes.
-6. Single entry point: we want a single entry point, responsible for linking the nodes together, for orchestration, and for handling parallelism, backpressure, and synchronization.
-7. Simple API: the API should be easy to understand and implement.
-
-## Additional requirements
-
-1. Buffering <- covers the video streaming case; policy (block, or block with spill-over after a timeout (needed?), or spill-over when full)
-2. Rate limiting <- covers the case of the video camera rate (acquisition at a fixed rate) + API limits (block or spill-over when full)
-3. Track the state of a node (processing, idle) <- covers the LLM "thinking" case
-4. API ergonomics <- non-ergonomic source API? -> pipe.inject(...)?
-5. Multiple outputs (LLM streaming) => for a single input message, we might want to produce N messages -> return an array or an iterator?
-6. Re-expose the basic board peripherals as default modules => direct access to /dev? User and capabilities?
-7. Don't undermine the imperative version
-8. Branching / forking
-9. Merging?
-10. Execution statistics (latency, throughput, distributions, etc.)
-
-## Design
-
-- Pipeline: the Pipeline class will take a sequence of source -> processor -> sink classes as input. The order of these classes will define the order of the processing pipeline.
-- Source, Processor & Sink: since the framework shouldn't impose any inheritance or interface to implement, we need a convention for how the framework will interact with these classes. A simple approach is to expect each node to have a specific method that the framework can call to process an item. Let's call this method `produce` for sources, `process` for processors and `consume` for sinks. This method should take one input and return one output (or raise an exception).
-- Data flow: the framework will be responsible for passing data between the steps and should provide a way to capture the source data, process it in the processors, and consume the final output of the pipeline in the sink. This could involve using queues for internal communication and managing backpressure and asynchronicity. We also need to expose some tuning of the backpressure strategy, plus adapters to manage the data-model mismatch between these components.
-- Boot sequence: the Pipeline class will need a method to start the processing. The source can be a class that naturally produces data when polled (by blocking) or an iterable of data. For establishing connections or sessions outside constructors, we'll need an optional `start` method on the step classes.
-- Shutdown sequence: the Pipeline class will need a method to stop the processing. Since the steps might be stopped while processing or while blocked, we'll need an optional `stop` method on the step classes to handle resource cleanup, and a way to handle exceptions that might be generated by the processing part.
-
-## Example syntax
-### Defaults
-```python
-pipe = Pipeline()
-pipe.add_source(UserInputText())
-pipe.add_processor(WeatherForecast())
-pipe.add_processor(print)
-pipe.add_sink(DBStore())
-pipe.start()
-```
-
-### Adapters - explicit mapping
-```python
-def user_mapping_function(some_input: dict) -> dict:
-    # User defined logic
-    ...
-
-pipe = Pipeline()
-pipe.add_source(MQTTInput())
-pipe.add_processor(JSONParser()) # To properly filter the input
-# or
-pipe.add_processor(user_mapping_function) # To let the user inline a user-defined function
-pipe.add_processor(WeatherForecast())
-pipe.add_processor(print)
-pipe.add_sink(DBStore())
-pipe.start()
-```
-
-### Adapters - inline mapping
-```python
-pipe = Pipeline()
-pipe.add_source(UserInputText(), map = lambda x: x.strip())
-pipe.add_processor(WeatherForecast(), map = lambda x: x.temperature_c, max_rate = "1/s")
-pipe.add_processor(print)
-pipe.add_sink(DBStore(), map = lambda x: { "temperature_c": x.temperature_c, "time": datetime.now() })
-pipe.start()
-```
\ No newline at end of file
diff --git a/design/imperative.md b/design/imperative.md
deleted file mode 100644
index 79f03026..00000000
--- a/design/imperative.md
+++ /dev/null
@@ -1,20 +0,0 @@
-# Imperative style design
-
-## Requirements
-
-1. Imperative programming style
-2. Simple and straightforward: no need to adapt to a framework; the user "just writes" code in the most basic way
-
-## Example syntax
-```python
-userinput = UserTextInput()
-city = userinput.get()
-
-weather_forecast = WeatherForecast()
-forecast = weather_forecast.get_forecast_by_city(city)
-
-print(forecast.temperature_c)
-
-db = DBStore()
-db.save(forecast.temperature_c)
-```
\ No newline at end of file
From 92b47ae9c94b0665274afd4b28f43425447ff1f8 Mon Sep 17 00:00:00 2001
From: Roberto Gazia
Date: Mon, 1 Dec 2025 10:33:11 +0100
Subject: [PATCH 2/2] chore: ignore design folder

---
 .gitignore | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/.gitignore b/.gitignore
index 1ba029f8..caca47cb 100644
--- a/.gitignore
+++ b/.gitignore
@@ -178,10 +178,6 @@ _version.py
 TODO.*
 docs/
 
-# Examples
-*.db
-*.pem
-
 # Taskfile
 .keys/
 .task/
@@ -195,5 +191,5 @@ Taskfile.yml
 
 *.exe
 
-# Audio files
-*.wav
+# Other
+design/
\ No newline at end of file
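
The container-release notes deleted above leave the version lookup open ("to be checked how to do it in code"). A minimal sketch of that lookup using only the standard library, which reads the same metadata that `pip show arduino_bricks` would print; the helper name and the `"latest"` fallback tag are illustrative assumptions, not part of the design:

```python
import os
from importlib.metadata import PackageNotFoundError, version

def appslab_lib_version(package: str = "arduino_bricks") -> str:
    # Same information as `pip show <package>`, read directly from the
    # installed distribution metadata instead of shelling out to pip.
    try:
        return version(package)
    except PackageNotFoundError:
        return "latest"  # assumed fallback tag; not specified by the notes

# Export the value so compose can resolve
# arduino/appslab-modules:models-runner-v${APPSLAB_LIB_VERSION}
os.environ["APPSLAB_LIB_VERSION"] = appslab_lib_version()
```

Exporting the variable before invoking `docker compose` lets the image tag float with the installed library, while a tag written without the variable stays pinned, as the notes suggest.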
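
The declarative design deleted above (queues between duck-typed nodes, bounded buffering for backpressure, optional `start`/`stop` hooks) can be sketched roughly as follows. Only the names `Pipeline`, `add_source`/`add_processor`/`add_sink`, and `produce`/`process`/`consume` come from the document; thread-per-node execution, the shutdown sentinel, and the queue size are assumptions made for illustration:

```python
import queue
import threading
import time

_STOP = object()  # shutdown sentinel flushed through the queues

class Pipeline:
    """Links duck-typed nodes with bounded queues; runs one thread per node."""

    def __init__(self, maxsize: int = 16):
        self._steps = []          # (node, method name, optional adapter)
        self._threads = []
        self._stopped = False
        self._maxsize = maxsize   # bounded queues give backpressure for free

    def add_source(self, node, map=None):
        self._steps.append((node, "produce", map))

    def add_processor(self, node, map=None):
        self._steps.append((node, "process", map))  # plain callables work too

    def add_sink(self, node, map=None):
        self._steps.append((node, "consume", map))

    def start(self):
        qs = [queue.Queue(self._maxsize) for _ in range(len(self._steps) - 1)]
        for node, _, _ in self._steps:
            if hasattr(node, "start"):    # optional boot hook from the design
                node.start()
        for i, (node, role, map_fn) in enumerate(self._steps):
            # Duck typing: use the conventional method if present, else assume
            # the node itself is a callable (e.g. `print` as a processor).
            fn = getattr(node, role) if hasattr(node, role) else node
            inq = qs[i - 1] if i > 0 else None
            outq = qs[i] if i < len(qs) else None
            t = threading.Thread(target=self._run, args=(fn, map_fn, inq, outq),
                                 daemon=True)
            t.start()
            self._threads.append(t)

    def stop(self):
        self._stopped = True              # the source notices and emits _STOP
        for node, _, _ in self._steps:
            if hasattr(node, "stop"):     # optional cleanup hook from the design
                node.stop()

    def _run(self, fn, map_fn, inq, outq):
        if inq is None:                   # source node: poll until stopped
            while not self._stopped:
                item = fn()
                if map_fn is not None:
                    item = map_fn(item)
                outq.put(item)            # blocks when full -> backpressure
            outq.put(_STOP)
        else:                             # processor or sink: drain until _STOP
            while True:
                item = inq.get()
                if item is _STOP:
                    if outq is not None:
                        outq.put(_STOP)   # propagate shutdown downstream
                    return
                if map_fn is not None:
                    item = map_fn(item)   # inline adapter on the node's input
                result = fn(item)
                if outq is not None:
                    outq.put(result)

# Tiny demo of the conventions above.
results = []

class NumberSource:
    def __init__(self, items):
        self._it = iter(items)
        self._blocked = threading.Event()
    def produce(self):
        try:
            return next(self._it)
        except StopIteration:
            self._blocked.wait()  # exhausted: block forever (daemon thread)

class Doubler:
    def process(self, x):
        return x * 2

pipe = Pipeline()
pipe.add_source(NumberSource([1, 2, 3]))
pipe.add_processor(Doubler())
pipe.add_sink(results.append)  # a bound method works as a sink callable
pipe.start()
time.sleep(0.5)                # give the worker threads time to drain
# results is now [2, 4, 6]
```

The bounded `queue.Queue` makes the "block when full" backpressure policy the default; the spill-over and rate-limit policies from the additional requirements would slot into `_run` but are left out of this sketch.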