[WIP] fix: add script to install node_modules in mounted mfes #232
base: release
Changes from all commits: 9438a66, 3a63d4a, 5fd1571, 8b819e6, f94c755
New changelog entry:

```
@@ -0,0 +1 @@
- [Improvement] Auto install npm dependencies in mounted MFEs during initialization. (by @Danyal-Faheem)
```
```
@@ -0,0 +1,13 @@
# These jobs are required to run the npm install script when mounting MFEs
{%- set mfe_data = MFEMountData(MOUNTS) %}

{%- for app_name, app, mounts in mfe_data.mounted %}
{{ app_name }}-job:
  image: "{{ MFE_DOCKER_IMAGE_DEV_PREFIX }}-{{ app_name }}-dev:{{ MFE_VERSION }}"
  volumes:
  {%- for mount in mounts %}
    - {{ mount }}
  {%- endfor %}
  environment:
    - "NODE_ENV=development"
{%- endfor %}
```
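For illustration, here is roughly what the template above would render to for a single mounted MFE. Every concrete value below is an assumption made up for the example (the app name `learning`, the image prefix, the version tag, and the mount path are not taken from the PR):

```yaml
# Hypothetical rendered output for one mounted MFE named "learning"
learning-job:
  image: "openedx-learning-dev:19.0.0"     # assumed prefix and version
  volumes:
    - ../frontend-app-learning:/openedx/app  # assumed mount source/target
  environment:
    - "NODE_ENV=development"
```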
```
@@ -0,0 +1,14 @@
# We only need to define these for local as well, otherwise docker throws a "not found" error when we run the local init jobs
# These jobs are required to run the npm install script when mounting MFEs
{%- set mfe_data = MFEMountData(MOUNTS) %}

{%- for app_name, app, mounts in mfe_data.mounted %}
{{ app_name }}-job:
  image: "{{ MFE_DOCKER_IMAGE_DEV_PREFIX }}-{{ app_name }}-dev:{{ MFE_VERSION }}"
  volumes:
  {%- for mount in mounts %}
    - {{ mount }}
  {%- endfor %}
  environment:
    - "NODE_ENV=production"
{%- endfor %}
```
```
@@ -2,6 +2,7 @@
import functools
import os
import re
import typing as t
from glob import glob
```
@@ -334,3 +335,29 @@ def _check_mfe_host(config: Config) -> None: | |
| f'Warning: MFE_HOST="{mfe_host}" is not a subdomain of LMS_HOST="{lms_host}". ' | ||
| "This configuration is not typically recommended and may lead to unexpected behavior." | ||
| ) | ||
|
|
||
|
|
||
| @tutor_hooks.Actions.CONFIG_LOADED.add() | ||
| def _run_jobs_in_mounted_mfes(config: Config) -> None: | ||
|
**Contributor:** I'm not a big fan of adding callbacks to the `CONFIG_LOADED` action. It means we have to manually parse […] Instead, I suggest the following: […] What do you think?

**Collaborator:** At first glance I agree with @regisb here. If there are no technical blockers to doing it this way, I think it would be less problematic than adding a […] We're already assuming […]
```
    mounts = get_typed(config, "MOUNTS", list, [])
    mfe_mount_data = MFEMountData(mounts)
```
**Contributor:** We can return here if no mounts exist in the config.

**Contributor:** +1
```
    if not mfe_mount_data.mounted:
        return None

    mfe_npm_install_file = os.path.join(
        str(
            importlib_resources.files("tutormfe")
            / "templates"
            / "mfe"
            / "tasks"
            / "mfe"
            / "npm-install.sh"
        )
    )

    for mfe, _, _ in mfe_mount_data.mounted:
        with tutor_hooks.Contexts.app("mfe").enter():
            with open(mfe_npm_install_file, encoding="utf-8") as fi:
                tutor_hooks.Filters.CLI_DO_INIT_TASKS.add_item((mfe, fi.read()))
```
```
@@ -0,0 +1,46 @@
#!/bin/bash

if [ "$NODE_ENV" != "development" ]; then
    echo "This script is intended to run only in the development environment."
    exit
fi

# Paths for the fingerprint and package-lock.json
FINGERPRINT_FILE="node_modules/.fingerprint"
PACKAGE_LOCK="package-lock.json"

# Function to generate the SHA1 hash of the package-lock.json file
generate_fingerprint() {
    sha1sum "$PACKAGE_LOCK" | awk '{print $1}'
}

# If the node_modules directory is missing, install dependencies
if [ ! -d "node_modules" ]; then
    echo "⚠️ node_modules not found! Installing dependencies..."
    npm clean-install

# If the fingerprint file is missing, assume packages may be out of sync and reinstall
elif [ ! -f "$FINGERPRINT_FILE" ]; then
    echo "⚠️ Fingerprint file not found! Re-installing dependencies..."
    npm clean-install

# If both node_modules and the fingerprint file exist, compare fingerprints
else
    CURRENT_FINGERPRINT=$(generate_fingerprint)
    STORED_FINGERPRINT=$(cat "$FINGERPRINT_FILE")

    # Reinstall if package-lock.json has changed
    if [ "$CURRENT_FINGERPRINT" != "$STORED_FINGERPRINT" ]; then
        echo "⚠️ package-lock.json changed! Re-installing dependencies..."
        npm clean-install
    else
        # Everything is up to date
        echo "✅ Dependencies are up to date."
        exit 0
    fi
fi

# Store the new fingerprint after installation
generate_fingerprint > "$FINGERPRINT_FILE"
echo "✅ Dependencies installed and fingerprint updated."
```
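The fingerprint mechanism can be exercised on its own, without npm or a real project. A minimal sketch (file names and lockfile contents are made up for the demo; it uses a temporary directory so nothing real is touched):

```shell
# Standalone demo of the fingerprint check used by the script above.
set -e
dir=$(mktemp -d)
cd "$dir"
echo '{"name": "demo", "lockfileVersion": 3}' > package-lock.json
mkdir -p node_modules

# Same idea as generate_fingerprint: SHA1 of the lockfile
fingerprint() { sha1sum package-lock.json | awk '{print $1}'; }

# Store the fingerprint, then verify it matches (the "up to date" branch)
fingerprint > node_modules/.fingerprint
[ "$(fingerprint)" = "$(cat node_modules/.fingerprint)" ] && echo "up to date"

# Simulate a dependency change: the stored fingerprint no longer matches
echo '{"name": "demo", "lockfileVersion": 3, "x": 1}' > package-lock.json
[ "$(fingerprint)" != "$(cat node_modules/.fingerprint)" ] && echo "reinstall needed"
```

This is exactly the decision the script makes before choosing between `npm clean-install` and a no-op.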
I don't understand why this is needed. It is using dev prefixes in the local/production env, which in itself is not coherent. Why is the local init job failing if this is not provided?
We add a job for every mounted MFE to the `CLI_DO_INIT_TASKS` filter. When we run the initialization task with the local context, aka `tutor local do init`, it will try to run those jobs that we added. However, we never defined a docker compose service to run that job in, so it fails. We define this job structure here to let docker know where to run the job for each MFE. The job will still run in local mode but will exit because the `NODE_ENV` environment variable is set to production.

The reason we added dev prefixes for local is just for consistency, as we need the task to exit after discovering that `NODE_ENV` is set to production.
Let's only do this for the dev context. It is confusing to have this for local when it is not meant to be used.
Right, we tried to find a way to do that. However, we are using the `CONFIG_LOADED` action to parse mounts and add our jobs, and this action does not tell us whether the job is being run with the local or the dev context, so we can't add it conditionally.

The only hook that provides this information is the `APP_PUBLIC_HOSTS` filter, and we cannot use it because it doesn't give us any information about the mounts. This is why we had no choice but to run the job in both the local and dev containers and then exit if a production env is discovered.

Ah, ok.
@arbrandes what do you think about this?
@arbrandes Ping 🙂
Have you given @regisb's suggestion a try? I too feel that having this in the local file is troublesome, and doing it some other way (i.e., a `DO_INIT` hook that iterates over `COMPOSE_MOUNTS`) looks like it would not require adding a `-jobs` service.
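As a rough sketch of that direction, the mount-to-service mapping could live in a small pure function that a `COMPOSE_MOUNTS`-based hook then consumes. Everything here is hypothetical: the helper name, the `frontend-app-` folder convention, and the `-job` suffix come from this discussion, not from a finalized API:

```python
def job_services_for_mounts(mount_basenames: list[str]) -> list[str]:
    """Map mounted MFE checkout folders (e.g. "frontend-app-learning")
    to the hypothetical "-job" compose services that would run npm-install.sh.
    Non-MFE mounts (e.g. "edx-platform") are ignored."""
    prefix = "frontend-app-"
    return [
        name[len(prefix):] + "-job"
        for name in mount_basenames
        if name.startswith(prefix)
    ]

print(job_services_for_mounts(["frontend-app-learning", "edx-platform"]))
# → ['learning-job']
```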