Compare commits

8 Commits

| SHA |
|---|
| 61c5b322dd |
| 3e95ad76f9 |
| 695e206157 |
| 15faada234 |
| 9484857d9a |
| 7075b3f52b |
| f6933978e9 |
| 684195a04f |
@@ -18,3 +18,4 @@ ncbitaxo_*
Readme_files
Readme.html
tmp.*
reserve
@@ -2,14 +2,14 @@
## Intended use
This project packages the MetabarcodingSchool training lab into one reproducible bundle. You get Python, R, and Bash kernels, a Quarto-built course website, and preconfigured admin/student accounts, so onboarding a class is a single command instead of a day of setup. Everything runs locally on a single machine, student work persists between sessions, and `./start-jupyterhub.sh` takes care of pulling images, rendering the site, preparing volumes, and bringing JupyterHub up at `http://localhost:8888`.
## Prerequisites (with quick checks)
You only need **Docker and Docker Compose** on the machine that will host the lab. All other tools (Quarto, Hugo, Python, R) are provided via a builder Docker image and do not need to be installed on your system.
- macOS: install [OrbStack](https://orbstack.dev/) (recommended) or Docker Desktop; both ship Docker Engine and Compose.
- Linux: install Docker Engine and the Compose plugin, either from your distribution (e.g. `sudo apt install docker.io docker-compose-plugin`) or from Docker's official packages.
- Windows: install Docker Desktop with the WSL2 backend enabled.
Verify from a terminal:
@@ -19,260 +19,311 @@ docker --version
```bash
docker --version
docker compose version   # or: docker-compose --version
```
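If you script these checks (for example in a classroom setup playbook), a small helper can fail early with a clear message. This is an illustrative sketch, not part of `start-jupyterhub.sh`:

```shell
# Illustrative helper (not part of start-jupyterhub.sh): fail early with a
# clear message when a required command is missing.
require_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1 (install it before starting the lab)" >&2
    return 1
  fi
}

require_cmd sh       # always present on POSIX systems, prints "ok: sh"
```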
## Three operating modes
`./start-jupyterhub.sh` has three modes that control how Docker images are obtained:
| Mode | Flag | Description |
|------|------|-------------|
| **Pull** (default) | *(none)* | Pull pre-built images from the registry and start |
| **Local build** | `--local-build` | Build images locally on your machine and start (no push) |
| **Publish** | `--publish` | Build multi-arch images (amd64 + arm64), push to the registry, then start |
### Pull mode — default, fastest
```bash
./start-jupyterhub.sh
```
Downloads the three pre-built images from `registry.metabarcoding.org/metabarschool/`:

- `obijupyterhub-builder:latest`
- `obijupyterhub-hub:latest`
- `obijupyterhub-student:latest`
This is what instructors should use in class. No compilation, no waiting.
### Local build mode — for development
```bash
./start-jupyterhub.sh --local-build
```
Builds all three images locally using the Dockerfiles in `obijupyterhub/`. The built images stay on your machine and are not pushed to the registry. Additional flags apply only in this mode:

| Flag | Effect |
|------|--------|
| `--no-build` / `--offline` | Skip all image operations, use whatever is already local |
| `--force-rebuild` | Rebuild all images without Docker cache |
| `--rebuild-builder` | Force rebuild the builder image only |
| `--rebuild-student` | Force rebuild the student image only |
| `--rebuild-hub` | Force rebuild the JupyterHub image only |

`--rebuild-*` and `--force-rebuild` imply `--local-build` automatically.
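As a rough sketch of how this mode selection and implication could be wired up in a shell script (hypothetical; the actual option parsing in `start-jupyterhub.sh` may differ):

```shell
# Hypothetical sketch of mode selection; the real start-jupyterhub.sh may differ.
MODE=pull   # default: pull pre-built images
for arg in "$@"; do
  case "$arg" in
    --local-build) MODE=local ;;
    --publish)     MODE=publish ;;
    --force-rebuild|--rebuild-builder|--rebuild-student|--rebuild-hub)
      MODE=local ;;   # --rebuild-* and --force-rebuild imply --local-build
  esac
done
echo "mode: $MODE"
```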
### Publish mode — for maintainers
```bash
./start-jupyterhub.sh --publish
```

Builds all three images for both `linux/amd64` and `linux/arm64` using `docker buildx`, then pushes them to the registry tagged with both `:latest` and the version from `version.txt`. Requires write access to the registry and `docker buildx` with a `docker-container` driver.
**Before publishing a new version**, bump `version.txt` at the project root:

```
0.2.0
```
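The tagging scheme described above can be sketched as follows (assumed behavior, with a fallback version for illustration; the real script may construct the tags differently):

```shell
# Sketch (assumed behavior): derive the tags --publish would push from version.txt.
VERSION=$(cat version.txt 2>/dev/null || echo "0.2.0")   # fallback for illustration
REGISTRY=registry.metabarcoding.org/metabarschool
for img in obijupyterhub-builder obijupyterhub-hub obijupyterhub-student; do
  echo "$REGISTRY/$img:latest"
  echo "$REGISTRY/$img:$VERSION"
done
```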
## Actions (all modes)

These flags work alongside any mode:

| Flag | Effect |
|------|--------|
| `--stop-server` | Stop the stack and remove student containers, then exit |
| `--update-lectures` | Rebuild the course website only (no Docker stop/start) |
| `--update-obidoc` | Rebuild the obidoc documentation only (no Docker stop/start) |
| `--build-obidoc` | Force rebuild of obidoc documentation on next full start |
## Installation and first run
1. Clone the project:
```bash
git clone https://forge.metabarcoding.org/MetabarcodingSchool/OBIJupyterHub.git
cd OBIJupyterHub
```
2. Repository structure:
```
OBIJupyterHub/
├── start-jupyterhub.sh        single entry point
├── version.txt                current image version number
├── obijupyterhub/
│   ├── docker-compose.yml
│   ├── Dockerfile             student image
│   ├── Dockerfile.hub         JupyterHub image
│   ├── Dockerfile.builder     builder image (Quarto, Hugo, R, Python)
│   └── jupyterhub_config.py
├── jupyterhub_volumes/        data persisted on the host
│   ├── builder/R_packages/    R package cache for building lectures
│   ├── course/                read-only for students (notebooks, data, bin)
│   ├── shared/                shared read/write space for everyone
│   ├── users/                 per-user persistent data
│   └── web/                   rendered course website
├── tools/
│   ├── install_quarto_deps.R  automatic R dependency detection and install
│   └── install_packages.sh    install shared R packages into course/
└── web_src/                   Quarto sources for the course website
```
3. (Optional) place course materials in `jupyterhub_volumes/course/` before first run.
4. Start everything:
```bash
./start-jupyterhub.sh                 # pulls images from registry (recommended)
# or
./start-jupyterhub.sh --local-build   # builds locally
```
5. Access JupyterHub at `http://localhost:8888`.
6. Stop when done:
```bash
./start-jupyterhub.sh --stop-server
# or from obijupyterhub/
docker-compose down
```
## How the builder image works
The `obijupyterhub-builder` image contains Quarto, Hugo, R, and Python — you do not need any of these on your host. The script runs this image as a temporary container to:
- detect R package dependencies from your `.qmd` files (scans `library()`, `require()`, and `remotes::install_git/github()` calls using base R — no external package required)
- install missing R packages into `jupyterhub_volumes/builder/R_packages/` (cached between runs)
- render the Quarto website from `web_src/`
- generate PDF galleries and `pages.json`
- (optionally) build the obidoc documentation with Hugo
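The dependency-scanning idea can be illustrated with plain `grep`/`sed` (this is a sketch only, not the project's `install_quarto_deps.R`):

```shell
# Illustrative only: extract package names from library()/require() calls in a
# .qmd file with grep/sed, similar in spirit to what the builder does in base R.
printf 'library(vegan)\nrequire(ade4)\nlibrary(vegan)\n' > demo.qmd
grep -hoE '(library|require)\([A-Za-z0-9.]+\)' demo.qmd \
  | sed -E 's/^(library|require)\(([^)]+)\)$/\2/' \
  | sort -u          # prints: ade4, then vegan
rm demo.qmd
```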
### R package caching
Packages are cached in `jupyterhub_volumes/builder/R_packages/`:
- **First build**: all packages used in your `.qmd` files are detected and installed (may take a while).
- **Subsequent builds**: only new packages are installed, making builds much faster.
- **Non-CRAN packages**: packages installed via `remotes::install_git()` or `remotes::install_github()` in your `.qmd` files are detected and pre-installed automatically before rendering.
- **Clear the cache**: delete `jupyterhub_volumes/builder/R_packages/` to force a full reinstall.
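The "install only what is missing" check boils down to testing whether a package directory already exists in the cache. An illustrative sketch using a temporary directory as a stand-in (the real logic runs inside the builder container):

```shell
# Illustrative sketch of the "install only missing packages" check; the real
# logic runs inside the builder container against jupyterhub_volumes/builder/R_packages/.
CACHE=$(mktemp -d)                 # stand-in for the cache directory
mkdir -p "$CACHE/vegan"            # pretend vegan is already cached
for pkg in vegan ade4; do
  if [ -d "$CACHE/$pkg" ]; then
    echo "cached: $pkg"
  else
    echo "would install: $pkg"
  fi
done
rm -rf "$CACHE"
```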
## OBITools documentation (obidoc)
The OBITools4 documentation is built from the [`obitools4-doc`](https://github.com/metabarcoding/obitools4-doc) repository using Hugo and served as a static site at `http://localhost:8888/obidoc/`.
### How it works
The builder container clones the repository (with all submodules), runs `hugo build`, and writes the generated HTML into `jupyterhub_volumes/web/obidoc/`. Caddy then serves these files directly — no special routing is needed.
### First installation
The documentation is built automatically on the first full start if `jupyterhub_volumes/web/obidoc/` is empty:
```bash
./start-jupyterhub.sh
```
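The "build only if empty" decision is a simple directory-emptiness test; a sketch using a temporary directory as a stand-in for `jupyterhub_volumes/web/obidoc/`:

```shell
# Sketch of the "build only if empty" decision (illustrative; paths assumed).
DOCDIR=$(mktemp -d)                # stand-in for jupyterhub_volumes/web/obidoc/
if [ -z "$(ls -A "$DOCDIR" 2>/dev/null)" ]; then
  echo "obidoc is empty: building with hugo"
else
  echo "obidoc already present: skipping build"
fi
rm -rf "$DOCDIR"
```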
To force a build even if the directory is already populated, use `--build-obidoc` during a full start:

```bash
./start-jupyterhub.sh --build-obidoc
```
### Updating the documentation
To rebuild the documentation without stopping the running stack:
```bash
./start-jupyterhub.sh --update-obidoc
```
This pulls the latest version of the builder image (or uses the local one with `--local-build`), reclones the `obitools4-doc` repository, rebuilds the site, and replaces the contents of `jupyterhub_volumes/web/obidoc/`. The JupyterHub stack keeps running throughout.
### Removing the documentation
To remove the built documentation (e.g. to free disk space or force a clean rebuild):
```bash
rm -rf jupyterhub_volumes/web/obidoc/*
```
The next `./start-jupyterhub.sh` will rebuild it automatically.
## Managing course and student data
Each student lands in `/home/jovyan/work/` with three areas:
```
work/
├── [student files]      personal workspace (persistent)
├── R_packages/          personal R packages (writable by student)
├── shared/              shared space (read/write, all students)
└── course/              course files (read-only)
    ├── R_packages/      shared R packages installed by the instructor
    ├── bin/             shared executables (added to PATH)
    └── [course materials]
```
On the host, place course files in `jupyterhub_volumes/course/`, collaborative files in `jupyterhub_volumes/shared/`, and collect student work from `jupyterhub_volumes/users/`.
### Installing shared R packages (instructor)
```bash
tools/install_packages.sh reshape2 plotly knitr
```
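Conceptually, such a script loops over the package names and installs each into the shared library. A hypothetical dry-run sketch (the real `tools/install_packages.sh` may work differently):

```shell
# Hypothetical dry-run sketch; the real tools/install_packages.sh may differ.
install_course_pkgs() {
  lib=jupyterhub_volumes/course/R_packages
  for pkg in "$@"; do
    # print the R command that would install into the shared library
    echo "Rscript -e \"install.packages('$pkg', lib='$lib')\""
  done
}

install_course_pkgs reshape2 plotly
```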
### Installing personal R packages (students)
```r
install.packages('mypackage')   # installs into work/R_packages/
```
### Loading packages (students)
```r
library(reshape2)   # searches: work/R_packages/ → work/course/R_packages/ → system
```
## User accounts
Defaults are set in `obijupyterhub/docker-compose.yml`:
| Account | Username | Password |
|---------|----------|----------|
| Admin | `admin` | `admin2025` |
| Students | any | `metabar2025` |
Change `JUPYTERHUB_ADMIN_PASSWORD` and `JUPYTERHUB_PASSWORD` in the compose file, then rerun `./start-jupyterhub.sh`.
To restrict access to a predefined list, edit `jupyterhub_config.py`:
```python
c.Authenticator.allowed_users = {'student1', 'student2', 'student3'}
```
## Customising the images
All image customisations require a rebuild. Use `--local-build` (or the targeted `--rebuild-*` flag) to apply changes locally, or `--publish` to push them to the registry.
### Add R packages baked into the student image
Edit `obijupyterhub/Dockerfile` (before `USER ${NB_UID}`):
```dockerfile
RUN R -e "install.packages(c('your_package'), repos='http://cran.rstudio.com/')"
```
Then rebuild:
```bash
./start-jupyterhub.sh --rebuild-student
```
### Add Python packages

Edit `obijupyterhub/Dockerfile` (before `USER ${NB_UID}`):
```dockerfile
RUN pip install numpy pandas matplotlib seaborn
```
Then rebuild:
```bash
./start-jupyterhub.sh --rebuild-student
```
### Change the listening port

In `obijupyterhub/docker-compose.yml`:
```yaml
ports:
  - "8001:80"   # accessible at http://localhost:8001
```
## Troubleshooting
**Docker daemon unavailable**: make sure OrbStack / Docker Desktop / the daemon is running.

**Student containers do not start**: run `docker-compose logs jupyterhub` from `obijupyterhub/` and confirm the student image is present:
```bash
docker images | grep obijupyterhub-student
```
**Port conflict**: change the published port in `docker-compose.yml`.
**Registry pull fails**: check your network, or fall back to a local build:
```bash
./start-jupyterhub.sh --local-build
```
**Start from scratch**:
```bash
./start-jupyterhub.sh --stop-server

cd obijupyterhub
docker-compose down -v
docker rmi jupyterhub-hub jupyterhub-student obijupyterhub-builder 2>/dev/null || true
docker rmi registry.metabarcoding.org/metabarschool/obijupyterhub-hub:latest \
           registry.metabarcoding.org/metabarschool/obijupyterhub-student:latest \
           registry.metabarcoding.org/metabarschool/obijupyterhub-builder:latest 2>/dev/null || true
cd ..

rm -rf jupyterhub_volumes/builder/R_packages   # clear R package cache

./start-jupyterhub.sh   # pull fresh images and start
```
@@ -19,37 +19,63 @@ RUN TEMP=. curl -L https://raw.githubusercontent.com/metabarcoding/obitools4/mas
```dockerfile
    && cp $HOME/obitools-build/bin/* /usr/local/bin
RUN ls -l /usr/local/bin


# ---------- Stage 2: final image ----------
FROM jupyter/base-notebook:latest

USER root

# Install only the runtime dependencies (without build-essential)
RUN apt-get update && apt-get install -y --no-install-recommends \
    # R and base dependencies
    r-base \
    r-base-dev \
    libcurl4-openssl-dev \
    libssl-dev \
    libxml2-dev \
    libicu-dev \
    zlib1g-dev \
    # Fonts and graphics rendering (required for ggplot2, ragg, etc.)
    libharfbuzz-dev \
    libfribidi-dev \
    libfontconfig1-dev \
    libfreetype6-dev \
    libpng-dev \
    libtiff5-dev \
    libjpeg-dev \
    pandoc \
    # Build tools and version management
    libgit2-dev \
    cmake \
    # System utilities already present in this Dockerfile
    curl \
    wget \
    git \
    vim \
    nano \
    less \
    gdebi-core \
    ripgrep \
    # For generating PDFs/reports from R Markdown / Jupyter
    texlive-xetex \
    texlive-luatex \
    texlive-fonts-recommended \
    texlive-fonts-extra \
    texlive-latex-extra \
    texlive-plain-generic \
    lmodern \
    fonts-lmodern \
    librsvg2-bin \
    cm-super \
    # Ruby (if needed for anything else)
    ruby \
    ruby-dev \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# Install R and packages
COPY install_R_packages.R /tmp/install_R_packages.R
RUN Rscript /tmp/install_R_packages.R --no-save --no-restore && \
    rm -rf /tmp/Rtmp* /tmp/install_R_packages.R

# Install the other tools
RUN pip install --no-cache-dir bash_kernel csvkit && \
```
@@ -57,11 +83,19 @@ RUN pip install --no-cache-dir bash_kernel csvkit && \
```dockerfile
RUN gem install youplot

# Install Quarto (multi-arch)
RUN ARCH=$(dpkg --print-architecture) && \
    QUARTO_VERSION="1.8.27" && \
    wget https://github.com/quarto-dev/quarto-cli/releases/download/v${QUARTO_VERSION}/quarto-${QUARTO_VERSION}-linux-${ARCH}.deb && \
    gdebi --non-interactive quarto-${QUARTO_VERSION}-linux-${ARCH}.deb && \
    rm quarto-${QUARTO_VERSION}-linux-${ARCH}.deb

# Set permissions for Jupyter user
RUN mkdir -p /home/${NB_USER}/.local/share/jupyter && \
    chown -R ${NB_UID}:${NB_GID} /home/${NB_USER}


# Copy only the csvlens binary from the builder
COPY --from=rust-builder /home/jovyan/.cargo/bin/csvlens /usr/local/bin/
COPY --from=rust-builder /usr/local/bin/* /usr/local/bin/
```
@@ -32,6 +32,8 @@ RUN apt-get update \
```dockerfile
    libpng-dev \
    libtiff5-dev \
    libjpeg-dev \
    libuv1-dev \
    golang-go \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
```
@@ -43,7 +45,7 @@ RUN mkdir -p ${R_LIBS_BUILDER} \
 
 # Install Hugo (extended version for SCSS support)
 # Detect architecture and download appropriate binary
-ARG HUGO_VERSION=0.140.2
+ARG HUGO_VERSION=0.159.2
 RUN ARCH=$(dpkg --print-architecture) \
     && case "$ARCH" in \
         amd64) HUGO_ARCH="amd64" ;; \
@@ -54,12 +56,15 @@ RUN ARCH=$(dpkg --print-architecture) \
     | tar -xz -C /usr/local/bin hugo \
     && chmod +x /usr/local/bin/hugo
 
-# Install Quarto using the official .deb package (handles all dependencies properly)
+# Install Quarto from the official tarball.
+# Using tar.gz instead of .deb avoids dpkg and is more reliable in cross-arch
+# (QEMU) builds where GitHub downloads are slower and more prone to transient errors.
 ARG QUARTO_VERSION=1.6.42
 RUN ARCH=$(dpkg --print-architecture) \
-    && curl -fsSL -o /tmp/quarto.deb "https://github.com/quarto-dev/quarto-cli/releases/download/v${QUARTO_VERSION}/quarto-${QUARTO_VERSION}-linux-${ARCH}.deb" \
-    && dpkg -i /tmp/quarto.deb \
-    && rm /tmp/quarto.deb
+    && curl -fsSL --retry 5 --retry-delay 10 \
+        "https://github.com/quarto-dev/quarto-cli/releases/download/v${QUARTO_VERSION}/quarto-${QUARTO_VERSION}-linux-${ARCH}.tar.gz" \
+    | tar -xz -C /opt \
+    && ln -s "/opt/quarto-${QUARTO_VERSION}/bin/quarto" /usr/local/bin/quarto
 
 # Create working directory
 WORKDIR /workspace
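The tarball-based install above hinges on composing the right release URL per architecture. A minimal, side-effect-free sketch of that URL resolution (the amd64 fallback for hosts without `dpkg` is an assumption, not part of the Dockerfile):

```shell
#!/bin/sh
# Sketch of the per-architecture Quarto release URL used in the Dockerfile
# above. dpkg reports "amd64" or "arm64"; fall back to amd64 when dpkg is
# unavailable (assumption for non-Debian hosts).
QUARTO_VERSION="1.6.42"
ARCH=$(dpkg --print-architecture 2>/dev/null || echo amd64)
URL="https://github.com/quarto-dev/quarto-cli/releases/download/v${QUARTO_VERSION}/quarto-${QUARTO_VERSION}-linux-${ARCH}.tar.gz"
echo "$URL"
```

Piping this URL through `curl | tar -xz -C /opt` is what lets the build skip dpkg entirely.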
@@ -1,11 +1,8 @@
 services:
   jupyterhub:
-    build:
-      context: .
-      dockerfile: Dockerfile.hub
     container_name: jupyterhub
     hostname: jupyterhub
-    image: jupyterhub-hub:latest
+    image: ${HUB_IMAGE:-registry.metabarcoding.org/metabarschool/obijupyterhub-hub:latest}
     expose:
       - "8000"
     volumes:
@@ -21,6 +18,8 @@ services:
       - jupyterhub-network
     restart: unless-stopped
     environment:
+      # Docker image used for student containers (read by jupyterhub_config.py)
+      STUDENT_IMAGE: ${STUDENT_IMAGE:-registry.metabarcoding.org/metabarschool/obijupyterhub-student:latest}
       # Shared password for all students
      JUPYTERHUB_PASSWORD: metabar2025
       # Admin password (for installing R packages)
@@ -0,0 +1,43 @@
+#!/usr/bin/env Rscript
+
+# Install pak (itself as a binary when possible)
+install.packages("pak", repos = sprintf("https://r-lib.github.io/p/pak/stable/%s/%s/%s", .Platform$pkgType, R.Version()$os, R.Version()$arch))
+pak::pkg_install("cli")
+
+# Auto-detect the platform and install all packages as binaries
+pak::pkg_install(c(
+  "IRkernel",
+  "tidyverse",
+  "vegan",
+  "ade4",
+  "BiocManager",
+  "remotes",
+  "igraph",
+  "Rdpack"
+))
+
+# ------------------------------------------------------------
+# Bioconductor packages (always via BiocManager)
+# ------------------------------------------------------------
+pak::pkg_install("bioc::biomformat")
+
+# ------------------------------------------------------------
+# Packages from GitHub / git repositories
+# ------------------------------------------------------------
+pak::pkg_install("metabaRfactory/metabaR")
+pak::pkg_install("git::https://forge.metabarcoding.org/obitools/ROBIUtils.git")
+pak::pkg_install("git::https://forge.metabarcoding.org/obitools/ROBITaxonomy.git")
+pak::pkg_install("git::https://forge.metabarcoding.org/obitools/ROBITools.git")
+pak::pkg_install("git::https://forge.metabarcoding.org/MetabarcodingSchool/biodiversity-metrics.git")
+
+# ------------------------------------------------------------
+# Install the Jupyter kernel for IRkernel
+# ------------------------------------------------------------
+# Running as root -> system-wide install, otherwise -> user install
+if (Sys.info()["user"] == "root") {
+  IRkernel::installspec(user = FALSE)
+} else {
+  IRkernel::installspec(user = TRUE)
+}
+
+cat("\n✅ All R packages were installed successfully.\n")
@@ -14,7 +14,10 @@ VOLUMES_BASE_PATH = '/volumes/users' # Path as seen from JupyterHub container (
 HOST_VOLUMES_PATH = os.environ.get('HOST_VOLUMES_PATH', '/volumes') # Real path on host machine (parent dir)
 
 # Docker image to use for student containers
-c.DockerSpawner.image = 'jupyterhub-student:latest'
+c.DockerSpawner.image = os.environ.get(
+    'STUDENT_IMAGE',
+    'registry.metabarcoding.org/metabarschool/obijupyterhub-student:latest'
+)
 
 # Docker network (create with: docker network create jupyterhub-network)
 c.DockerSpawner.network_name = 'jupyterhub-network'
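Both the compose file (`${STUDENT_IMAGE:-…}`) and `jupyterhub_config.py` (`os.environ.get`) resolve the student image the same way: the environment value when set, the registry default otherwise. A minimal shell sketch of that fallback rule (`resolve_image` is an illustrative helper, not a function from the repo):

```shell
#!/bin/sh
# Same default-resolution rule as ${STUDENT_IMAGE:-default} in docker-compose
# and os.environ.get('STUDENT_IMAGE', default) in jupyterhub_config.py.
resolve_image() {
    # $1: value from the environment (may be empty), $2: registry default
    printf '%s\n' "${1:-$2}"
}

resolve_image "" "registry.metabarcoding.org/metabarschool/obijupyterhub-student:latest"
resolve_image "jupyterhub-student:latest" "registry.metabarcoding.org/metabarschool/obijupyterhub-student:latest"
```

Keeping the default in both places means the hub and the spawner agree on the image even when `STUDENT_IMAGE` is unset.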
+334 -103
@@ -1,46 +1,84 @@
 #!/bin/bash
 
 # JupyterHub startup script for labs
-# Usage: ./start-jupyterhub.sh [--no-build|--offline] [--force-rebuild] [--stop-server] [--update-lectures] [--build-obidoc]
+#
+# Modes (mutually exclusive):
+#   (default)       Pull images from registry and start
+#   --local-build   Build images locally and start (no push)
+#   --publish       Build multi-arch images, push to registry, and start
+#
+# Usage: ./start-jupyterhub.sh [mode] [options]
 
 set -e
 
 SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
 DOCKER_DIR="${SCRIPT_DIR}/obijupyterhub/"
-BUILDER_IMAGE="obijupyterhub-builder:latest"
+REGISTRY="registry.metabarcoding.org/metabarschool"
+PLATFORMS="linux/amd64,linux/arm64"
+BUILDX_BUILDER_NAME="obijupyterhub-buildx"
 
 # Colors for display
 GREEN='\033[0;32m'
 BLUE='\033[0;34m'
 YELLOW='\033[1;33m'
-NC='\033[0m' # No Color
+NC='\033[0m'
 
+# Operating mode
+LOCAL_BUILD=false
+PUBLISH=false
+
+# Build options (meaningful in --local-build mode)
 NO_BUILD=false
 FORCE_REBUILD=false
+REBUILD_BUILDER=false
+REBUILD_STUDENT=false
+REBUILD_HUB=false
+
+# Actions
 STOP_SERVER=false
 UPDATE_LECTURES=false
+UPDATE_OBIDOC=false
 BUILD_OBIDOC=false
 
 usage() {
     cat <<EOF
-Usage: ./start-jupyterhub.sh [options]
+Usage: ./start-jupyterhub.sh [mode] [options]
 
-Options:
-  --no-build | --offline   Skip Docker image builds (use existing images)
-  --force-rebuild          Rebuild images without cache
+Modes (mutually exclusive, default is pull-from-registry):
+  --local-build            Build images locally and start (no push to registry)
+  --publish                Build multi-arch images, push to registry, and start
+
+Build options (--local-build only):
+  --no-build | --offline   Skip all image operations (use existing local images)
+  --force-rebuild          Rebuild all local images without cache
+  --rebuild-builder        Force rebuild the builder image only
+  --rebuild-student        Force rebuild the student image only
+  --rebuild-hub            Force rebuild the JupyterHub image only
+
+Actions:
   --stop-server            Stop the stack and remove student containers, then exit
   --update-lectures        Rebuild the course website only (no Docker stop/start)
-  --build-obidoc           Force rebuild of obidoc documentation
+  --update-obidoc          Rebuild the obidoc documentation only (no Docker stop/start)
+  --build-obidoc           Force rebuild of obidoc documentation on next full start
  -h, --help               Show this help
EOF
 }
 
+dockercompose=$(which docker-compose 2>/dev/null || echo 'docker compose')
+
 while [[ $# -gt 0 ]]; do
     case "$1" in
+        --local-build) LOCAL_BUILD=true ;;
+        --publish) PUBLISH=true ;;
         --no-build|--offline) NO_BUILD=true ;;
-        --force-rebuild) FORCE_REBUILD=true ;;
+        --force-rebuild) FORCE_REBUILD=true; LOCAL_BUILD=true ;;
+        --rebuild-builder) REBUILD_BUILDER=true; LOCAL_BUILD=true ;;
+        --rebuild-student) REBUILD_STUDENT=true; LOCAL_BUILD=true ;;
+        --rebuild-hub) REBUILD_HUB=true; LOCAL_BUILD=true ;;
         --stop-server) STOP_SERVER=true ;;
         --update-lectures) UPDATE_LECTURES=true ;;
+        --update-obidoc) UPDATE_OBIDOC=true ;;
         --build-obidoc) BUILD_OBIDOC=true ;;
        -h|--help) usage; exit 0 ;;
         *) echo "Unknown option: $1" >&2; usage; exit 1 ;;
@@ -48,82 +86,234 @@ while [[ $# -gt 0 ]]; do
     shift
 done
 
+if $LOCAL_BUILD && $PUBLISH; then
+    echo "Error: --local-build and --publish cannot be used together" >&2
+    exit 1
+fi
 if $STOP_SERVER && $UPDATE_LECTURES; then
     echo "Error: --stop-server and --update-lectures cannot be used together" >&2
     exit 1
 fi
 
-echo "Starting JupyterHub for Lab"
-echo "=============================="
-echo ""
+# ---------------------------------------------------------------------------
+# Image name helpers
+# ---------------------------------------------------------------------------
 
-echo -e "${BLUE}Building the volume directories...${NC}"
-pushd "${SCRIPT_DIR}/jupyterhub_volumes" >/dev/null
-mkdir -p caddy/data
-mkdir -p caddy/config
-mkdir -p course/bin
-mkdir -p course/R_packages
-mkdir -p jupyterhub
-mkdir -p shared
-mkdir -p users
-mkdir -p web/obidoc
-mkdir -p builder/R_packages
-popd >/dev/null
+local_image_name() {
+    case "$1" in
+        hub) echo "jupyterhub-hub:latest" ;;
+        student) echo "jupyterhub-student:latest" ;;
+        builder) echo "obijupyterhub-builder:latest" ;;
+    esac
+}
 
-pushd "${DOCKER_DIR}" >/dev/null
+registry_image_name() {
+    echo "${REGISTRY}/obijupyterhub-$1:${2:-latest}"
+}
 
-# Check we're in the right directory
-if [ ! -f "Dockerfile" ] || [ ! -f "docker-compose.yml" ]; then
-    echo "Error: Run this script from the jupyterhub-tp/ directory"
+dockerfile_for() {
+    case "$1" in
+        hub) echo "Dockerfile.hub" ;;
+        student) echo "Dockerfile" ;;
+        builder) echo "Dockerfile.builder" ;;
+    esac
+}
+
+read_version() {
+    local vfile="${SCRIPT_DIR}/version.txt"
+    if [ ! -f "$vfile" ]; then
+        echo "Error: version.txt not found at ${vfile}" >&2
         exit 1
     fi
+    tr -d '[:space:]' < "$vfile"
+}
+
+# Set image names based on mode
+if $LOCAL_BUILD; then
+    BUILDER_IMAGE=$(local_image_name builder)
+    HUB_IMAGE=$(local_image_name hub)
+    STUDENT_IMAGE=$(local_image_name student)
+else
+    BUILDER_IMAGE=$(registry_image_name builder)
+    HUB_IMAGE=$(registry_image_name hub)
+    STUDENT_IMAGE=$(registry_image_name student)
+fi
+
+# ---------------------------------------------------------------------------
+# Utility
+# ---------------------------------------------------------------------------
+
+get_file_timestamp() {
+    local file="$1"
+    case "$(uname -s)" in
+        Linux) stat -c %Y "$file" ;;
+        Darwin) stat -f %m "$file" ;;
+        *) echo "Unsupported system" >&2; return 1 ;;
+    esac
+}
 
 check_if_image_needs_rebuild() {
     local image_name="$1"
     local dockerfile="$2"
+    local force="${3:-false}"
+
+    echo -e "${BLUE}Checking image ${image_name}...${NC}"
 
-    # Check if image exists
     if ! docker image inspect "$image_name" >/dev/null 2>&1; then
-        return 0 # Need to build (image doesn't exist)
+        echo -e "${YELLOW}Docker image ${image_name} doesn't exist.${NC}"
+        return 0
     fi
 
-    # If force rebuild, always rebuild
-    if $FORCE_REBUILD; then
-        return 0 # Need to rebuild
+    if $FORCE_REBUILD || $force; then
+        echo -e "${YELLOW}Docker image build is forced.${NC}"
+        return 0
     fi
 
-    # Compare Dockerfile modification time with image creation time
     if [ -f "$dockerfile" ]; then
-        local dockerfile_mtime=$(stat -c %Y "$dockerfile" 2>/dev/null || echo 0)
-        local image_created=$(docker image inspect "$image_name" --format='{{.Created}}' 2>/dev/null | sed 's/\.000000000//' | xargs -I {} date -d "{}" +%s 2>/dev/null || echo 0)
+        local dockerfile_mtime
+        dockerfile_mtime=$(get_file_timestamp "$dockerfile" 2>/dev/null || echo 0)
+        local image_created
+        image_created=$(docker image inspect "$image_name" --format='{{.Created}}' 2>/dev/null \
+            | sed -E 's/\.[0-9]+//' \
+            | (read d; if [[ "$(uname -s)" == "Darwin" ]]; then date -ju -f "%Y-%m-%dT%H:%M:%S" "${d%Z}" +%s; else date -d "$d" +%s; fi) 2>/dev/null || echo 0)
+
+        echo -e "${BLUE}Docker image ${image_name} created at: ${image_created}.${NC}"
+        echo -e "${BLUE}Docker file ${dockerfile} modified at: ${dockerfile_mtime}.${NC}"
 
         if [ "$dockerfile_mtime" -gt "$image_created" ]; then
             echo -e "${YELLOW}Dockerfile is newer than image, rebuild needed${NC}"
-            return 0 # Need to rebuild
+            return 0
         fi
     fi
 
-    return 1 # No need to rebuild
+    return 1
 }
 
-build_builder_image() {
-    if check_if_image_needs_rebuild "$BUILDER_IMAGE" "Dockerfile.builder"; then
-        local build_flag=()
-        if $FORCE_REBUILD; then
-            build_flag+=(--no-cache)
-        fi
+# ---------------------------------------------------------------------------
+# Builder image (local-build mode)
+# ---------------------------------------------------------------------------
+
+build_builder_image() {
+    if check_if_image_needs_rebuild "$(local_image_name builder)" "Dockerfile.builder" "$REBUILD_BUILDER"; then
+        local build_flag=()
+        if $FORCE_REBUILD || $REBUILD_BUILDER; then build_flag+=(--no-cache); fi
         echo ""
         echo -e "${BLUE}Building builder image...${NC}"
-        docker build "${build_flag[@]}" -t "$BUILDER_IMAGE" -f Dockerfile.builder .
+        docker build "${build_flag[@]}" -t "$(local_image_name builder)" -f Dockerfile.builder .
     else
         echo -e "${BLUE}Builder image is up to date, skipping build.${NC}"
     fi
 }
 
-# Run a command inside the builder container with the workspace mounted
-# R packages are persisted in jupyterhub_volumes/builder/R_packages
-# R_LIBS includes both the builder packages (attachment) and the mounted volume
+# ---------------------------------------------------------------------------
+# Student + Hub images (local-build mode)
+# ---------------------------------------------------------------------------
+
+build_images() {
+    if $NO_BUILD; then
+        echo -e "${YELLOW}Skipping image builds (offline/no-build mode).${NC}"
+        return
+    fi
+
+    if check_if_image_needs_rebuild "$(local_image_name student)" "Dockerfile" "$REBUILD_STUDENT"; then
+        local student_flag=()
+        if $FORCE_REBUILD || $REBUILD_STUDENT; then student_flag+=(--no-cache); fi
+        echo ""
+        echo -e "${BLUE}Building student image...${NC}"
+        docker build "${student_flag[@]}" -t "$(local_image_name student)" -f Dockerfile .
+    else
+        echo -e "${BLUE}Student image is up to date, skipping build.${NC}"
+    fi
+
+    if check_if_image_needs_rebuild "$(local_image_name hub)" "Dockerfile.hub" "$REBUILD_HUB"; then
+        local hub_flag=()
+        if $FORCE_REBUILD || $REBUILD_HUB; then hub_flag+=(--no-cache); fi
+        echo ""
+        echo -e "${BLUE}Building JupyterHub image...${NC}"
+        docker build "${hub_flag[@]}" -t "$(local_image_name hub)" -f Dockerfile.hub .
+    else
+        echo -e "${BLUE}JupyterHub image is up to date, skipping build.${NC}"
+    fi
+}
+
+# ---------------------------------------------------------------------------
+# Pull images from registry (default mode)
+# ---------------------------------------------------------------------------
+
+pull_images() {
+    if $NO_BUILD; then
+        echo -e "${YELLOW}Skipping image pull (offline/no-build mode).${NC}"
+        return
+    fi
+    echo ""
+    echo -e "${BLUE}Pulling images from registry...${NC}"
+    docker pull "$BUILDER_IMAGE"
+    docker pull "$HUB_IMAGE"
+    docker pull "$STUDENT_IMAGE"
+}
+
+# ---------------------------------------------------------------------------
+# Multi-arch build + push to registry (--publish mode)
+# ---------------------------------------------------------------------------
+
+ensure_buildx_builder() {
+    docker buildx inspect "$BUILDX_BUILDER_NAME" >/dev/null 2>&1 \
+        || docker buildx create --name "$BUILDX_BUILDER_NAME" --driver docker-container --bootstrap
+}
+
+publish_images() {
+    local version
+    version=$(read_version)
+
+    # docker buildx --push uses Docker's own credential store, independent of
+    # skopeo. Verify auth early to get a clear error before a long build.
+    echo -e "${BLUE}Checking registry authentication...${NC}"
+    local registry_host="${REGISTRY%%/*}"
+    if ! docker login "$registry_host" >/dev/null 2>&1; then
+        echo -e "${YELLOW}Not logged in to ${registry_host}. Running docker login...${NC}"
+        docker login "$registry_host" || {
+            echo "Error: authentication to ${registry_host} failed." >&2
+            echo "Run: docker login ${registry_host}" >&2
+            exit 1
+        }
+    fi
+
+    echo ""
+    echo -e "${BLUE}Publishing images (version ${version}) to ${REGISTRY}${NC}"
+    echo -e "${BLUE}Platforms: ${PLATFORMS}${NC}"
+
+    ensure_buildx_builder
+
+    local names=(builder student hub)
+    local dockerfiles=(Dockerfile.builder Dockerfile Dockerfile.hub)
+
+    for i in "${!names[@]}"; do
+        local name="${names[$i]}"
+        local df="${dockerfiles[$i]}"
+        local remote="${REGISTRY}/obijupyterhub-${name}"
+
+        echo ""
+        echo -e "${BLUE}Building and pushing ${name} image...${NC}"
+        docker buildx build \
+            --builder "$BUILDX_BUILDER_NAME" \
+            --platform "$PLATFORMS" \
+            --tag "${remote}:latest" \
+            --tag "${remote}:${version}" \
+            --file "${df}" \
+            --push \
+            .
+        echo -e "${GREEN}  ${remote}:latest${NC}"
+        echo -e "${GREEN}  ${remote}:${version}${NC}"
+    done
+
+    echo ""
+    echo -e "${GREEN}All images published (version ${version}).${NC}"
+}
+
+# ---------------------------------------------------------------------------
+# Builder container (for website / docs)
+# ---------------------------------------------------------------------------
+
 run_in_builder() {
     docker run --rm \
         -v "${SCRIPT_DIR}:/workspace" \
@@ -134,42 +324,39 @@ run_in_builder() {
         bash -c "$1"
 }
 
+# ---------------------------------------------------------------------------
+# Stack management
+# ---------------------------------------------------------------------------
+
 stop_stack() {
     echo -e "${BLUE}Stopping existing containers...${NC}"
-    docker-compose down 2>/dev/null || true
+    HUB_IMAGE="$HUB_IMAGE" STUDENT_IMAGE="$STUDENT_IMAGE" \
+        ${dockercompose} down 2>/dev/null || true
 
     echo -e "${BLUE}Cleaning up student containers...${NC}"
     docker ps -aq --filter name=jupyter- | xargs -r docker rm -f 2>/dev/null || true
 }
 
-build_images() {
-    if $NO_BUILD; then
-        echo -e "${YELLOW}Skipping image builds (offline/no-build mode).${NC}"
-        return
-    fi
-
-    local build_flag=()
-    if $FORCE_REBUILD; then
-        build_flag+=(--no-cache)
-    fi
-
-    # Check and build student image
-    if check_if_image_needs_rebuild "jupyterhub-student:latest" "Dockerfile"; then
+build_website() {
     echo ""
-        echo -e "${BLUE}Building student image...${NC}"
-        docker build "${build_flag[@]}" -t jupyterhub-student:latest -f Dockerfile .
-    else
-        echo -e "${BLUE}Student image is up to date, skipping build.${NC}"
-    fi
-
-    # Check and build JupyterHub image
-    if check_if_image_needs_rebuild "jupyterhub-hub:latest" "Dockerfile.hub"; then
-        echo ""
-        echo -e "${BLUE}Building JupyterHub image...${NC}"
-        docker build "${build_flag[@]}" -t jupyterhub-hub:latest -f Dockerfile.hub .
-    else
-        echo -e "${BLUE}JupyterHub image is up to date, skipping build.${NC}"
-    fi
+    echo -e "${BLUE}Building web site (in builder container)...${NC}"
+    run_in_builder '
+        set -e
+        echo "-> Detecting and installing R dependencies..."
+        Rscript /workspace/tools/install_quarto_deps.R /workspace/web_src
+
+        echo "-> Rendering Quarto site..."
+        cd /workspace/web_src
+        quarto render
+        find . -name "*.pdf" -print | while read pdfname; do
+            dest="/workspace/jupyterhub_volumes/web/pages/${pdfname}"
+            dirdest=$(dirname "$dest")
+            mkdir -p "$dirdest"
+            cp "$pdfname" "$dest"
+        done
+        python3 /workspace/tools/generate_pdf_galleries.py
+        python3 /workspace/tools/generate_pages_json.py
+    '
 }
 
 build_obidoc() {
@@ -203,7 +390,7 @@ build_obidoc() {
         -j 8 \
         https://github.com/metabarcoding/obitools4-doc.git
     cd obitools4-doc
-    hugo -D build --baseURL "/obidoc/"
+    hugo --gc --minify --buildDrafts --baseURL "/obidoc/"
     mkdir -p /workspace/jupyterhub_volumes/web/obidoc
     rm -rf /workspace/jupyterhub_volumes/web/obidoc/*
     mv public/* /workspace/jupyterhub_volumes/web/obidoc/
@@ -212,32 +399,11 @@ build_obidoc() {
     '
 }
 
-build_website() {
-    echo ""
-    echo -e "${BLUE}Building web site (in builder container)...${NC}"
-    run_in_builder '
-        set -e
-        echo "-> Detecting and installing R dependencies..."
-        Rscript /workspace/tools/install_quarto_deps.R /workspace/web_src
-
-        echo "-> Rendering Quarto site..."
-        cd /workspace/web_src
-        quarto render
-        find . -name "*.pdf" -print | while read pdfname; do
-            dest="/workspace/jupyterhub_volumes/web/pages/${pdfname}"
-            dirdest=$(dirname "$dest")
-            mkdir -p "$dirdest"
-            cp "$pdfname" "$dest"
-        done
-        python3 /workspace/tools/generate_pdf_galleries.py
-        python3 /workspace/tools/generate_pages_json.py
-    '
-}
-
 start_stack() {
     echo ""
     echo -e "${BLUE}Starting JupyterHub...${NC}"
-    docker-compose up -d --remove-orphans
+    HUB_IMAGE="$HUB_IMAGE" STUDENT_IMAGE="$STUDENT_IMAGE" \
+        ${dockercompose} up -d --remove-orphans
 
     echo ""
     echo -e "${YELLOW}Waiting for JupyterHub to start...${NC}"
@@ -246,13 +412,19 @@ start_stack() {
 
 print_success() {
     if docker ps | grep -q jupyterhub; then
+        local version
+        version=$(read_version 2>/dev/null || echo "?")
         echo ""
-        echo -e "${GREEN}JupyterHub is running!${NC}"
+        echo -e "${GREEN}JupyterHub is running! (version ${version})${NC}"
         echo ""
         echo "-------------------------------------------"
         echo -e "${GREEN}JupyterHub available at: http://localhost:8888${NC}"
         echo "-------------------------------------------"
         echo ""
+        echo "Images in use:"
+        echo "  Hub:     ${HUB_IMAGE}"
+        echo "  Student: ${STUDENT_IMAGE}"
+        echo ""
         echo "Password: metabar2025"
         echo "Students can connect with any username"
         echo ""
@@ -268,17 +440,49 @@ print_success() {
         echo "  - work/course/R_packages/ : shared R packages by prof (read-only)"
         echo "  - work/course/bin/ : shared executables (in PATH)"
         echo ""
-        echo "To view logs: docker-compose logs -f jupyterhub"
-        echo "To stop: docker-compose down"
+        echo "To view logs: ${dockercompose} logs -f jupyterhub"
+        echo "To stop: ${dockercompose} down"
         echo ""
     else
         echo ""
         echo -e "${YELLOW}JupyterHub container doesn't seem to be starting${NC}"
-        echo "Check logs with: docker-compose logs jupyterhub"
+        echo "Check logs with: ${dockercompose} logs jupyterhub"
         exit 1
     fi
 }
 
+# ---------------------------------------------------------------------------
+# Setup volume directories
+# ---------------------------------------------------------------------------
+
+echo "Starting JupyterHub for Lab"
+echo "=============================="
+echo ""
+
+echo -e "${BLUE}Building the volume directories...${NC}"
+pushd "${SCRIPT_DIR}/jupyterhub_volumes" >/dev/null
+mkdir -p caddy/data
+mkdir -p caddy/config
+mkdir -p course/bin
+mkdir -p course/R_packages
+mkdir -p jupyterhub
+mkdir -p shared
+mkdir -p users
+mkdir -p web/obidoc
+mkdir -p builder/R_packages
+popd >/dev/null
+
+pushd "${DOCKER_DIR}" >/dev/null
+
+if [ ! -f "Dockerfile" ] || [ ! -f "docker-compose.yml" ]; then
+    echo "Error: Run this script from the OBIJupyterHub directory"
+    exit 1
+fi
+
+# ---------------------------------------------------------------------------
+# Main flow
+# ---------------------------------------------------------------------------
+
 if $STOP_SERVER; then
     stop_stack
     popd >/dev/null
@@ -286,15 +490,42 @@ if $STOP_SERVER; then
|
|||||||
fi
|
fi
|
||||||
|
|
||||||
if $UPDATE_LECTURES; then
|
if $UPDATE_LECTURES; then
|
||||||
|
if $LOCAL_BUILD; then
|
||||||
build_builder_image
|
build_builder_image
|
||||||
|
elif ! $NO_BUILD; then
|
||||||
|
docker pull "$BUILDER_IMAGE" 2>/dev/null \
|
||||||
|
|| echo -e "${YELLOW}Could not pull builder image, using local cache.${NC}"
|
||||||
|
fi
|
||||||
build_website
|
build_website
|
||||||
popd >/dev/null
|
popd >/dev/null
|
||||||
exit 0
|
exit 0
|
||||||
fi
|
fi
|
||||||
|
|
||||||
|
if $UPDATE_OBIDOC; then
|
||||||
|
if $LOCAL_BUILD; then
|
||||||
|
build_builder_image
|
||||||
|
elif ! $NO_BUILD; then
|
||||||
|
docker pull "$BUILDER_IMAGE" 2>/dev/null \
|
||||||
|
|| echo -e "${YELLOW}Could not pull builder image, using local cache.${NC}"
|
||||||
|
fi
|
||||||
|
BUILD_OBIDOC=true
|
||||||
|
build_obidoc
|
||||||
|
popd >/dev/null
|
||||||
|
exit 0
|
||||||
|
fi
|
||||||
|
|
||||||
stop_stack
|
stop_stack
|
||||||
|
|
||||||
|
if $PUBLISH; then
|
||||||
|
publish_images
|
||||||
|
pull_images # pull the freshly published images into the local daemon
|
||||||
|
elif $LOCAL_BUILD; then
|
||||||
build_builder_image
|
build_builder_image
|
||||||
build_images
|
build_images
|
||||||
|
else
|
||||||
|
pull_images # default: pull from registry
|
||||||
|
fi
|
||||||
|
|
||||||
build_website
|
build_website
|
||||||
build_obidoc
|
build_obidoc
|
||||||
start_stack
|
start_stack
|
||||||
|
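The new default flow pulls prebuilt images instead of building them, and `docker pull … || echo …` keeps a failed pull non-fatal so the script can fall back to whatever is in the local image cache. A minimal sketch of that short-circuit pattern, using a hypothetical `fetch` stand-in that always fails so no Docker daemon is needed:

```shell
# Sketch of the non-fatal pull pattern from the diff above. `fetch` is a
# stand-in that always fails; the real script calls
# `docker pull "$BUILDER_IMAGE"` here.
fetch() { false; }

fetch "example/image:latest" 2>/dev/null \
    || echo "Could not pull example/image:latest, using local cache."

echo "continuing startup"   # reached either way
```

Because `||` only fires on a nonzero exit status, the fallback message doubles as the guard: execution continues either way, which is what lets the stack start from cached images when the registry is unreachable.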
+108 -25
@@ -1,17 +1,15 @@
 #!/usr/bin/env Rscript
-# Script to dynamically detect and install R dependencies from Quarto files
-# Uses the {attachment} package to scan .qmd files for library()/require() calls
+# Script to dynamically detect and install R dependencies from Quarto files.
+# Scans library()/require() calls and remotes::install_git/github() calls.
 
 args <- commandArgs(trailingOnly = TRUE)
 quarto_dir <- if (length(args) > 0) args[1] else "."
 
-# Target library for installing packages (the mounted volume)
 target_lib <- "/usr/local/lib/R/site-library"
 
 cat("Scanning Quarto files in:", quarto_dir, "\n")
 cat("Target library:", target_lib, "\n")
 
-# Find all .qmd files
 qmd_files <- list.files(
   path = quarto_dir,
   pattern = "\\.qmd$",
@@ -26,34 +24,119 @@ if (length(qmd_files) == 0) {
 
 cat("Found", length(qmd_files), "Quarto files\n")
 
-# Extract dependencies using attachment
-deps <- attachment::att_from_rmds(qmd_files, inline = TRUE)
-
-if (length(deps) == 0) {
-  cat("No R package dependencies detected.\n")
-  quit(status = 0)
+# Extract package names from library()/require() calls
+extract_cran_packages <- function(files) {
+  pattern <- "(?:library|require)\\s*\\(\\s*['\"]?([A-Za-z0-9._]+)['\"]?"
+  pkgs <- character(0)
+  for (f in files) {
+    lines <- tryCatch(readLines(f, warn = FALSE), error = function(e) character(0))
+    m <- regmatches(lines, gregexpr(pattern, lines, perl = TRUE))
+    hits <- unlist(m)
+    if (length(hits) > 0) {
+      extracted <- sub(
+        "(?:library|require)\\s*\\(\\s*['\"]?([A-Za-z0-9._]+)['\"]?.*",
+        "\\1", hits, perl = TRUE
+      )
+      pkgs <- c(pkgs, extracted)
+    }
+  }
+  unique(pkgs)
 }
 
-cat("\nDetected R packages:\n")
-cat(paste(" -", deps, collapse = "\n"), "\n\n")
+# Extract git/github URLs from remotes::install_git/github() calls
+extract_git_packages <- function(files) {
+  # Matches remotes::install_git('url') or remotes::install_github('user/repo')
+  pattern <- "remotes::install_(git|github)\\s*\\(\\s*['\"]([^'\"]+)['\"]"
+  result <- list()
+  for (f in files) {
+    lines <- tryCatch(readLines(f, warn = FALSE), error = function(e) character(0))
+    text <- paste(lines, collapse = "\n")
+    m <- gregexpr(pattern, text, perl = TRUE)
+    hits <- regmatches(text, m)[[1]]
+    for (hit in hits) {
+      type <- sub("remotes::install_(git|github).*", "\\1", hit, perl = TRUE)
+      url <- sub("remotes::install_(?:git|github)\\s*\\(\\s*['\"]([^'\"]+)['\"].*",
+                 "\\1", hit, perl = TRUE)
+      result[[length(result) + 1]] <- list(type = type, url = url)
+    }
+  }
+  result
+}
+
+cran_deps <- extract_cran_packages(qmd_files)
+git_deps <- extract_git_packages(qmd_files)
+
+# Quarto's implicit runtime dependencies — must be in target_lib (the persistent
+# volume), not just somewhere in libPaths, because Quarto spawns its own R session.
+quarto_required <- c("rmarkdown", "knitr")
+if (length(git_deps) > 0) quarto_required <- c(quarto_required, "remotes")
+
+cat("\nDetected CRAN packages:\n")
+cat(paste(" -", unique(c(quarto_required, cran_deps)), collapse = "\n"), "\n")
+
+if (length(git_deps) > 0) {
+  cat("\nDetected git/github packages:\n")
+  for (d in git_deps) cat(" -", d$type, ":", d$url, "\n")
+}
+cat("\n")
+
+# --- Install CRAN packages ---
 
-# Filter out base R packages that are always available
 base_pkgs <- rownames(installed.packages(priority = "base"))
-deps <- setdiff(deps, base_pkgs)
 
-# Check which packages are not installed
+# quarto_required: check only in target_lib so they are guaranteed to be there
+installed_in_target <- rownames(installed.packages(lib.loc = target_lib))
+quarto_missing <- setdiff(quarto_required, c(base_pkgs, installed_in_target))
+
+# other deps: check anywhere in libPaths (they just need to be loadable)
+cran_deps <- setdiff(cran_deps, c(base_pkgs, quarto_required))
 installed <- rownames(installed.packages())
-to_install <- setdiff(deps, installed)
+to_install <- unique(c(quarto_missing, setdiff(cran_deps, installed)))
 
 if (length(to_install) == 0) {
-  cat("All required packages are already installed.\n")
+  cat("All CRAN packages already installed.\n")
 } else {
-  cat("Installing missing packages:", paste(to_install, collapse = ", "), "\n\n")
-  install.packages(
-    to_install,
-    lib = target_lib,
-    repos = "https://cloud.r-project.org/",
-    dependencies = TRUE
-  )
-  cat("\nPackage installation complete.\n")
+  cat("Installing CRAN packages:", paste(to_install, collapse = ", "), "\n\n")
+  failed <- character(0)
+  for (pkg in to_install) {
+    result <- tryCatch({
+      withCallingHandlers(
+        install.packages(pkg, lib = target_lib, repos = "https://cloud.r-project.org/",
+                         dependencies = TRUE, quiet = FALSE),
+        warning = function(w) {
+          if (grepl("not available", conditionMessage(w))) invokeRestart("muffleWarning")
         }
+      )
+      if (!requireNamespace(pkg, quietly = TRUE)) "unavailable" else "ok"
+    }, error = function(e) "error")
+
+    if (result %in% c("unavailable", "error")) {
+      cat("  [SKIP]", pkg, "- not available on CRAN\n")
+      failed <- c(failed, pkg)
+    } else {
+      cat("  [OK]", pkg, "\n")
+    }
+  }
+  if (length(failed) > 0)
+    cat("\nNot installed (not on CRAN):", paste(failed, collapse = ", "), "\n")
+}
+
+# --- Install git/github packages ---
+
+if (length(git_deps) > 0) {
+  cat("\nInstalling git/github packages...\n")
+  for (d in git_deps) {
+    tryCatch({
+      if (d$type == "git") {
+        remotes::install_git(d$url, lib = target_lib, upgrade = "never")
+      } else {
+        remotes::install_github(d$url, lib = target_lib, upgrade = "never")
+      }
+      cat("  [OK]", d$url, "\n")
+    }, error = function(e) {
+      cat("  [FAIL]", d$url, "-", conditionMessage(e), "\n")
+    })
+  }
+}
+
+cat("\nDependency installation complete.\n")
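The `extract_cran_packages()` helper added in this commit is, at heart, one regular expression run over the course's .qmd sources. The same match can be sketched in shell against a hypothetical sample input (the real script does this in R with `gregexpr()` and `sub()`):

```shell
# Feed a hypothetical chunk through the library()/require() matcher,
# then strip the call prefix to leave bare package names.
printf 'library(dplyr)\nrequire("ggplot2")\nx <- 1\n' \
    | grep -oE "(library|require)[[:space:]]*\([[:space:]]*['\"]?[A-Za-z0-9._]+" \
    | sed -E "s/^(library|require)[[:space:]]*\([[:space:]]*['\"]?//" \
    | sort -u
# prints:
#   dplyr
#   ggplot2
```

False positives such as `library(stats)` are harmless here: the script later subtracts `installed.packages(priority = "base")`, so base-R packages are never reinstalled.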
@@ -0,0 +1 @@
+0.1.0