Update documentation to reference renamed jupyterhub-home-nfs chart
sgibson91 committed Oct 16, 2024
1 parent 255c5ab commit ed38919
Showing 2 changed files with 10 additions and 11 deletions.
docs/howto/features/storage-quota.md (17 changes: 8 additions & 9 deletions)
``````diff
@@ -26,18 +26,17 @@ ebs_volumes = {
 
 This will create a disk with a size of 100GB for the `staging` hub that we can reference when configuring the NFS server.
 
-## Enabling jupyter-home-nfs
+## Enabling jupyterhub-home-nfs
 
-To be able to configure per-user storage quotas, we need to run an in-cluster NFS server using [`jupyter-home-nfs`](https://github.com/sunu/jupyter-home-nfs). This can be enabled by setting `jupyter-home-nfs.enabled` to `true` in the hub's values file.
+To be able to configure per-user storage quotas, we need to run an in-cluster NFS server using [`jupyterhub-home-nfs`](https://github.com/sunu/jupyterhub-home-nfs). This can be enabled by setting `jupyterhub-home-nfs.enabled` to `true` in the hub's values file.
 
-jupyter-home-nfs expects a reference to a pre-provisioned disk. Here's an example of how to configure that on AWS and GCP.
+jupyterhub-home-nfs expects a reference to a pre-provisioned disk. Here's an example of how to configure that on AWS and GCP.
 
 `````{tab-set}
 ````{tab-item} AWS
 :sync: aws-key
 ```yaml
-jupyter-home-nfs:
+jupyterhub-home-nfs:
   enabled: true
   eks:
     enabled: true
``````
``````diff
@@ -48,7 +47,7 @@
 ````{tab-item} GCP
 :sync: gcp-key
 ```yaml
-jupyter-home-nfs:
+jupyterhub-home-nfs:
   enabled: true
   gke:
     enabled: true
``````
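For reference, a complete values snippet for an AWS hub under the new chart name would look roughly like the sketch below. The `volumeId` is a hypothetical placeholder, not a value from this commit; the real ID comes from the EBS volume provisioned via Terraform.

```yaml
jupyterhub-home-nfs:
  enabled: true
  eks:
    enabled: true
    # Hypothetical placeholder: use the ID of the pre-provisioned EBS volume
    volumeId: vol-0123456789abcdef0
```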
``````diff
@@ -63,7 +62,7 @@ These changes can be deployed by running the following command:
 deployer deploy <cluster_name> <hub_name>
 ```
 
-Once these changes are deployed, we should have a new NFS server running in our cluster through the `jupyter-home-nfs` Helm chart. We can get the IP address of the NFS server by running the following commands:
+Once these changes are deployed, we should have a new NFS server running in our cluster through the `jupyterhub-home-nfs` Helm chart. We can get the IP address of the NFS server by running the following commands:
 
 ```bash
 # Authenticate with the cluster
``````
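The retrieval commands themselves are cut off at this hunk boundary. As a sketch of the flow, assuming the deployer CLI's `use-cluster-credentials` subcommand and a plain `kubectl` service listing (service names vary by deployment):

```bash
# Authenticate with the cluster (assumes the deployer CLI is installed)
deployer use-cluster-credentials <cluster_name>

# List services in the hub's namespace; the NFS server's ClusterIP appears here
kubectl --namespace <hub_name> get svc
```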
``````diff
@@ -120,10 +119,10 @@ deployer deploy <cluster_name> <hub_name>
 
 Now we can set quotas for each user and configure the path to monitor for storage quota enforcement.
 
-This can be done by updating `basehub.jupyter-home-nfs.quotaEnforcer` in the hub's values file. For example, to set a quota of 10GB for all users on the `staging` hub, we would add the following to the hub's values file:
+This can be done by updating `basehub.jupyterhub-home-nfs.quotaEnforcer` in the hub's values file. For example, to set a quota of 10GB for all users on the `staging` hub, we would add the following to the hub's values file:
 
 ```yaml
-jupyter-home-nfs:
+jupyterhub-home-nfs:
   quotaEnforcer:
     hardQuota: "10" # in GB
     path: "/export/staging"
``````
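Once deployed, one way to sanity-check the quota setup is to inspect usage on the export path inside the NFS server pod. The pod name below is a placeholder; look it up first:

```bash
# Find the NFS server pod in the hub's namespace
kubectl --namespace staging get pods

# Check disk usage on the export directory (pod name is a placeholder)
kubectl --namespace staging exec -it <nfs-pod-name> -- df -h /export
```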
terraform/aws/variables.tf (4 changes: 2 additions & 2 deletions)
``````diff
@@ -304,7 +304,7 @@ variable "ebs_volumes" {
   description = <<-EOT
     Deploy one or more AWS ElasticBlockStore volumes.
 
-    This provisions a managed EBS volume that can be used by jupyter-home-nfs server
-    to store home directories for users.
+    This provisions a managed EBS volume that can be used by jupyterhub-home-nfs
+    server to store home directories for users.
   EOT
 }
``````
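The hunk above only rewraps the variable's description text. For context, an `ebs_volumes` entry matching the 100GB `staging` disk referenced in the docs might look like the following sketch; the field names are assumptions about the variable's object type, not taken from this diff:

```hcl
ebs_volumes = {
  "staging" = {
    size        = 100       # in GB
    type        = "gp3"     # assumed EBS volume type
    name_suffix = "staging" # appended to the volume's name
    tags        = {}
  }
}
```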
