Storage

This is probably the trickiest part. CoCalc requires a filesystem supporting the ReadWriteMany access mode (short: "RWX") with volumeMode: "Filesystem" (not block storage!). It is used to share data across the pods in the cluster. This could be an NFS server, but depending on what you already have or which public cloud provider you use, there are various options.
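
For illustration, a minimal PersistentVolumeClaim requesting such a volume could look like the following sketch – the storage class name nfs and the requested size are assumptions and depend on your cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: projects-data
spec:
  accessModes:
    - ReadWriteMany        # RWX: shared across all pods
  volumeMode: Filesystem   # a filesystem, not block storage
  storageClassName: nfs    # assumption: an RWX-capable storage class
  resources:
    requests:
      storage: 100Gi       # assumption: pick a size for your deployment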

Note

CoCalc OnPrem primarily needs to know the names of two PersistentVolumeClaims with specific purposes: Projects Data and Projects Software. The two PVCs must also have distinct names!

See Deployment/Storage for how to configure this.

Note

For security reasons, users in their projects run with the UID 2001. Therefore, the filesystem's permissions must be set accordingly. See the note about permissions for more information.
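
For example, a one-off Job along the lines of the following sketch could fix the ownership of an existing volume – the image and the mount path are illustrative assumptions:

apiVersion: batch/v1
kind: Job
metadata:
  name: fix-projects-permissions
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: chown
        image: busybox               # any image providing chown works
        command: ["chown", "-R", "2001:2001", "/data"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: projects-data   # the PVC holding project data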

Projects Data

The projects-data PVC stores all files of CoCalc projects. There are two main directories (see the layout sketch after this list):

  • projects/[uuid]: each project’s data is mounted as a subdirectory with the project’s UUID.

  • shares/[uuid]: data shared from a project is copied over to that directory.

  • If necessary, the global.shared_files_path setting configures the path schema for these shared files (note: this is not well tested). The idea is to make it possible to use a different storage backend for these particular files.
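
For illustration, the resulting layout on the volume might look like this (UUIDs shortened, the names below are made up):

projects/
  09c119f5-.../     # home directory of one project
  3ac17a0a-.../     # home directory of another project
shares/
  a1b2c3d4-.../     # published files copied from a project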

Projects Software

The projects-software PVC is shared across all project pods. The idea is to share data and software globally. It mounts global files with read-only access in /ext. (Use the $EXT environment variable from within a project to resolve that directory).
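
In a project pod this corresponds to a read-only mount of that PVC – a sketch analogous to the home mount shown under "Under the hood" below; the claim name is whatever you configured:

# in the Pod spec:
volumes:
- name: ext
  persistentVolumeClaim:
    claimName: projects-software
# in the container spec:
volumeMounts:
- mountPath: /ext
  name: ext
  readOnly: true    # global software is read-only by default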

Write Access

Via a special License, some projects can get read/write access to this global software directory. Such licenses have the ext_rw quota enabled.

Then hand the license ID to the user, who has to add it to their project, or add the license to the project directly as an admin.

The next time the project starts, the readOnly flag of that particular volume in the underlying Pod will be false, and everyone with access to this project will be able to modify the files in /ext.
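
In other words, for a project holding such a license the same mount is created with the flag flipped (volume name as in the sketch above):

volumeMounts:
- mountPath: /ext
  name: ext
  readOnly: false   # ext_rw quota: /ext becomes writable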

See Global Software about how to make full use of this.

Install NFS Server

If you need to set up a flexible NFS server and storage provisioner, you can use the Helm chart nfs-ganesha-server-and-external-provisioner. There are notes about installing it in GKE/NFS Server.
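
As a rough sketch, a values file for that chart could define the NFS server's backing disk and the storage class it provisions – verify the keys against the chart's own documentation; the class names and sizes here are assumptions:

persistence:
  enabled: true
  storageClass: standard   # backing disk of the NFS server itself
  size: 200Gi              # assumption: total space for all projects
storageClass:
  name: nfs                # RWX storage class consumed by the PVCs above
  defaultClass: false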

Under the hood

For a project, Kubernetes sees the following volume definition – equivalent configurations appear in hub-next, manage-copy/-share, …:

volumes:
- name: home
  persistentVolumeClaim:
    claimName: projects-data
[...]

and mounts it via:

volumeMounts:
- mountPath: /home/user
  name: home
  subPath: projects/09c119f5-....
[...]

Here, projects-data is bound to a volume with access mode RWX and storage class nfs.