.. _setup:

Setup
=====

This is what you need to have in place first, in order to :doc:`deploy CoCalc `.

Introduction
------------

To ensure successful management of your service, it is important to have some understanding of the following tools:

* A :term:`Kubernetes` cluster and experience in managing it. This guide provides information on how to manage the service, respond to issues, plan resource requirements, and scale the various services to meet your usage needs.
* Some experience working with :term:`HELM charts`.
* For the two above, you should know at least a bit of :term:`YAML`.
* A (sub)domain that points to the service, as well as a TLS certificate (:term:`Let's Encrypt` is a possibility).
* A standard :term:`PostgreSQL` database.
* For storage, a network filesystem like NFS – which supports :term:`ReadWriteMany` – is necessary to hold the data of all projects.
* A standard e-mail service for sending notifications and password resets (:term:`SMTP server`).
* Familiarity with CoCalc itself, as covered in the `CoCalc Documentation`_, may be necessary, since your users may have questions.

.. _CoCalc Documentation: https://doc.cocalc.com/

This page gives you an overview of compatibility and prerequisites. Further setup steps are described in the following pages:

.. toctree::
   :maxdepth: 1

   setup/helm
   setup/namespace
   setup/registry
   setup/database
   setup/networking
   setup/storage
   setup/nodes

Compatibility
-------------

The table below gives a rough overview of the compatibility of this setup. These version numbers were last updated on **2023-11-13** for version ``2.11.4``. In general, this is just a recent collection of known versions – other versions, especially older ones, should work as well.

Beyond this list, adjacent to the ``/cocalc`` directory in this repository, there are notes and config file examples. Please check each directory for the latest information.

.. list-table::
   :header-rows: 1
   :widths: 20 80

   * - Cluster
     - Details
   * - :term:`Kubernetes`
     - The primary cluster used to test this setup runs on GKE version ``1.26`` (see :doc:`GKE Setup `).
       Other recent versions below that should work fine as well
       (a quick way to check your cluster's version follows below this table).
   * - :term:`GKE`
     - This setup runs on Google Kubernetes Engine in Google's GCP cloud: :doc:`deploy/gke`
   * - :term:`EKS`
     - It also runs on Amazon's Elastic Kubernetes Service in their AWS cloud: :doc:`deploy/eks`
   * - :term:`AKS`
     - Microsoft's Azure Kubernetes Service in their Azure cloud works as well: :doc:`deploy/aks`
   * - :term:`Minikube`
     - Local Kubernetes cluster, for testing and experimentation, currently not maintained.
       See the example ``/cocalc/minikube.yaml`` – :term:`YMMV`!
   * - :term:`Bare metal`
     - Bare metal clusters, created by `kubeadm `_, using :term:`MetalLB` as a :term:`LoadBalancer`.
       There is a broad spectrum of possible cluster types and configurations, and setting up a cluster is beyond the scope of this document.
       Please read the remainder of the requirements to see if your setup is compatible.
       From the point of view of CoCalc OnPrem, there are no complex requirements for the cluster:
       mainly, it must support storing data, have network access, and support a :term:`LoadBalancer` and :term:`Ingress` rules.
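If you are unsure which Kubernetes version your cluster is running, a quick check from a workstation whose ``kubectl`` already points at the cluster looks like this (a generic sketch, not specific to CoCalc):

.. code-block:: bash

   # Show the client and server (cluster) versions:
   kubectl version

   # List the nodes with their kubelet versions, OS images, and container runtimes:
   kubectl get nodes -o wide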
Dependencies
~~~~~~~~~~~~

.. list-table::
   :header-rows: 1
   :widths: 20 80

   * - Tool/Service
     - Details
   * - :term:`HELM charts`
     - Version ``3.14.4``
   * - :term:`Kubectl`
     - Version ``1.21`` or higher. Testing and development are done using version ``1.28``.
       Make sure to :ref:`set the version string ` in ``global.kubectl``,
       so that the :term:`kubectl` running configuration jobs roughly matches your version of Kubernetes.
   * - :term:`NGINX Ingress Controller`
     - The HELM chart ``4.8.3``, installing version ``1.9.4``, is known to work
       (a sketch of a typical installation follows below this table).
   * - :term:`Let's Encrypt`
     - As a :term:`Certificate Manager` (optional – only needed if you don't set key and certificate manually):
       the HELM chart installing version ``v1.13.2`` is known to work.
   * - :term:`NFS`
     - (optional, you can bring your own :term:`ReadWriteMany` provider):
       ``nfs-ganesha-server-and-external-provisioner`` version ``4.0.8`` is known to work –
       please check the :ref:`details at GKE/NFS `.
   * - :term:`PostgreSQL`
     - Version ``14`` is known to work.
       Version ``11`` is the minimum requirement.
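As a rough sketch of how the NGINX Ingress Controller and cert-manager are typically installed from their upstream HELM charts – the release names, namespaces, and extra values below are assumptions, so adapt them to your :doc:`networking setup <./setup/networking>`:

.. code-block:: bash

   # NGINX Ingress Controller (chart 4.8.3 installs controller 1.9.4):
   helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
   helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
     --namespace ingress-nginx --create-namespace --version 4.8.3

   # cert-manager, if you want Let's Encrypt to manage the TLS certificate:
   helm repo add jetstack https://charts.jetstack.io
   helm upgrade --install cert-manager jetstack/cert-manager \
     --namespace cert-manager --create-namespace --version v1.13.2 \
     --set installCRDs=true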
Prerequisites
-------------

.. list-table::
   :header-rows: 1
   :widths: 20 80

   * - Aspect
     - Details
   * - :term:`Kubernetes`
     - Everything runs in a single namespace. Throughout this guide this will be ``cocalc``.
   * - :term:`HELM Charts`
     - As an alternative to HELM, there is a ``kustomization.yaml`` example file for :term:`Kustomize`.
       Render the HELM charts via ``kustomize build --enable-helm . > cocalc.yaml``.
       This is not actively maintained and might be broken.
   * - **Cluster/VMs**
     - Regarding nodes, you can either start with a "minimal" setup for testing, or with the slightly more complex but recommended "partitioned" setup.
   * - * Minimal
     - 2 nodes with 2 CPU cores, 16GB of RAM, and 100GB of disk space each. This is enough to run all services with a redundancy
       of 2 and a few projects, just for getting started.
   * - * Partitioned
     - Two nodes hold the CoCalc services – same specs as above – while all other nodes get a :term:`Taint`,
       such that only CoCalc projects are allowed to run there (see the sketch below this table).
       Start with two nodes with 4 CPU cores, 32GB of RAM, and 100GB of disk space each for these tainted project nodes.
       This avoids interference beyond what container isolation provides. It also makes it easy to scale only
       the part of the cluster that runs the projects, without interfering with the workload on the "service" nodes.

       More information: :ref:`project-node-setup`.
   * - Networking
     - There are two groups of :term:`Services `, which define Ingress rules.
       Included is a standard setup of an NGINX Ingress Controller.
       It usually runs behind a :term:`LoadBalancer` provided by the cluster infrastructure.
   * - Domain or Sub-Domain
     - CoCalc also requires its own domain or sub-domain, i.e. it is currently not possible to run CoCalc under a "base path".
   * - TLS configuration
     - TLS is configured via standard Ingress TLS – it is straightforward to set up :term:`Certificate Manager`
       and :term:`Let's Encrypt` to manage this for you.
   * - Local disk
     - The nodes running the CoCalc projects need to be able to load and run Docker images larger than 10 GB.
       The software that users want to run takes up more space than usual. 50GB per node should be the lower limit,
       while at least 100GB for the nodes running projects is recommended.
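For the "partitioned" setup, a taint and a matching label are applied to each project node. The key/value pairs below are placeholders – the actual names CoCalc OnPrem expects are described in :ref:`project-node-setup` – so treat this as a minimal sketch of the mechanism only:

.. code-block:: bash

   # Placeholder taint/label names – check the project node setup page for the real ones.
   kubectl taint nodes my-project-node-1 cocalc-projects=true:NoSchedule
   kubectl label nodes my-project-node-1 cocalc-role=projects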
Next steps
----------

#. :doc:`Access HELM Charts <./setup/helm>`
#. :doc:`Create Namespace <./setup/namespace>`
#. :doc:`Access to Docker Registry <./setup/registry>`
#. :doc:`Database <./setup/database>`
#. :doc:`Networking <./setup/networking>`
#. :doc:`Storage <./setup/storage>`
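For orientation, the :doc:`namespace step <./setup/namespace>` starts with creating the single namespace mentioned in the prerequisites above – a minimal sketch, assuming you keep the name ``cocalc``; refer to that page for any additional settings:

.. code-block:: bash

   kubectl create namespace cocalc

   # Optionally make it the default namespace for subsequent kubectl commands:
   kubectl config set-context --current --namespace=cocalc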