This chart bootstraps a scalable jitsi-meet deployment on Kubernetes, with multiple shards and multiple JVBs per shard. It is tested on Google GKE and DigitalOcean DOKS clusters, versions 1.20 and 1.21.

## Prerequisites

- A Kubernetes cluster (>= 1.17 — untested on earlier versions) with publicly accessible nodes for JVB.
- Firewall rules opened for JVB's port range (for example 30000-30xxx, UDP).

## Architecture

A more detailed explanation of the system architecture with multiple shards can be found in the original architecture documentation; I only made some modifications to fit more general cases. The original deployment uses kustomize to deploy all components and requires some manual copy-paste steps, which I don't really like. Its approach relies on metacontroller to provision a Service per JVB pod and expose that Service with a NodePort. I think that architecture is awesome, but the documentation on the metacontroller part is lacking, so a lot of people get stuck there. My approach is simpler: use Helm to provision (if you disable autoscaling) or pre-provision (with autoscaling enabled) all of JVB's Services and ingresses (for media traffic and colibri websockets). Of course, you can bring your own implementation and disable mine.

Each JVB NodePort is calculated from `shard-<N>.jvbBasePort` plus the pod's index in the StatefulSet. You can predict the ports like this: with `shard-0.jvbBasePort = 30000`, `shard-0-jvb-0` will expose 30000 and `shard-0-jvb-1` will expose 30001. So, in order to make JVB work, you need to open ports 30000-300xx with the UDP protocol on the worker nodes.

## Installing

```shell
cp jitsi-scalable-helm/values.yaml values-custom.yaml
jitsi-scalable-helm/scripts/generate_password.sh values-custom.yaml
helm upgrade --install -f values-custom.yaml easyjitsi jitsi-scalable-helm
```
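To make the "one Service per JVB pod, exposed via NodePort" idea concrete, here is a rough shape such a pre-provisioned Service could take. This is a hypothetical sketch, not the chart's actual template — the names, port numbers, and field values are assumptions; only the `statefulset.kubernetes.io/pod-name` selector label is standard Kubernetes behavior for StatefulSet pods:

```yaml
# Hypothetical per-pod Service for shard-0-jvb-0 (names and ports assumed):
apiVersion: v1
kind: Service
metadata:
  name: shard-0-jvb-0
spec:
  type: NodePort
  selector:
    # Kubernetes sets this label on each StatefulSet pod automatically,
    # so the Service targets exactly one JVB pod.
    statefulset.kubernetes.io/pod-name: shard-0-jvb-0
  ports:
    - name: media
      protocol: UDP
      port: 30000
      targetPort: 30000
      nodePort: 30000   # jvbBasePort + pod ordinal 0
```

Pinning `nodePort` explicitly is what makes the exposed ports predictable enough to open firewall rules for in advance.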
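The NodePort rule described above (base port plus StatefulSet pod ordinal) can be sketched as a small shell helper. This is an illustration only — `jvb_node_port` is a hypothetical function, not part of the chart or its scripts:

```shell
#!/bin/sh
# Sketch (not from the chart): compute a JVB's NodePort as
#   port = shard's jvbBasePort + the pod's ordinal in its StatefulSet.
jvb_node_port() {
  base_port="$1"   # e.g. shard-0.jvbBasePort = 30000
  pod_name="$2"    # e.g. shard-0-jvb-1
  # StatefulSet pod names end with the pod's ordinal index.
  ordinal="${pod_name##*-}"
  echo $((base_port + ordinal))
}

jvb_node_port 30000 shard-0-jvb-0
jvb_node_port 30000 shard-0-jvb-1
```

With `jvbBasePort = 30000`, the two calls print 30000 and 30001, matching the `shard-0-jvb-0` / `shard-0-jvb-1` example above; the highest port you must open grows with the number of JVB replicas per shard.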