MinIO Deployment to Bare-Metal Kubernetes Clusters - Part 1

So if you don't know what MinIO is, I'd suggest reading through my other post here or MinIO's own site here.

What am I doing?

I have a K8s cluster running on bare metal (VMware) and no access to public cloud providers, but I want a blob store for my Prometheus-Stack, where I'll be collecting logs from different K8s clusters and different environments in one central location. Luckily, Grafana's Loki can use S3 as its backend storage, so I use MinIO to run my own private S3.

Bit of Background on the Setup

So I want to build this to a production standard, as it will be used for the business. My K8s setup is 2 control nodes and 2 worker nodes, and I have a "Control Cluster" and an "App Cluster".

Basically, the "Control Cluster" has all the services it takes to run the platform, e.g. ArgoCD, Prometheus-Stack, etc.

The "App Cluster" has all the business applications that the business builds using a microservice architecture.

As you saw, we have ArgoCD deploying our Helm charts, and I will be deploying MinIO this way and NOT using the MinIO Kubernetes plugin!

MinIO Operator

So there are two parts to this. The "Operator" is the bad boy that runs everything and gives us all our interfaces, i.e. APIs, CRDs, the console, etc.

Second is the MinIO Tenant. Tenants are a storage layer above buckets: groups of users with their own sets of buckets and their own pods handling storage.

In part one of this blog we're going to look at deploying the Operator; from my testing so far, this is the easy part!

Helm Chart

I found it a bit frustrating as I couldn't find the official Helm charts on Artifact Hub (that said, I am pretty blind, so I could be wrong).

So I just used what I could find from the GitHub repo. My suggestion is to do the same: while you can follow along with what I have, it doesn't mean that it's up to date or the best!

First I want to create the Helm chart in my Charts folder:

helm-charts/minio-operator/Chart.yaml

apiVersion: v2
name: minio-operator
home: https://min.io
icon: https://min.io/resources/img/logo/MINIO_wordmark.png
version: 0.0.0
appVersion: 0.0.0
type: application
description: A Helm chart for MinIO Operator

dependencies:
  - name: operator
    repository: https://operator.min.io/
    version: 5.0.6

So basically all I am doing is telling Helm to use the operator chart from that repository as a dependency. At the moment I don't need any custom resources, so it will be just this, no templates.
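ArgoCD will build that dependency itself when it renders the chart, but if you want to sanity-check it locally first, something like this (run from the repo root) will pull the operator chart down:

# Pull the operator chart into helm-charts/minio-operator/charts/
# and write a Chart.lock pinning version 5.0.6
helm dependency update helm-charts/minio-operator

# Confirm the dependency resolved at the expected version
helm dependency list helm-charts/minio-operator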

Next, I want to create a values file. I am deploying to a pre-prod environment first like a good boy, so I create the following:

helm-charts/minio-operator/staging/values-control.yaml

operator:
  operator:
    env: []
    replicaCount: 2

  console:
    replicaCount: 1
    ingress:
      enabled: true
      ingressClassName: "nginx"
      labels: {}
      annotations:
        kubernetes.io/ingress.class: nginx
        cert-manager.io/cluster-issuer: letsencrypt-staging
      tls:
        - secretName: minio-general-tls
          hosts:
            - minio.staging.somedomain.io
      host: minio.staging.somedomain.io
      path: /
      pathType: Prefix

The thing to note here: these values are for the chart listed as the dependency. This values file belongs to the wrapper chart, so the dependency's values have to be nested under operator, which is the name of the dependency chart.

All we are doing here is setting the number of replicas and enabling the console to be accessible through the nginx ingress controller.
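Before committing, I like to render the chart locally to make sure the values are actually being picked up. A quick sanity check, assuming you've run the dependency update above:

# Render the chart with the staging values and pull out the Ingress
helm template minio-operator helm-charts/minio-operator \
  -f helm-charts/minio-operator/staging/values-control.yaml \
  | grep -B2 -A12 'kind: Ingress'

If the host and TLS secret show up in the output, the nesting under operator is correct.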

ArgoCD Deployment

The last thing to do is set up the ArgoCD ApplicationSet that will manage and deploy everything.  So I create:

argocd-apps/staging/minio.yaml

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: minio-operator
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-options: Prune=false
    argocd.argoproj.io/sync-wave: "1"
spec:
  generators:
    - list:
        elements:
          - name: minio-operator
            path: helm-charts/minio-operator
            namespace: minio-operator
  template:
    metadata:
      name: "{{name}}-control"
      annotations:
        argocd.argoproj.io/sync-wave: "1"
    spec:
      project: default
      source:
        repoURL: https://github.com/wonderphil/helm.git
        targetRevision: main
        path: "{{path}}"
        helm:
          ignoreMissingValueFiles: false
          valueFiles:
            - "staging/values-control.yaml"
      destination:
        server: "https://kubernetes.default.svc"
        namespace: "{{namespace}}"
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
          - ServerSideApply=true

I do a git commit and push, and my ArgoCD instance will then start to sync the Helm chart.
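If you have the argocd CLI logged in, you can watch the sync from the terminal too. The app name below comes from the {{name}}-control template above:

# Show the app's sync and health status
argocd app get minio-operator-control

# Or block until it reports healthy
argocd app wait minio-operator-control --health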

Let's Check

Now that ArgoCD has deployed, let's check it's all running. The first thing I like to do is make sure the pods are all up and running, then that the Ingress has been built, and finally go to the console.

Pods:
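The CLI equivalent, assuming the minio-operator namespace from the ApplicationSet:

kubectl get pods -n minio-operator
# Expect the two operator replicas and the console pod, all Running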

Ingress:
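Again, the CLI equivalent:

kubectl get ingress -n minio-operator
# The console ingress should list minio.staging.somedomain.io as its host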

Console:
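And a quick reachability check from the terminal (the Let's Encrypt staging issuer gives an untrusted cert, hence the -k):

curl -kI https://minio.staging.somedomain.io
# A 200 or a redirect to the login page means the console is being served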

Logging into the Console

I have to say, the first few times I tested this out, it didn't create the secret that houses the JWT token that's required for login. Not sure why, but it just started to work; it might be a bug in a version of something I was using.

But once you're at the console like above, you need to get the JWT from the secret in K8s. I use OpenLens, which is a great little UI tool for your K8s clusters, and this is what it would look like from Lens:

Don't forget it's base64-encoded and needs to be decoded first. Once you have that, pop it in the console login page and away you go:
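If you'd rather skip Lens, here's the same thing from the CLI. Note that the secret name (console-sa-secret, which is what my operator v5 install created) may vary between operator versions:

# Grab the console JWT and decode it in one go
kubectl -n minio-operator get secret console-sa-secret \
  -o jsonpath='{.data.token}' | base64 -d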

What's Next

In the next post I am going to go into how to add Tenants; again, this will be done using Helm and ArgoCD, not the console.