Tanzu Kubernetes


This page describes the process of enabling the Opvizor Kubernetes integration with Tanzu 8.


1 Prerequisites

  • Cluster has either access to the public internet, or has access to a private registry for container images.
  • Typically, your cluster is equipped with an ingress and/or a load balancer. Alternatively, a node port can be set up instead (see step 7 in this document).


2 Set up credentials for your cluster

  • Log in to the Opvizor Metrics & Logs console via SSH (preferred) or the vSphere console

  • Select option 
    1) Shell

  • Start opakube:

    sudo /data/app/start_opakube.sh


  • You have now entered opakube, the Kubernetes deployment helper

  • Either run the command
    umask 077 && pico /home/admin/.kube/config
    and paste the contents of your kubeconfig into the editor, then save the file.
    Alternatively, you can copy that file onto the appliance to the path
    /home/admin/.kube/config
    (e.g. with WinSCP). Make sure it is readable only by the user, not by group or world.

  • To verify, run the command
    kubectl cluster-info
    This should show some basic info about the cluster, similar to the illustrative output below.
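
    Illustrative output (your addresses will differ):

    Kubernetes control plane is running at https://172.17.36.3:6443
    CoreDNS is running at https://172.17.36.3:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy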


2.1 Credentials for Tanzu Kubernetes cluster

There are a few additional, Tanzu-specific steps required to authenticate to a Tanzu Kubernetes cluster.


You will need kubectl-vsphere installed. The binary can be downloaded by visiting your Tanzu environment namespace in vSphere and following the “Link to CLI Tools” page.


Once you have the kubectl for vSphere, execute the following command to authenticate (replace the server address with your cluster IP):


kubectl-vsphere login --server 172.17.36.3 --tanzu-kubernetes-cluster-namespace <cluster-namespace> --tanzu-kubernetes-cluster-name <kubernetes-cluster-name> --insecure-skip-tls-verify

 

The above command should have generated a kubeconfig under ~/.kube/config with several contexts. 


The essential part now is to edit ~/.kube/config and reorder the clusters, contexts and users sections so that the details for your Tanzu cluster are at the top of every section, as sketched below.
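
For illustration, a minimal sketch of a reordered kubeconfig; all names are placeholders and your entries will differ:

apiVersion: v1
kind: Config
current-context: <kubernetes-cluster-name>
clusters:
- name: <kubernetes-cluster-name>   # Tanzu cluster entry moved to the top
  cluster:
    server: https://<tanzu-cluster-ip>:6443
    ...
- name: 172.17.36.3                 # supervisor entry stays below
  ...
contexts:
- name: <kubernetes-cluster-name>   # matching context on top as well
  ...
users:
- name: <tanzu-cluster-user>        # and the matching user entry on top
  ...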


3 Check for existing Prometheus installation

This section helps you determine whether there is an existing Prometheus installation / Prometheus stack in your cluster.

!! When you know there is no existing installation, you can continue with step 4; otherwise, work through this section, which will tell you when to continue with step 9.


3.1 Find existing Prometheus pods


Run the command

kubectl get pods -l app.kubernetes.io/name=prometheus -A

If the output of the command above is empty, continue with step 3.2.

If the output is not empty, run the following command to find out if there are already services defined.


kubectl get services -A | grep prometheus


If there are, find the service which points to the Prometheus pod. Unfortunately, there is no generic way to find it, so you need to explore the services with kubectl (example commands follow below). Once you have found the service, check if it is reachable from the appliance, e.g. with curl:

curl 198.51.100.2:9090

(Of course, replace IP and port with the endpoint you found)
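
To explore, the following kubectl commands can help; the namespace (monitoring) and service name (prometheus-server) are only examples:

# Show type, ports and selector of a candidate service
kubectl describe service -n monitoring prometheus-server

# Show the pod IPs the service actually points to
kubectl get endpoints -n monitoring prometheus-server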


When the service is reachable, run the commands

yq -i ".prometheus.ext_target = \"198.51.100.2:9090\"" ml_config.yaml
yq -i ".prometheus.ext_path = \"/\"" ml_config.yaml
yq -i ".prometheus.ext_proto = \"http\"" ml_config.yaml


!! Now continue with Step 9.


In case the service is not reachable, continue with step 3.2.


3.2 Find existing Prometheus operator


Run the command

kubectl get pods -A | grep prometheus-operator

If the output is not empty, run the command

yq -i ".prometheusOperator.enabled = false" prometheus_values.yaml


3.3 Find existing node exporter


Run the command

kubectl get pods -A | grep node-exporter

If the output is not empty, determine a port that is free on all your nodes (9540 in this example; a sketch for finding occupied ports follows below) and run the command

yq -i ".prometheus-node-exporter.service.port = 9540" prometheus_values.yaml
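
To find host ports that are already claimed by pods, the following sketch can help. It assumes jq is available on the appliance; checking directly on a node (e.g. with ss -tlnp) works as well:

# List all hostPorts requested by pods across all namespaces, numerically sorted and de-duplicated
kubectl get pods -A -o json \
  | jq -r '.items[].spec.containers[].ports[]? | select(.hostPort) | .hostPort' \
  | sort -un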


4 Create namespace


Create the namespace by executing the script

./createNamespace.sh

It will create the opakube namespace.
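
To verify, the new namespace should now be listed:

kubectl get namespace opakube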


5 Only if needed: configure private container registry


You will need this step when your cluster cannot pull container images from the internet. In this case, you will need to configure a private registry for deployment. Container images will be uploaded by opakube to this registry, so that your cluster can use them.

Note: This document does not describe the setup of a private registry, since your cluster likely already has one.

Steps:

  • When your private registry needs authentication

    • login to it with command

      docker login YOURREGISTRY

      where YOURREGISTRY is the hostname of your registry. You will be prompted for login and password.

    • Run the script ./createSecret.sh
      This will create a secret named opvregcred in the namespace opakube, created from the credentials you entered in the previous step.

  • Copy the images to the registry:

    ./exportImages.sh <your_registry_address>[/prefix]


    e.g.

    ./exportImages.sh registry.example.com/opvizor/monitoring


    <your_registry_address> is the IP, hostname or FQDN of your registry. The prefix is optional and will be prepended to the container image names. This allows you to avoid clashes with other images in your registry.

    (Note: in case you get the error “http: server gave HTTP response to HTTPS client”, you need to edit the appliance’s /etc/docker/daemon.json file and add an insecure registry, as sketched below. This is not supported by Codenotary.)
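
A minimal sketch of such a daemon.json entry, assuming your registry is reachable at registry.example.com:5000 (the Docker daemon has to be restarted afterwards, e.g. with sudo systemctl restart docker):

{
  "insecure-registries": ["registry.example.com:5000"]
}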


6 Configure Service Exposure


Opvizor provides two options to expose Prometheus; you have to choose one of them.

Use the cluster’s ingress: proceed with step 6.1

Use the cluster’s load balancer: proceed with step 6.2


6.1 Configure cluster ingress


Use

./configureIngress.sh <path_for_ingress> <cluster_ingress_fqdn> [<ip_address_of_cluster_ingress>]


(without line breaks) to configure the deployment values.


<path_for_ingress> is the path you want to use to access Prometheus. e.g.

/mon/prometheus/ (don’t forget the leading and trailing slash!)


<cluster_ingress_fqdn> is the FQDN of your cluster’s ingress. e.g.

clusteringress.example.com


[<ip_address_of_cluster_ingress>] is optional, e.g. 198.51.100.1.

Only use this when your cluster_ingress_fqdn is not resolvable by DNS, so you should start without an IP given. In case the cluster_ingress_fqdn is not resolvable, you will get an error message from the script; in that case, you can repeat the command with the IP address added.
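
Putting the example values together, a typical invocation looks like this:

./configureIngress.sh /mon/prometheus/ clusteringress.example.com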

6.2 Configure load balancer


Use

./configureLoadBalancer.sh <port>

<port> is the TCP port of your cluster’s load balancer to expose the service, e.g. 9090.
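
Using the example port, the call would be:

./configureLoadBalancer.sh 9090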


7 Optional: customize deployment

You can edit the generated file prometheus_values.yaml, e.g. if you want to

  • use node ports

  • change the storage specification and retention


Please refer to the comments in the file for details.
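
For illustration: prometheus_values.yaml follows the conventions of the kube-prometheus-stack Helm chart (see the keys used in steps 3.2 and 3.3), so a retention change could look like the following. The key path is an assumption based on that chart; verify it against the comments in the file:

yq -i ".prometheus.prometheusSpec.retention = \"30d\"" prometheus_values.yaml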


8 Prometheus deployment


Run the command for deployment:

./deployPrometheus.sh


9 Configure Grafana Datasource


Use

./configureGrafana.sh <DatasourceName>

<DatasourceName> is the name for the datasource in the Opvizor appliance, e.g. localK3s.
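
Using the example name, the call would be:

./configureGrafana.sh localK3s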


10 Optional: Log & Kubernetes Events collector


10.1 Configure Log & Kubernetes Events collector


Use

./configureLoki.sh <urlForLoki> [<basicAuthUser> <basicAuthPassword>]

<urlForLoki> is the URL for the endpoint on the Opvizor appliance, e.g. http://opvizor_appliance:3100/loki/api/v1/push.


<basicAuthUser> and <basicAuthPassword> are optional, but if one is given, the other has to be given too. The Opvizor appliance itself does no authentication, but if you have, e.g., a reverse proxy in front of the appliance, these values can be used for authentication.
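
Using the example URL, a plain call, plus a variant with hypothetical basic-auth values for a reverse proxy:

./configureLoki.sh http://opvizor_appliance:3100/loki/api/v1/push

./configureLoki.sh https://proxy.example.com/loki/api/v1/push lokiuser "S3cretPassword"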


10.2 Deploy Log & Kubernetes Events collector


Run the command for deployment:

./deployLoki.sh


11 Troubleshooting


11.1 Cluster List is empty


If the Cluster List in Opvizor Metrics and Logs is empty, but no errors were reported in step 8, try reloading the page (F5).


11.2 Test your setup


When using ingress, point your browser to the configured ingress and path, e.g.

https://clusteringress.example.com/mon/prometheus/ 

When using a load balancer, find the EXTERNAL-IP of your service with the following command:

kubectl get services -n opakube prometheus-kube-prometheus-prometheus


You should receive a service entry indicating the service type and the IP address assigned.


So your URL would be http://<ip-address>:<port>/ (the port you configured in step 6.2).


In your browser, you should see the Prometheus web UI.


11.3 List of running pods


kubectl get pods -n opakube


Check the status of each pod. Each pod should be READY and in state Running, ideally with 0 restarts, similar to the illustrative output below.
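
Illustrative output (pod names, counts and ages depend on your configuration):

NAME                                                    READY   STATUS    RESTARTS   AGE
prometheus-kube-prometheus-operator-6b7c9c8f6d-x2x7q    1/1     Running   0          6m
prometheus-prometheus-kube-prometheus-prometheus-0      2/2     Running   0          5m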

