Automation to deploy the middleware and operations tools required by HDF.
You need to have Ansible and the Kubernetes module installed on your machine in order to run this playbook.
This deployment requires an OpenShift environment. You will also need an OpenShift user with cluster-admin
privileges.
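As a minimal sketch of the prerequisites (this assumes the playbook relies on the `kubernetes.core` Ansible collection and the Python `kubernetes` client, which is the usual setup for the Kubernetes modules), the tooling can be installed with:

```sh
# Install Ansible and the Python Kubernetes client used by the k8s modules
pip install ansible kubernetes

# Install the Ansible collection that provides the Kubernetes/OpenShift modules
ansible-galaxy collection install kubernetes.core
```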
> **Note:** This automation was tested specifically with OCP 4.10. Older or newer versions may have API incompatibilities or a different Operator Catalog that will break the automation.
Most of the components deployed use bare-minimum resources or do not enforce resource requirements. These can be adjusted by changing the parameters or templates used, as shown in the example after the table below.
Some components require RWO volumes to work properly. The default sizes are:
| Component | Default Value | Comment |
|---|---|---|
| Logging | 10Gi | |
| Kafka | 9Gi | 3Gi per replica |
| Zookeeper | 9Gi | 3Gi per replica |
| Tekton Shared Storage | 4Gi | |
| Total | 32Gi | |
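As an illustration only, storage sizes could be overridden with extra vars at run time. The variable names below (for example `kafka_storage_size`) and the playbook file name are hypothetical; check the role defaults in this repository for the real ones.

```sh
# Hypothetical variable and playbook names -- confirm against the role defaults
ansible-playbook playbook.yml \
  -e kafka_storage_size=5Gi \
  -e zookeeper_storage_size=5Gi
```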
The automation deploys and configures the following OpenShift operators and features:

- OpenShift User Workload Monitoring
- AMQ Streams Operator
- Grafana Operator
- OpenShift GitOps
- OpenShift Pipelines
- OpenShift Logging
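As a quick sanity check after the playbook finishes (assuming the default namespaces used by these operators and features), you can verify the installation with:

```sh
# List the installed operator ClusterServiceVersions across all namespaces
oc get csv -A

# Confirm that User Workload Monitoring pods are running
oc get pods -n openshift-user-workload-monitoring
```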
You can change the values of the variables defined in the roles and playbook to customize your deployment, but in order to access your OpenShift cluster you need to pass the following required values as command-line properties:
| Parameter | Example Value | Definition |
|---|---|---|
| token | sha256~vFanQbthlPKfsaldJT3bdLXIyEkd7ypO_XPygY1DNtQ | Access token of a user with cluster-admin privileges |
| server | | OpenShift cluster API URL |
| docker_config | vFanQbthlPKfsaldJT3bdLXIyEkd7ypO_XPygY1DNtQ | |
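As a sketch of how these values might be obtained and passed on the command line (the playbook file name `playbook.yml` is a placeholder for the repository's actual entry-point playbook, and `DOCKER_CONFIG` is assumed to hold whatever docker config value the automation expects):

```sh
# Obtain the token and API URL of the currently logged-in cluster-admin user
TOKEN=$(oc whoami --show-token)
SERVER=$(oc whoami --show-server)

# Run the playbook, passing the required values as extra vars
ansible-playbook playbook.yml \
  -e token="$TOKEN" \
  -e server="$SERVER" \
  -e docker_config="$DOCKER_CONFIG"
```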