= {template}
:toc:
:toc-placement!:
:toclevels: 5
{description}
toc::[]
{#parameters}
== Parameters
Templates allow you to define parameters, each of which takes on a value. That value is then substituted wherever the parameter is referenced.
References can be defined in any text field in the template's `objects` list. Refer to the
https://docs.okd.io/latest/architecture/core_concepts/templates.html#parameters[OKD documentation] for more information.
|=======================================================================
|Variable name |Image Environment Variable |Description |Example value |Required
{parametertable}
|=======================================================================
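As an illustrative sketch (the parameter name, default value, and object shown are placeholders, not necessarily values defined by this template), a parameter is declared in the template's `parameters` list and referenced with `${...}` syntax:

.Illustrative parameter definition and reference
[source,yaml]
----
# Illustrative only: APPLICATION_NAME is a placeholder parameter.
parameters:
- name: APPLICATION_NAME
  description: The name for the application.
  value: eap-app
  required: true
objects:
- kind: Service
  apiVersion: v1
  metadata:
    name: ${APPLICATION_NAME}  # replaced with the parameter value on instantiation
----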
{/parameters}
{#objects}
== Objects
The CLI supports various object types. A list of these object types as well as their abbreviations
can be found in the https://docs.okd.io/latest/cli_reference/basic_cli_operations.html#object-types[OKD documentation].
{#Service}
=== Services
A service is an abstraction which defines a logical set of pods and a policy by which to access them. Refer to the
https://docs.okd.io/latest/architecture/core_concepts/pods_and_services.html#services[OKD documentation] for more information.
|=============
|Service |Port |Name | Description
{table}
|=============
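As a minimal sketch (the service name, port, and selector below are placeholders, not values required by this template), a service selecting the application's pods might be declared as follows:

.Illustrative service definition
[source,yaml]
----
# Illustrative only: names and ports are placeholders.
kind: Service
apiVersion: v1
metadata:
  name: eap-app
spec:
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    deploymentConfig: eap-app
----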
{/Service}
{#Route}
=== Routes
A route is a way to expose a service by giving it an externally-reachable hostname such as `www.example.com`. A defined route and the endpoints
identified by its service can be consumed by a router to provide named connectivity from external clients to your applications. Each route consists
of a route name, service selector, and (optionally) security configuration. Refer to the
https://docs.okd.io/latest/dev_guide/routes.html[OKD documentation] for more information.
|=============
| Service | Security | Hostname
{table}
|=============
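For illustration (the hostname, service name, and TLS settings are placeholders, not values required by this template), a route exposing such a service could look like:

.Illustrative route definition
[source,yaml]
----
# Illustrative only: host and service name are placeholders.
kind: Route
apiVersion: v1
metadata:
  name: eap-app
spec:
  host: www.example.com
  to:
    kind: Service
    name: eap-app
  tls:
    termination: edge
----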
{/Route}
{#BuildConfig}
=== Build Configurations
A `buildConfig` describes a single build definition and a set of triggers for when a new build should be created.
A `buildConfig` is a REST object, which can be used in a POST to the API server to create a new instance. Refer to
the https://docs.okd.io/latest/dev_guide/builds/index.html[OKD documentation]
for more information.
|=============
| S2I image | link | Build output | BuildTriggers and Settings
{table}
|=============
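As a hedged sketch (the Git repository, image stream names, and output tag are placeholders, not the values used by this template), an S2I build configuration with image- and config-change triggers could be structured as follows:

.Illustrative build configuration
[source,yaml]
----
# Illustrative only: repository and image stream names are placeholders.
kind: BuildConfig
apiVersion: v1
metadata:
  name: eap-app
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/app.git
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: jboss-eap70-openshift:latest
  output:
    to:
      kind: ImageStreamTag
      name: eap-app:latest
  triggers:
  - type: ImageChange
    imageChange: {}
  - type: ConfigChange
----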
{/BuildConfig}
=== Deployment Configurations
A deployment in OpenShift is a replication controller based on a user-defined template called a deployment configuration. Deployments are created manually or in response to triggered events.
Refer to the https://docs.okd.io/latest/dev_guide/deployments/how_deployments_work.html#creating-a-deployment-configuration[OKD documentation] for more information.
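As a minimal sketch (names and the replica count are placeholders; triggers, replicas, and the pod template are described in the subsections below), a deployment configuration has roughly the following shape:

.Illustrative deployment configuration skeleton
[source,yaml]
----
# Illustrative only: names and counts are placeholders.
kind: DeploymentConfig
apiVersion: v1
metadata:
  name: eap-app
spec:
  replicas: 1
  selector:
    deploymentConfig: eap-app
  template:
    metadata:
      labels:
        deploymentConfig: eap-app
    spec:
      containers:
      - name: eap-app
        image: eap-app
----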
{#triggers}
==== Triggers
A trigger drives the creation of new deployments in response to events, both inside and outside OpenShift. Refer to the
https://docs.okd.io/latest/dev_guide/deployments/basic_deployment_operations.html#triggers[OKD documentation] for more information.
|============
|Deployment | Triggers
{table}
|============
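For illustration (the container and image stream names are placeholders, not values required by this template), image-change and config-change triggers on a deployment configuration might be declared as:

.Illustrative deployment triggers
[source,yaml]
----
# Illustrative only: container and image stream names are placeholders.
triggers:
- type: ImageChange
  imageChangeParams:
    automatic: true
    containerNames:
    - eap-app
    from:
      kind: ImageStreamTag
      name: eap-app:latest
- type: ConfigChange
----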
{/triggers}
{#replicas}
==== Replicas
A replication controller ensures that a specified number of pod "replicas" are running at any one time.
If there are too many, the replication controller kills some pods. If there are too few, it starts more.
Refer to the https://docs.okd.io/latest/dev_guide/deployments/basic_deployment_operations.html#scaling[OKD documentation]
for more information.
|============
|Deployment | Replicas
{table}
|============
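As an illustration (the deployment configuration name is a placeholder), the replica count can also be changed from the command line:
....
oc scale dc/eap-app --replicas=3
....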
{/replicas}
==== Pod Template
{#serviceAccountName}
===== Service Accounts
Service accounts are API objects that exist within each project. They can be created or deleted like any other API object. Refer to the
https://docs.okd.io/latest/dev_guide/service_accounts.html#dev-managing-service-accounts[OKD documentation] for more
information.
|============
|Deployment | Service Account
{table}
|============
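For illustration (the account name is a placeholder), a service account can be created from the command line:
....
oc create serviceaccount eap-service-account
....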
{/serviceAccountName}
{#image}
===== Image
|============
|Deployment | Image
{table}
|============
{/image}
{#readinessProbe}
===== Readiness Probe
{table}
{/readinessProbe}
{#ports}
===== Exposed Ports
|=============
|Deployments | Name | Port | Protocol
{table}
|=============
{/ports}
{#env}
===== Image Environment Variables
|=======================================================================
|Deployment |Variable name |Description |Example value
{table}
|=======================================================================
{/env}
{#volumes}
===== Volumes
|=============
|Deployment |Name | mountPath | Purpose | readOnly
{table}
|=============
{/volumes}
=== External Dependencies
{#PersistentVolumeClaim}
==== Volume Claims
A `PersistentVolume` object is a storage resource in an OpenShift cluster. Storage is provisioned by an administrator
by creating `PersistentVolume` objects from sources such as GCE Persistent Disks, AWS Elastic Block Store (EBS) volumes, and NFS mounts.
Refer to the https://docs.okd.io/latest/dev_guide/persistent_volumes.html#overview[OKD documentation] for
more information.
|=============
|Name | Access Mode
{table}
|=============
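As a minimal sketch (the claim name, access mode, and size are placeholders, not values required by this template), a persistent volume claim could be declared as:

.Illustrative persistent volume claim
[source,yaml]
----
# Illustrative only: name, access mode, and size are placeholders.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: eap-app-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
----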
{/PersistentVolumeClaim}
{#secrets}
==== Secrets
This template requires link:../secrets/{secretName}.adoc[{secretFile}]
to be installed for the application to run.
{/secrets}
{#clustering}
[[clustering]]
==== Clustering
Clustering in OpenShift EAP is achieved through one of two discovery mechanisms:
Kubernetes or DNS. This is done by configuring the JGroups protocol stack in
standalone-openshift.xml with either the `<openshift.KUBE_PING/>` or `<openshift.DNS_PING/>`
elements. The templates are configured to use `DNS_PING`; however, `KUBE_PING` is
the default used by the image.
The discovery mechanism used is specified by the `JGROUPS_PING_PROTOCOL` environment
variable which can be set to either `openshift.DNS_PING` or `openshift.KUBE_PING`.
`openshift.KUBE_PING` is the default used by the image if no value is specified
for `JGROUPS_PING_PROTOCOL`.
For `DNS_PING` to work, the following steps must be taken:
. The `OPENSHIFT_DNS_PING_SERVICE_NAME` environment variable must be set to the
name of the ping service for the cluster (see table above). If not set, the
server will act as if it is a single-node cluster (a "cluster of one").
. The `OPENSHIFT_DNS_PING_SERVICE_PORT` environment variable should be set to
the port number on which the ping service is exposed (see table above). The
`DNS_PING` protocol will attempt to discern the port from the SRV records if
it can; otherwise, it will default to 8888.
. A ping service which exposes the ping port must be defined. This service
should be "headless" (ClusterIP=None) and must have the following:
.. The port must be named for port discovery to work.
.. It must be annotated with `service.alpha.kubernetes.io/tolerate-unready-endpoints`
set to `"true"`. Omitting this annotation will result in each node forming
its own "cluster of one" during startup, then merging its cluster into
the other nodes' clusters after startup (as the other nodes are not detected
until after they have started).
.Example ping service for use with DNS_PING
[source,yaml]
----
kind: Service
apiVersion: v1
spec:
  clusterIP: None
  ports:
  - name: ping
    port: 8888
  selector:
    deploymentConfig: eap-app
metadata:
  name: eap-app-ping
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
    description: "The JGroups ping port for clustering."
----
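A hedged sketch of how the corresponding environment variables might be set on the application container follows; the service name and port echo the example above and are illustrative, not values required by this template:

.Illustrative DNS_PING environment settings
[source,yaml]
----
# Illustrative only: values echo the ping service example above.
env:
- name: JGROUPS_PING_PROTOCOL
  value: openshift.DNS_PING
- name: OPENSHIFT_DNS_PING_SERVICE_NAME
  value: eap-app-ping
- name: OPENSHIFT_DNS_PING_SERVICE_PORT
  value: "8888"
----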
For `KUBE_PING` to work, the following steps must be taken:
. The `OPENSHIFT_KUBE_PING_NAMESPACE` environment variable must be set (see table above).
If not set, the server will act as if it is a single-node cluster (a "cluster of one").
. The `OPENSHIFT_KUBE_PING_LABELS` environment variable should be set (see table above).
If not set, pods outside of your application (albeit in your namespace) will try to join.
. Authorization must be granted to the service account the pod is running under so that it is
allowed to access the Kubernetes REST API. This is done on the command line.
.Policy commands
====
Using the default service account in the myproject namespace:
....
oc policy add-role-to-user view system:serviceaccount:myproject:default -n myproject
....
Using the eap-service-account in the myproject namespace:
....
oc policy add-role-to-user view system:serviceaccount:myproject:eap-service-account -n myproject
....
====
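Similarly, a minimal sketch of the `KUBE_PING` environment settings (the namespace is obtained via the downward API; the label selector is illustrative, not a value required by this template):

.Illustrative KUBE_PING environment settings
[source,yaml]
----
# Illustrative only: the label selector is a placeholder.
env:
- name: JGROUPS_PING_PROTOCOL
  value: openshift.KUBE_PING
- name: OPENSHIFT_KUBE_PING_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: OPENSHIFT_KUBE_PING_LABELS
  value: application=eap-app
----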
{/clustering}
{/objects}